Feb  2 04:00:42 np0005604790 kernel: Linux version 5.14.0-665.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-69.el9) #1 SMP PREEMPT_DYNAMIC Thu Jan 22 12:30:22 UTC 2026
Feb  2 04:00:42 np0005604790 kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Feb  2 04:00:42 np0005604790 kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64 root=UUID=822f14ea-6e7e-41df-b0d8-fbe282d9ded8 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Feb  2 04:00:42 np0005604790 kernel: BIOS-provided physical RAM map:
Feb  2 04:00:42 np0005604790 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb  2 04:00:42 np0005604790 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb  2 04:00:42 np0005604790 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb  2 04:00:42 np0005604790 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Feb  2 04:00:42 np0005604790 kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Feb  2 04:00:42 np0005604790 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb  2 04:00:42 np0005604790 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb  2 04:00:42 np0005604790 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
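
The three "usable" e820 ranges above account for essentially all guest RAM. A quick sanity check (values copied verbatim from the lines above) sums them and lands within a few KiB of the 8388068K total the kernel reports later in its "Memory:" line:

    # Sum the "usable" BIOS-e820 ranges printed above.
    # End addresses are inclusive, hence the +1.
    usable = [
        (0x0000000000000000, 0x000000000009fbff),
        (0x0000000000100000, 0x00000000bffdafff),
        (0x0000000100000000, 0x000000023fffffff),
    ]
    total = sum(end + 1 - start for start, end in usable)
    print(total, "bytes =", total // 1024, "KiB")  # 8589388800 bytes = 8388075 KiB, ~8 GiB
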
Feb  2 04:00:42 np0005604790 kernel: NX (Execute Disable) protection: active
Feb  2 04:00:42 np0005604790 kernel: APIC: Static calls initialized
Feb  2 04:00:42 np0005604790 kernel: SMBIOS 2.8 present.
Feb  2 04:00:42 np0005604790 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Feb  2 04:00:42 np0005604790 kernel: Hypervisor detected: KVM
Feb  2 04:00:42 np0005604790 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb  2 04:00:42 np0005604790 kernel: kvm-clock: using sched offset of 5187000370 cycles
Feb  2 04:00:42 np0005604790 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb  2 04:00:42 np0005604790 kernel: tsc: Detected 2800.000 MHz processor
Feb  2 04:00:42 np0005604790 kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Feb  2 04:00:42 np0005604790 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Feb  2 04:00:42 np0005604790 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Feb  2 04:00:42 np0005604790 kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Feb  2 04:00:42 np0005604790 kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Feb  2 04:00:42 np0005604790 kernel: Using GB pages for direct mapping
Feb  2 04:00:42 np0005604790 kernel: RAMDISK: [mem 0x2d410000-0x329fffff]
Feb  2 04:00:42 np0005604790 kernel: ACPI: Early table checksum verification disabled
Feb  2 04:00:42 np0005604790 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Feb  2 04:00:42 np0005604790 kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb  2 04:00:42 np0005604790 kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb  2 04:00:42 np0005604790 kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb  2 04:00:42 np0005604790 kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Feb  2 04:00:42 np0005604790 kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb  2 04:00:42 np0005604790 kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb  2 04:00:42 np0005604790 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Feb  2 04:00:42 np0005604790 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Feb  2 04:00:42 np0005604790 kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Feb  2 04:00:42 np0005604790 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Feb  2 04:00:42 np0005604790 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
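
Each "Reserving ... table memory" range follows directly from the table listing a few lines earlier: the second hex field after the signature is the table length, and the reserved range is simply base through base + length - 1. A check with the values copied from above:

    # ACPI table reservations: end = base + length - 1.
    tables = {
        "FACP": (0xBFFE1571, 0x000074),
        "DSDT": (0xBFFDFC80, 0x0018F1),
        "FACS": (0xBFFDFC40, 0x000040),
        "APIC": (0xBFFE15E5, 0x0000B0),
        "WAET": (0xBFFE1695, 0x000028),
    }
    for name, (base, length) in tables.items():
        print(f"{name}: [mem {base:#010x}-{base + length - 1:#010x}]")
    # e.g. DSDT: [mem 0xbffdfc80-0xbffe1570], matching the reservation line above.
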
Feb  2 04:00:42 np0005604790 kernel: No NUMA configuration found
Feb  2 04:00:42 np0005604790 kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Feb  2 04:00:42 np0005604790 kernel: NODE_DATA(0) allocated [mem 0x23ffd3000-0x23fffdfff]
Feb  2 04:00:42 np0005604790 kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
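
The 256 MB figure follows from the crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M parameter on the command line: each comma-separated entry maps a system-RAM range to a reservation size, and this guest's ~8 GiB falls in the 2G-64G bucket. A minimal sketch of that selection logic (a simplified parser, not the kernel's actual implementation):

    # Pick the crashkernel reservation for a given amount of system RAM.
    # Simplified: assumes well-formed "start-end:size" entries with G/M units.
    UNITS = {"G": 1 << 30, "M": 1 << 20}

    def parse_size(s):
        return int(s[:-1]) * UNITS[s[-1]] if s else None  # "" = open-ended range

    def crashkernel_size(spec, ram_bytes):
        for entry in spec.split(","):
            rng, size = entry.split(":")
            start, end = (parse_size(x) for x in rng.split("-"))
            if ram_bytes >= start and (end is None or ram_bytes < end):
                return parse_size(size)
        return 0

    ram = 8 * (1 << 30)  # ~8 GiB guest
    print(crashkernel_size("1G-2G:192M,2G-64G:256M,64G-:512M", ram) >> 20, "MB")  # 256 MB
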
Feb  2 04:00:42 np0005604790 kernel: Zone ranges:
Feb  2 04:00:42 np0005604790 kernel:  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Feb  2 04:00:42 np0005604790 kernel:  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Feb  2 04:00:42 np0005604790 kernel:  Normal   [mem 0x0000000100000000-0x000000023fffffff]
Feb  2 04:00:42 np0005604790 kernel:  Device   empty
Feb  2 04:00:42 np0005604790 kernel: Movable zone start for each node
Feb  2 04:00:42 np0005604790 kernel: Early memory node ranges
Feb  2 04:00:42 np0005604790 kernel:  node   0: [mem 0x0000000000001000-0x000000000009efff]
Feb  2 04:00:42 np0005604790 kernel:  node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Feb  2 04:00:42 np0005604790 kernel:  node   0: [mem 0x0000000100000000-0x000000023fffffff]
Feb  2 04:00:42 np0005604790 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Feb  2 04:00:42 np0005604790 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb  2 04:00:42 np0005604790 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb  2 04:00:42 np0005604790 kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Feb  2 04:00:42 np0005604790 kernel: ACPI: PM-Timer IO Port: 0x608
Feb  2 04:00:42 np0005604790 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb  2 04:00:42 np0005604790 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb  2 04:00:42 np0005604790 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb  2 04:00:42 np0005604790 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb  2 04:00:42 np0005604790 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb  2 04:00:42 np0005604790 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb  2 04:00:42 np0005604790 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb  2 04:00:42 np0005604790 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb  2 04:00:42 np0005604790 kernel: TSC deadline timer available
Feb  2 04:00:42 np0005604790 kernel: CPU topo: Max. logical packages:   8
Feb  2 04:00:42 np0005604790 kernel: CPU topo: Max. logical dies:       8
Feb  2 04:00:42 np0005604790 kernel: CPU topo: Max. dies per package:   1
Feb  2 04:00:42 np0005604790 kernel: CPU topo: Max. threads per core:   1
Feb  2 04:00:42 np0005604790 kernel: CPU topo: Num. cores per package:     1
Feb  2 04:00:42 np0005604790 kernel: CPU topo: Num. threads per package:   1
Feb  2 04:00:42 np0005604790 kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Feb  2 04:00:42 np0005604790 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Feb  2 04:00:42 np0005604790 kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Feb  2 04:00:42 np0005604790 kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Feb  2 04:00:42 np0005604790 kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Feb  2 04:00:42 np0005604790 kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Feb  2 04:00:42 np0005604790 kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Feb  2 04:00:42 np0005604790 kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Feb  2 04:00:42 np0005604790 kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Feb  2 04:00:42 np0005604790 kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Feb  2 04:00:42 np0005604790 kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Feb  2 04:00:42 np0005604790 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Feb  2 04:00:42 np0005604790 kernel: Booting paravirtualized kernel on KVM
Feb  2 04:00:42 np0005604790 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb  2 04:00:42 np0005604790 kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Feb  2 04:00:42 np0005604790 kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Feb  2 04:00:42 np0005604790 kernel: kvm-guest: PV spinlocks disabled, no host support
Feb  2 04:00:42 np0005604790 kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64 root=UUID=822f14ea-6e7e-41df-b0d8-fbe282d9ded8 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Feb  2 04:00:42 np0005604790 kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64", will be passed to user space.
Feb  2 04:00:42 np0005604790 kernel: random: crng init done
Feb  2 04:00:42 np0005604790 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Feb  2 04:00:42 np0005604790 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
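
The "order" in these hash-table lines is the power-of-two count of 4-KiB pages in the allocation: order 11 is 2^11 pages = 8 MiB, which at 8 bytes per bucket head (one pointer on x86-64) gives exactly the 1048576 dentry entries reported. A quick check of both lines:

    PAGE = 4096
    for name, order in [("dentry", 11), ("inode", 10)]:
        size = (1 << order) * PAGE
        print(name, size, "bytes,", size // 8, "entries")
    # dentry 8388608 bytes, 1048576 entries; inode 4194304 bytes, 524288 entries
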
Feb  2 04:00:42 np0005604790 kernel: Fallback order for Node 0: 0 
Feb  2 04:00:42 np0005604790 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Feb  2 04:00:42 np0005604790 kernel: Policy zone: Normal
Feb  2 04:00:42 np0005604790 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb  2 04:00:42 np0005604790 kernel: software IO TLB: area num 8.
Feb  2 04:00:42 np0005604790 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Feb  2 04:00:42 np0005604790 kernel: ftrace: allocating 49438 entries in 194 pages
Feb  2 04:00:42 np0005604790 kernel: ftrace: allocated 194 pages with 3 groups
Feb  2 04:00:42 np0005604790 kernel: Dynamic Preempt: voluntary
Feb  2 04:00:42 np0005604790 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb  2 04:00:42 np0005604790 kernel: rcu: 	RCU event tracing is enabled.
Feb  2 04:00:42 np0005604790 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Feb  2 04:00:42 np0005604790 kernel: 	Trampoline variant of Tasks RCU enabled.
Feb  2 04:00:42 np0005604790 kernel: 	Rude variant of Tasks RCU enabled.
Feb  2 04:00:42 np0005604790 kernel: 	Tracing variant of Tasks RCU enabled.
Feb  2 04:00:42 np0005604790 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb  2 04:00:42 np0005604790 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Feb  2 04:00:42 np0005604790 kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Feb  2 04:00:42 np0005604790 kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Feb  2 04:00:42 np0005604790 kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Feb  2 04:00:42 np0005604790 kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Feb  2 04:00:42 np0005604790 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb  2 04:00:42 np0005604790 kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Feb  2 04:00:42 np0005604790 kernel: Console: colour VGA+ 80x25
Feb  2 04:00:42 np0005604790 kernel: printk: console [ttyS0] enabled
Feb  2 04:00:42 np0005604790 kernel: ACPI: Core revision 20230331
Feb  2 04:00:42 np0005604790 kernel: APIC: Switch to symmetric I/O mode setup
Feb  2 04:00:42 np0005604790 kernel: x2apic enabled
Feb  2 04:00:42 np0005604790 kernel: APIC: Switched APIC routing to: physical x2apic
Feb  2 04:00:42 np0005604790 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Feb  2 04:00:42 np0005604790 kernel: Calibrating delay loop (skipped) preset value.. 5600.00 BogoMIPS (lpj=2800000)
Feb  2 04:00:42 np0005604790 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Feb  2 04:00:42 np0005604790 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Feb  2 04:00:42 np0005604790 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Feb  2 04:00:42 np0005604790 kernel: mitigations: Enabled attack vectors: user_kernel, user_user, guest_host, guest_guest, SMT mitigations: auto
Feb  2 04:00:42 np0005604790 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Feb  2 04:00:42 np0005604790 kernel: Spectre V2 : Mitigation: Retpolines
Feb  2 04:00:42 np0005604790 kernel: RETBleed: Mitigation: untrained return thunk
Feb  2 04:00:42 np0005604790 kernel: Speculative Return Stack Overflow: Mitigation: SMT disabled
Feb  2 04:00:42 np0005604790 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb  2 04:00:42 np0005604790 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Feb  2 04:00:42 np0005604790 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Feb  2 04:00:42 np0005604790 kernel: active return thunk: retbleed_return_thunk
Feb  2 04:00:42 np0005604790 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb  2 04:00:42 np0005604790 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb  2 04:00:42 np0005604790 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb  2 04:00:42 np0005604790 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb  2 04:00:42 np0005604790 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Feb  2 04:00:42 np0005604790 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
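
The 832-byte compacted context size is consistent with the offsets just printed: the legacy x87/SSE area plus XSAVE header occupy the first 576 bytes, and the AVX component (xstate 2) adds its 256 bytes immediately after:

    # Compacted XSAVE layout: xstate_offset[2] + xstate_sizes[2]
    print(576 + 256)  # 832 bytes, as reported above
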
Feb  2 04:00:42 np0005604790 kernel: Freeing SMP alternatives memory: 40K
Feb  2 04:00:42 np0005604790 kernel: pid_max: default: 32768 minimum: 301
Feb  2 04:00:42 np0005604790 kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Feb  2 04:00:42 np0005604790 kernel: landlock: Up and running.
Feb  2 04:00:42 np0005604790 kernel: Yama: becoming mindful.
Feb  2 04:00:42 np0005604790 kernel: SELinux:  Initializing.
Feb  2 04:00:42 np0005604790 kernel: LSM support for eBPF active
Feb  2 04:00:42 np0005604790 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb  2 04:00:42 np0005604790 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb  2 04:00:42 np0005604790 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Feb  2 04:00:42 np0005604790 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Feb  2 04:00:42 np0005604790 kernel: ... version:                0
Feb  2 04:00:42 np0005604790 kernel: ... bit width:              48
Feb  2 04:00:42 np0005604790 kernel: ... generic registers:      6
Feb  2 04:00:42 np0005604790 kernel: ... value mask:             0000ffffffffffff
Feb  2 04:00:42 np0005604790 kernel: ... max period:             00007fffffffffff
Feb  2 04:00:42 np0005604790 kernel: ... fixed-purpose events:   0
Feb  2 04:00:42 np0005604790 kernel: ... event mask:             000000000000003f
Feb  2 04:00:42 np0005604790 kernel: signal: max sigframe size: 1776
Feb  2 04:00:42 np0005604790 kernel: rcu: Hierarchical SRCU implementation.
Feb  2 04:00:42 np0005604790 kernel: rcu: 	Max phase no-delay instances is 400.
Feb  2 04:00:42 np0005604790 kernel: smp: Bringing up secondary CPUs ...
Feb  2 04:00:42 np0005604790 kernel: smpboot: x86: Booting SMP configuration:
Feb  2 04:00:42 np0005604790 kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Feb  2 04:00:42 np0005604790 kernel: smp: Brought up 1 node, 8 CPUs
Feb  2 04:00:42 np0005604790 kernel: smpboot: Total of 8 processors activated (44800.00 BogoMIPS)
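
The totals tie back to the calibration line above: with lpj=2800000 and a 1000 Hz tick (CONFIG_HZ=1000, the usual RHEL 9 setting, assumed here), BogoMIPS = lpj * HZ / 500000 = 5600 per CPU, and 8 CPUs give 44800:

    lpj, HZ, cpus = 2_800_000, 1000, 8   # HZ=1000 is an assumption (RHEL 9 default)
    bogomips = lpj * HZ / 500_000        # loops-per-jiffy to BogoMIPS conversion
    print(bogomips, cpus * bogomips)     # 5600.0 44800.0
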
Feb  2 04:00:42 np0005604790 kernel: node 0 deferred pages initialised in 8ms
Feb  2 04:00:42 np0005604790 kernel: Memory: 7763720K/8388068K available (16384K kernel code, 5801K rwdata, 13928K rodata, 4196K init, 7192K bss, 618408K reserved, 0K cma-reserved)
Feb  2 04:00:42 np0005604790 kernel: devtmpfs: initialized
Feb  2 04:00:42 np0005604790 kernel: x86/mm: Memory block size: 128MB
Feb  2 04:00:42 np0005604790 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb  2 04:00:42 np0005604790 kernel: futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
Feb  2 04:00:42 np0005604790 kernel: pinctrl core: initialized pinctrl subsystem
Feb  2 04:00:42 np0005604790 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb  2 04:00:42 np0005604790 kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Feb  2 04:00:42 np0005604790 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb  2 04:00:42 np0005604790 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb  2 04:00:42 np0005604790 kernel: audit: initializing netlink subsys (disabled)
Feb  2 04:00:42 np0005604790 kernel: audit: type=2000 audit(1770022840.838:1): state=initialized audit_enabled=0 res=1
Feb  2 04:00:42 np0005604790 kernel: thermal_sys: Registered thermal governor 'fair_share'
Feb  2 04:00:42 np0005604790 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb  2 04:00:42 np0005604790 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb  2 04:00:42 np0005604790 kernel: cpuidle: using governor menu
Feb  2 04:00:42 np0005604790 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb  2 04:00:42 np0005604790 kernel: PCI: Using configuration type 1 for base access
Feb  2 04:00:42 np0005604790 kernel: PCI: Using configuration type 1 for extended access
Feb  2 04:00:42 np0005604790 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb  2 04:00:42 np0005604790 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb  2 04:00:42 np0005604790 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Feb  2 04:00:42 np0005604790 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb  2 04:00:42 np0005604790 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb  2 04:00:42 np0005604790 kernel: Demotion targets for Node 0: null
Feb  2 04:00:42 np0005604790 kernel: cryptd: max_cpu_qlen set to 1000
Feb  2 04:00:42 np0005604790 kernel: ACPI: Added _OSI(Module Device)
Feb  2 04:00:42 np0005604790 kernel: ACPI: Added _OSI(Processor Device)
Feb  2 04:00:42 np0005604790 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb  2 04:00:42 np0005604790 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb  2 04:00:42 np0005604790 kernel: ACPI: Interpreter enabled
Feb  2 04:00:42 np0005604790 kernel: ACPI: PM: (supports S0 S3 S4 S5)
Feb  2 04:00:42 np0005604790 kernel: ACPI: Using IOAPIC for interrupt routing
Feb  2 04:00:42 np0005604790 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb  2 04:00:42 np0005604790 kernel: PCI: Using E820 reservations for host bridge windows
Feb  2 04:00:42 np0005604790 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Feb  2 04:00:42 np0005604790 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb  2 04:00:42 np0005604790 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Feb  2 04:00:42 np0005604790 kernel: acpiphp: Slot [3] registered
Feb  2 04:00:42 np0005604790 kernel: acpiphp: Slot [4] registered
Feb  2 04:00:42 np0005604790 kernel: acpiphp: Slot [5] registered
Feb  2 04:00:42 np0005604790 kernel: acpiphp: Slot [6] registered
Feb  2 04:00:42 np0005604790 kernel: acpiphp: Slot [7] registered
Feb  2 04:00:42 np0005604790 kernel: acpiphp: Slot [8] registered
Feb  2 04:00:42 np0005604790 kernel: acpiphp: Slot [9] registered
Feb  2 04:00:42 np0005604790 kernel: acpiphp: Slot [10] registered
Feb  2 04:00:42 np0005604790 kernel: acpiphp: Slot [11] registered
Feb  2 04:00:42 np0005604790 kernel: acpiphp: Slot [12] registered
Feb  2 04:00:42 np0005604790 kernel: acpiphp: Slot [13] registered
Feb  2 04:00:42 np0005604790 kernel: acpiphp: Slot [14] registered
Feb  2 04:00:42 np0005604790 kernel: acpiphp: Slot [15] registered
Feb  2 04:00:42 np0005604790 kernel: acpiphp: Slot [16] registered
Feb  2 04:00:42 np0005604790 kernel: acpiphp: Slot [17] registered
Feb  2 04:00:42 np0005604790 kernel: acpiphp: Slot [18] registered
Feb  2 04:00:42 np0005604790 kernel: acpiphp: Slot [19] registered
Feb  2 04:00:42 np0005604790 kernel: acpiphp: Slot [20] registered
Feb  2 04:00:42 np0005604790 kernel: acpiphp: Slot [21] registered
Feb  2 04:00:42 np0005604790 kernel: acpiphp: Slot [22] registered
Feb  2 04:00:42 np0005604790 kernel: acpiphp: Slot [23] registered
Feb  2 04:00:42 np0005604790 kernel: acpiphp: Slot [24] registered
Feb  2 04:00:42 np0005604790 kernel: acpiphp: Slot [25] registered
Feb  2 04:00:42 np0005604790 kernel: acpiphp: Slot [26] registered
Feb  2 04:00:42 np0005604790 kernel: acpiphp: Slot [27] registered
Feb  2 04:00:42 np0005604790 kernel: acpiphp: Slot [28] registered
Feb  2 04:00:42 np0005604790 kernel: acpiphp: Slot [29] registered
Feb  2 04:00:42 np0005604790 kernel: acpiphp: Slot [30] registered
Feb  2 04:00:42 np0005604790 kernel: acpiphp: Slot [31] registered
Feb  2 04:00:42 np0005604790 kernel: PCI host bridge to bus 0000:00
Feb  2 04:00:42 np0005604790 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Feb  2 04:00:42 np0005604790 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Feb  2 04:00:42 np0005604790 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb  2 04:00:42 np0005604790 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Feb  2 04:00:42 np0005604790 kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Feb  2 04:00:42 np0005604790 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb  2 04:00:42 np0005604790 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Feb  2 04:00:42 np0005604790 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Feb  2 04:00:42 np0005604790 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Feb  2 04:00:42 np0005604790 kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Feb  2 04:00:42 np0005604790 kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Feb  2 04:00:42 np0005604790 kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Feb  2 04:00:42 np0005604790 kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Feb  2 04:00:42 np0005604790 kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Feb  2 04:00:42 np0005604790 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Feb  2 04:00:42 np0005604790 kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Feb  2 04:00:42 np0005604790 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Feb  2 04:00:42 np0005604790 kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Feb  2 04:00:42 np0005604790 kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Feb  2 04:00:42 np0005604790 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Feb  2 04:00:42 np0005604790 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Feb  2 04:00:42 np0005604790 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Feb  2 04:00:42 np0005604790 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Feb  2 04:00:42 np0005604790 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Feb  2 04:00:42 np0005604790 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb  2 04:00:42 np0005604790 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Feb  2 04:00:42 np0005604790 kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Feb  2 04:00:42 np0005604790 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Feb  2 04:00:42 np0005604790 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Feb  2 04:00:42 np0005604790 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Feb  2 04:00:42 np0005604790 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Feb  2 04:00:42 np0005604790 kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Feb  2 04:00:42 np0005604790 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Feb  2 04:00:42 np0005604790 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Feb  2 04:00:42 np0005604790 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Feb  2 04:00:42 np0005604790 kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Feb  2 04:00:42 np0005604790 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Feb  2 04:00:42 np0005604790 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Feb  2 04:00:42 np0005604790 kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Feb  2 04:00:42 np0005604790 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Feb  2 04:00:42 np0005604790 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb  2 04:00:42 np0005604790 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb  2 04:00:42 np0005604790 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb  2 04:00:42 np0005604790 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb  2 04:00:42 np0005604790 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb  2 04:00:42 np0005604790 kernel: iommu: Default domain type: Translated
Feb  2 04:00:42 np0005604790 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb  2 04:00:42 np0005604790 kernel: SCSI subsystem initialized
Feb  2 04:00:42 np0005604790 kernel: ACPI: bus type USB registered
Feb  2 04:00:42 np0005604790 kernel: usbcore: registered new interface driver usbfs
Feb  2 04:00:42 np0005604790 kernel: usbcore: registered new interface driver hub
Feb  2 04:00:42 np0005604790 kernel: usbcore: registered new device driver usb
Feb  2 04:00:42 np0005604790 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb  2 04:00:42 np0005604790 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Feb  2 04:00:42 np0005604790 kernel: PTP clock support registered
Feb  2 04:00:42 np0005604790 kernel: EDAC MC: Ver: 3.0.0
Feb  2 04:00:42 np0005604790 kernel: NetLabel: Initializing
Feb  2 04:00:42 np0005604790 kernel: NetLabel:  domain hash size = 128
Feb  2 04:00:42 np0005604790 kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Feb  2 04:00:42 np0005604790 kernel: NetLabel:  unlabeled traffic allowed by default
Feb  2 04:00:42 np0005604790 kernel: PCI: Using ACPI for IRQ routing
Feb  2 04:00:42 np0005604790 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Feb  2 04:00:42 np0005604790 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Feb  2 04:00:42 np0005604790 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb  2 04:00:42 np0005604790 kernel: vgaarb: loaded
Feb  2 04:00:42 np0005604790 kernel: clocksource: Switched to clocksource kvm-clock
Feb  2 04:00:42 np0005604790 kernel: VFS: Disk quotas dquot_6.6.0
Feb  2 04:00:42 np0005604790 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb  2 04:00:42 np0005604790 kernel: pnp: PnP ACPI init
Feb  2 04:00:42 np0005604790 kernel: pnp: PnP ACPI: found 5 devices
Feb  2 04:00:42 np0005604790 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb  2 04:00:42 np0005604790 kernel: NET: Registered PF_INET protocol family
Feb  2 04:00:42 np0005604790 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb  2 04:00:42 np0005604790 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Feb  2 04:00:42 np0005604790 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb  2 04:00:42 np0005604790 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb  2 04:00:42 np0005604790 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Feb  2 04:00:42 np0005604790 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Feb  2 04:00:42 np0005604790 kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Feb  2 04:00:42 np0005604790 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb  2 04:00:42 np0005604790 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb  2 04:00:42 np0005604790 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb  2 04:00:42 np0005604790 kernel: NET: Registered PF_XDP protocol family
Feb  2 04:00:42 np0005604790 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Feb  2 04:00:42 np0005604790 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Feb  2 04:00:42 np0005604790 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb  2 04:00:42 np0005604790 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Feb  2 04:00:42 np0005604790 kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Feb  2 04:00:42 np0005604790 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Feb  2 04:00:42 np0005604790 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb  2 04:00:42 np0005604790 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Feb  2 04:00:42 np0005604790 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 22763 usecs
Feb  2 04:00:42 np0005604790 kernel: PCI: CLS 0 bytes, default 64
Feb  2 04:00:42 np0005604790 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Feb  2 04:00:42 np0005604790 kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
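
The bounce-buffer size checks out against the mapped range:

    # SWIOTLB window: 0xaf000000 - 0xab000000
    print((0xaf000000 - 0xab000000) // (1 << 20), "MiB")  # 64 MiB, as reported
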
Feb  2 04:00:42 np0005604790 kernel: ACPI: bus type thunderbolt registered
Feb  2 04:00:42 np0005604790 kernel: Trying to unpack rootfs image as initramfs...
Feb  2 04:00:42 np0005604790 kernel: Initialise system trusted keyrings
Feb  2 04:00:42 np0005604790 kernel: Key type blacklist registered
Feb  2 04:00:42 np0005604790 kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Feb  2 04:00:42 np0005604790 kernel: zbud: loaded
Feb  2 04:00:42 np0005604790 kernel: integrity: Platform Keyring initialized
Feb  2 04:00:42 np0005604790 kernel: integrity: Machine keyring initialized
Feb  2 04:00:42 np0005604790 kernel: Freeing initrd memory: 88000K
Feb  2 04:00:42 np0005604790 kernel: NET: Registered PF_ALG protocol family
Feb  2 04:00:42 np0005604790 kernel: xor: automatically using best checksumming function   avx       
Feb  2 04:00:42 np0005604790 kernel: Key type asymmetric registered
Feb  2 04:00:42 np0005604790 kernel: Asymmetric key parser 'x509' registered
Feb  2 04:00:42 np0005604790 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Feb  2 04:00:42 np0005604790 kernel: io scheduler mq-deadline registered
Feb  2 04:00:42 np0005604790 kernel: io scheduler kyber registered
Feb  2 04:00:42 np0005604790 kernel: io scheduler bfq registered
Feb  2 04:00:42 np0005604790 kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Feb  2 04:00:42 np0005604790 kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Feb  2 04:00:42 np0005604790 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Feb  2 04:00:42 np0005604790 kernel: ACPI: button: Power Button [PWRF]
Feb  2 04:00:42 np0005604790 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Feb  2 04:00:42 np0005604790 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Feb  2 04:00:42 np0005604790 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Feb  2 04:00:42 np0005604790 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb  2 04:00:42 np0005604790 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb  2 04:00:42 np0005604790 kernel: Non-volatile memory driver v1.3
Feb  2 04:00:42 np0005604790 kernel: rdac: device handler registered
Feb  2 04:00:42 np0005604790 kernel: hp_sw: device handler registered
Feb  2 04:00:42 np0005604790 kernel: emc: device handler registered
Feb  2 04:00:42 np0005604790 kernel: alua: device handler registered
Feb  2 04:00:42 np0005604790 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Feb  2 04:00:42 np0005604790 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Feb  2 04:00:42 np0005604790 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Feb  2 04:00:42 np0005604790 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Feb  2 04:00:42 np0005604790 kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Feb  2 04:00:42 np0005604790 kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Feb  2 04:00:42 np0005604790 kernel: usb usb1: Product: UHCI Host Controller
Feb  2 04:00:42 np0005604790 kernel: usb usb1: Manufacturer: Linux 5.14.0-665.el9.x86_64 uhci_hcd
Feb  2 04:00:42 np0005604790 kernel: usb usb1: SerialNumber: 0000:00:01.2
Feb  2 04:00:42 np0005604790 kernel: hub 1-0:1.0: USB hub found
Feb  2 04:00:42 np0005604790 kernel: hub 1-0:1.0: 2 ports detected
Feb  2 04:00:42 np0005604790 kernel: usbcore: registered new interface driver usbserial_generic
Feb  2 04:00:42 np0005604790 kernel: usbserial: USB Serial support registered for generic
Feb  2 04:00:42 np0005604790 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb  2 04:00:42 np0005604790 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb  2 04:00:42 np0005604790 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb  2 04:00:42 np0005604790 kernel: mousedev: PS/2 mouse device common for all mice
Feb  2 04:00:42 np0005604790 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Feb  2 04:00:42 np0005604790 kernel: rtc_cmos 00:04: RTC can wake from S4
Feb  2 04:00:42 np0005604790 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Feb  2 04:00:42 np0005604790 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Feb  2 04:00:42 np0005604790 kernel: rtc_cmos 00:04: registered as rtc0
Feb  2 04:00:42 np0005604790 kernel: rtc_cmos 00:04: setting system clock to 2026-02-02T09:00:41 UTC (1770022841)
Feb  2 04:00:42 np0005604790 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
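
The epoch value in parentheses converts to the UTC timestamp shown; the five-hour gap to the syslog timestamps (04:00 local vs 09:00 UTC) is just the host logging in UTC-5 local time. A check with Python's standard library:

    from datetime import datetime, timezone
    # Epoch seconds from the rtc_cmos line above.
    print(datetime.fromtimestamp(1770022841, tz=timezone.utc).isoformat())
    # -> 2026-02-02T09:00:41+00:00
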
Feb  2 04:00:42 np0005604790 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Feb  2 04:00:42 np0005604790 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb  2 04:00:42 np0005604790 kernel: usbcore: registered new interface driver usbhid
Feb  2 04:00:42 np0005604790 kernel: usbhid: USB HID core driver
Feb  2 04:00:42 np0005604790 kernel: drop_monitor: Initializing network drop monitor service
Feb  2 04:00:42 np0005604790 kernel: Initializing XFRM netlink socket
Feb  2 04:00:42 np0005604790 kernel: NET: Registered PF_INET6 protocol family
Feb  2 04:00:42 np0005604790 kernel: Segment Routing with IPv6
Feb  2 04:00:42 np0005604790 kernel: NET: Registered PF_PACKET protocol family
Feb  2 04:00:42 np0005604790 kernel: mpls_gso: MPLS GSO support
Feb  2 04:00:42 np0005604790 kernel: IPI shorthand broadcast: enabled
Feb  2 04:00:42 np0005604790 kernel: AVX2 version of gcm_enc/dec engaged.
Feb  2 04:00:42 np0005604790 kernel: AES CTR mode by8 optimization enabled
Feb  2 04:00:42 np0005604790 kernel: sched_clock: Marking stable (891003010, 147004240)->(1102410230, -64402980)
Feb  2 04:00:42 np0005604790 kernel: registered taskstats version 1
Feb  2 04:00:42 np0005604790 kernel: Loading compiled-in X.509 certificates
Feb  2 04:00:42 np0005604790 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8d408fd8f954b245ea1a4231fd25ac56c328a9b5'
Feb  2 04:00:42 np0005604790 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Feb  2 04:00:42 np0005604790 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Feb  2 04:00:42 np0005604790 kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Feb  2 04:00:42 np0005604790 kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Feb  2 04:00:42 np0005604790 kernel: Demotion targets for Node 0: null
Feb  2 04:00:42 np0005604790 kernel: page_owner is disabled
Feb  2 04:00:42 np0005604790 kernel: Key type .fscrypt registered
Feb  2 04:00:42 np0005604790 kernel: Key type fscrypt-provisioning registered
Feb  2 04:00:42 np0005604790 kernel: Key type big_key registered
Feb  2 04:00:42 np0005604790 kernel: Key type encrypted registered
Feb  2 04:00:42 np0005604790 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb  2 04:00:42 np0005604790 kernel: Loading compiled-in module X.509 certificates
Feb  2 04:00:42 np0005604790 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8d408fd8f954b245ea1a4231fd25ac56c328a9b5'
Feb  2 04:00:42 np0005604790 kernel: ima: Allocated hash algorithm: sha256
Feb  2 04:00:42 np0005604790 kernel: ima: No architecture policies found
Feb  2 04:00:42 np0005604790 kernel: evm: Initialising EVM extended attributes:
Feb  2 04:00:42 np0005604790 kernel: evm: security.selinux
Feb  2 04:00:42 np0005604790 kernel: evm: security.SMACK64 (disabled)
Feb  2 04:00:42 np0005604790 kernel: evm: security.SMACK64EXEC (disabled)
Feb  2 04:00:42 np0005604790 kernel: evm: security.SMACK64TRANSMUTE (disabled)
Feb  2 04:00:42 np0005604790 kernel: evm: security.SMACK64MMAP (disabled)
Feb  2 04:00:42 np0005604790 kernel: evm: security.apparmor (disabled)
Feb  2 04:00:42 np0005604790 kernel: evm: security.ima
Feb  2 04:00:42 np0005604790 kernel: evm: security.capability
Feb  2 04:00:42 np0005604790 kernel: evm: HMAC attrs: 0x1
Feb  2 04:00:42 np0005604790 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Feb  2 04:00:42 np0005604790 kernel: Running certificate verification RSA selftest
Feb  2 04:00:42 np0005604790 kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Feb  2 04:00:42 np0005604790 kernel: Running certificate verification ECDSA selftest
Feb  2 04:00:42 np0005604790 kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Feb  2 04:00:42 np0005604790 kernel: clk: Disabling unused clocks
Feb  2 04:00:42 np0005604790 kernel: Freeing unused decrypted memory: 2028K
Feb  2 04:00:42 np0005604790 kernel: Freeing unused kernel image (initmem) memory: 4196K
Feb  2 04:00:42 np0005604790 kernel: Write protecting the kernel read-only data: 30720k
Feb  2 04:00:42 np0005604790 kernel: Freeing unused kernel image (rodata/data gap) memory: 408K
Feb  2 04:00:42 np0005604790 kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Feb  2 04:00:42 np0005604790 kernel: Run /init as init process
Feb  2 04:00:42 np0005604790 systemd: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb  2 04:00:42 np0005604790 systemd: Detected virtualization kvm.
Feb  2 04:00:42 np0005604790 systemd: Detected architecture x86-64.
Feb  2 04:00:42 np0005604790 systemd: Running in initrd.
Feb  2 04:00:42 np0005604790 systemd: No hostname configured, using default hostname.
Feb  2 04:00:42 np0005604790 systemd: Hostname set to <localhost>.
Feb  2 04:00:42 np0005604790 systemd: Initializing machine ID from VM UUID.
Feb  2 04:00:42 np0005604790 kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Feb  2 04:00:42 np0005604790 kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Feb  2 04:00:42 np0005604790 kernel: usb 1-1: Product: QEMU USB Tablet
Feb  2 04:00:42 np0005604790 kernel: usb 1-1: Manufacturer: QEMU
Feb  2 04:00:42 np0005604790 kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Feb  2 04:00:42 np0005604790 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Feb  2 04:00:42 np0005604790 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Feb  2 04:00:42 np0005604790 systemd: Queued start job for default target Initrd Default Target.
Feb  2 04:00:42 np0005604790 systemd: Started Dispatch Password Requests to Console Directory Watch.
Feb  2 04:00:42 np0005604790 systemd: Reached target Local Encrypted Volumes.
Feb  2 04:00:42 np0005604790 systemd: Reached target Initrd /usr File System.
Feb  2 04:00:42 np0005604790 systemd: Reached target Local File Systems.
Feb  2 04:00:42 np0005604790 systemd: Reached target Path Units.
Feb  2 04:00:42 np0005604790 systemd: Reached target Slice Units.
Feb  2 04:00:42 np0005604790 systemd: Reached target Swaps.
Feb  2 04:00:42 np0005604790 systemd: Reached target Timer Units.
Feb  2 04:00:42 np0005604790 systemd: Listening on D-Bus System Message Bus Socket.
Feb  2 04:00:42 np0005604790 systemd: Listening on Journal Socket (/dev/log).
Feb  2 04:00:42 np0005604790 systemd: Listening on Journal Socket.
Feb  2 04:00:42 np0005604790 systemd: Listening on udev Control Socket.
Feb  2 04:00:42 np0005604790 systemd: Listening on udev Kernel Socket.
Feb  2 04:00:42 np0005604790 systemd: Reached target Socket Units.
Feb  2 04:00:42 np0005604790 systemd: Starting Create List of Static Device Nodes...
Feb  2 04:00:42 np0005604790 systemd: Starting Journal Service...
Feb  2 04:00:42 np0005604790 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Feb  2 04:00:42 np0005604790 systemd: Starting Apply Kernel Variables...
Feb  2 04:00:42 np0005604790 systemd: Starting Create System Users...
Feb  2 04:00:42 np0005604790 systemd: Starting Setup Virtual Console...
Feb  2 04:00:42 np0005604790 systemd: Finished Create List of Static Device Nodes.
Feb  2 04:00:42 np0005604790 systemd: Finished Apply Kernel Variables.
Feb  2 04:00:42 np0005604790 systemd-journald[303]: Journal started
Feb  2 04:00:42 np0005604790 systemd-journald[303]: Runtime Journal (/run/log/journal/ef282098beab4a99a7134af58aea9f62) is 8.0M, max 153.6M, 145.6M free.
Feb  2 04:00:42 np0005604790 systemd-sysusers[307]: Creating group 'users' with GID 100.
Feb  2 04:00:42 np0005604790 systemd-sysusers[307]: Creating group 'dbus' with GID 81.
Feb  2 04:00:42 np0005604790 systemd: Started Journal Service.
Feb  2 04:00:42 np0005604790 systemd-sysusers[307]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Feb  2 04:00:42 np0005604790 systemd[1]: Finished Create System Users.
Feb  2 04:00:42 np0005604790 systemd[1]: Starting Create Static Device Nodes in /dev...
Feb  2 04:00:42 np0005604790 systemd[1]: Starting Create Volatile Files and Directories...
Feb  2 04:00:42 np0005604790 systemd[1]: Finished Create Static Device Nodes in /dev.
Feb  2 04:00:42 np0005604790 systemd[1]: Finished Create Volatile Files and Directories.
Feb  2 04:00:42 np0005604790 systemd[1]: Finished Setup Virtual Console.
Feb  2 04:00:42 np0005604790 systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Feb  2 04:00:42 np0005604790 systemd[1]: Starting dracut cmdline hook...
Feb  2 04:00:42 np0005604790 dracut-cmdline[322]: dracut-9 dracut-057-102.git20250818.el9
Feb  2 04:00:42 np0005604790 dracut-cmdline[322]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64 root=UUID=822f14ea-6e7e-41df-b0d8-fbe282d9ded8 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Feb  2 04:00:42 np0005604790 systemd[1]: Finished dracut cmdline hook.
Feb  2 04:00:42 np0005604790 systemd[1]: Starting dracut pre-udev hook...
Feb  2 04:00:42 np0005604790 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb  2 04:00:42 np0005604790 kernel: device-mapper: uevent: version 1.0.3
Feb  2 04:00:42 np0005604790 kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Feb  2 04:00:42 np0005604790 kernel: RPC: Registered named UNIX socket transport module.
Feb  2 04:00:42 np0005604790 kernel: RPC: Registered udp transport module.
Feb  2 04:00:42 np0005604790 kernel: RPC: Registered tcp transport module.
Feb  2 04:00:42 np0005604790 kernel: RPC: Registered tcp-with-tls transport module.
Feb  2 04:00:42 np0005604790 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Feb  2 04:00:42 np0005604790 rpc.statd[439]: Version 2.5.4 starting
Feb  2 04:00:42 np0005604790 rpc.statd[439]: Initializing NSM state
Feb  2 04:00:42 np0005604790 rpc.idmapd[444]: Setting log level to 0
Feb  2 04:00:42 np0005604790 systemd[1]: Finished dracut pre-udev hook.
Feb  2 04:00:42 np0005604790 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Feb  2 04:00:42 np0005604790 systemd-udevd[457]: Using default interface naming scheme 'rhel-9.0'.
Feb  2 04:00:42 np0005604790 systemd[1]: Started Rule-based Manager for Device Events and Files.
Feb  2 04:00:42 np0005604790 systemd[1]: Starting dracut pre-trigger hook...
Feb  2 04:00:42 np0005604790 systemd[1]: Finished dracut pre-trigger hook.
Feb  2 04:00:42 np0005604790 systemd[1]: Starting Coldplug All udev Devices...
Feb  2 04:00:42 np0005604790 systemd[1]: Created slice Slice /system/modprobe.
Feb  2 04:00:42 np0005604790 systemd[1]: Starting Load Kernel Module configfs...
Feb  2 04:00:42 np0005604790 systemd[1]: Finished Coldplug All udev Devices.
Feb  2 04:00:42 np0005604790 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb  2 04:00:42 np0005604790 systemd[1]: Finished Load Kernel Module configfs.
Feb  2 04:00:42 np0005604790 systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Feb  2 04:00:42 np0005604790 systemd[1]: Reached target Network.
Feb  2 04:00:42 np0005604790 systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Feb  2 04:00:42 np0005604790 systemd[1]: Starting dracut initqueue hook...
Feb  2 04:00:42 np0005604790 kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Feb  2 04:00:42 np0005604790 kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Feb  2 04:00:42 np0005604790 kernel: vda: vda1
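
Both capacity figures on the virtio line are the same number in different units: 167772160 sectors of 512 bytes is 85899345920 bytes, i.e. ~85.9 GB decimal and exactly 80 GiB binary:

    sectors, block = 167_772_160, 512
    nbytes = sectors * block
    print(nbytes / 1e9, "GB;", nbytes / (1 << 30), "GiB")  # 85.89934592 GB; 80.0 GiB
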
Feb  2 04:00:42 np0005604790 kernel: scsi host0: ata_piix
Feb  2 04:00:42 np0005604790 kernel: scsi host1: ata_piix
Feb  2 04:00:42 np0005604790 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Feb  2 04:00:42 np0005604790 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Feb  2 04:00:42 np0005604790 systemd[1]: Found device /dev/disk/by-uuid/822f14ea-6e7e-41df-b0d8-fbe282d9ded8.
Feb  2 04:00:42 np0005604790 systemd[1]: Reached target Initrd Root Device.
Feb  2 04:00:42 np0005604790 kernel: ata1: found unknown device (class 0)
Feb  2 04:00:42 np0005604790 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Feb  2 04:00:42 np0005604790 kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Feb  2 04:00:43 np0005604790 systemd-udevd[493]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 04:00:43 np0005604790 kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Feb  2 04:00:43 np0005604790 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Feb  2 04:00:43 np0005604790 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Feb  2 04:00:43 np0005604790 systemd[1]: Mounting Kernel Configuration File System...
Feb  2 04:00:43 np0005604790 systemd[1]: Mounted Kernel Configuration File System.
Feb  2 04:00:43 np0005604790 systemd[1]: Reached target System Initialization.
Feb  2 04:00:43 np0005604790 systemd[1]: Reached target Basic System.
Feb  2 04:00:43 np0005604790 systemd[1]: Finished dracut initqueue hook.
Feb  2 04:00:43 np0005604790 systemd[1]: Reached target Preparation for Remote File Systems.
Feb  2 04:00:43 np0005604790 systemd[1]: Reached target Remote Encrypted Volumes.
Feb  2 04:00:43 np0005604790 systemd[1]: Reached target Remote File Systems.
Feb  2 04:00:43 np0005604790 systemd[1]: Starting dracut pre-mount hook...
Feb  2 04:00:43 np0005604790 systemd[1]: Finished dracut pre-mount hook.
Feb  2 04:00:43 np0005604790 systemd[1]: Starting File System Check on /dev/disk/by-uuid/822f14ea-6e7e-41df-b0d8-fbe282d9ded8...
Feb  2 04:00:43 np0005604790 systemd-fsck[553]: /usr/sbin/fsck.xfs: XFS file system.
Feb  2 04:00:43 np0005604790 systemd[1]: Finished File System Check on /dev/disk/by-uuid/822f14ea-6e7e-41df-b0d8-fbe282d9ded8.
Feb  2 04:00:43 np0005604790 systemd[1]: Mounting /sysroot...
Feb  2 04:00:43 np0005604790 kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Feb  2 04:00:43 np0005604790 kernel: XFS (vda1): Mounting V5 Filesystem 822f14ea-6e7e-41df-b0d8-fbe282d9ded8
Feb  2 04:00:43 np0005604790 kernel: XFS (vda1): Ending clean mount
Feb  2 04:00:43 np0005604790 systemd[1]: Mounted /sysroot.
Feb  2 04:00:43 np0005604790 systemd[1]: Reached target Initrd Root File System.
Feb  2 04:00:43 np0005604790 systemd[1]: Starting Mountpoints Configured in the Real Root...
Feb  2 04:00:43 np0005604790 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb  2 04:00:43 np0005604790 systemd[1]: Finished Mountpoints Configured in the Real Root.
Feb  2 04:00:43 np0005604790 systemd[1]: Reached target Initrd File Systems.
Feb  2 04:00:43 np0005604790 systemd[1]: Reached target Initrd Default Target.
Feb  2 04:00:43 np0005604790 systemd[1]: Starting dracut mount hook...
Feb  2 04:00:43 np0005604790 systemd[1]: Finished dracut mount hook.
Feb  2 04:00:43 np0005604790 systemd[1]: Starting dracut pre-pivot and cleanup hook...
Feb  2 04:00:43 np0005604790 rpc.idmapd[444]: exiting on signal 15
Feb  2 04:00:43 np0005604790 systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Feb  2 04:00:44 np0005604790 systemd[1]: Finished dracut pre-pivot and cleanup hook.
Feb  2 04:00:44 np0005604790 systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Feb  2 04:00:44 np0005604790 systemd[1]: Stopped target Network.
Feb  2 04:00:44 np0005604790 systemd[1]: Stopped target Remote Encrypted Volumes.
Feb  2 04:00:44 np0005604790 systemd[1]: Stopped target Timer Units.
Feb  2 04:00:44 np0005604790 systemd[1]: dbus.socket: Deactivated successfully.
Feb  2 04:00:44 np0005604790 systemd[1]: Closed D-Bus System Message Bus Socket.
Feb  2 04:00:44 np0005604790 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb  2 04:00:44 np0005604790 systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Feb  2 04:00:44 np0005604790 systemd[1]: Stopped target Initrd Default Target.
Feb  2 04:00:44 np0005604790 systemd[1]: Stopped target Basic System.
Feb  2 04:00:44 np0005604790 systemd[1]: Stopped target Initrd Root Device.
Feb  2 04:00:44 np0005604790 systemd[1]: Stopped target Initrd /usr File System.
Feb  2 04:00:44 np0005604790 systemd[1]: Stopped target Path Units.
Feb  2 04:00:44 np0005604790 systemd[1]: Stopped target Remote File Systems.
Feb  2 04:00:44 np0005604790 systemd[1]: Stopped target Preparation for Remote File Systems.
Feb  2 04:00:44 np0005604790 systemd[1]: Stopped target Slice Units.
Feb  2 04:00:44 np0005604790 systemd[1]: Stopped target Socket Units.
Feb  2 04:00:44 np0005604790 systemd[1]: Stopped target System Initialization.
Feb  2 04:00:44 np0005604790 systemd[1]: Stopped target Local File Systems.
Feb  2 04:00:44 np0005604790 systemd[1]: Stopped target Swaps.
Feb  2 04:00:44 np0005604790 systemd[1]: dracut-mount.service: Deactivated successfully.
Feb  2 04:00:44 np0005604790 systemd[1]: Stopped dracut mount hook.
Feb  2 04:00:44 np0005604790 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb  2 04:00:44 np0005604790 systemd[1]: Stopped dracut pre-mount hook.
Feb  2 04:00:44 np0005604790 systemd[1]: Stopped target Local Encrypted Volumes.
Feb  2 04:00:44 np0005604790 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb  2 04:00:44 np0005604790 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Feb  2 04:00:44 np0005604790 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb  2 04:00:44 np0005604790 systemd[1]: Stopped dracut initqueue hook.
Feb  2 04:00:44 np0005604790 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb  2 04:00:44 np0005604790 systemd[1]: Stopped Apply Kernel Variables.
Feb  2 04:00:44 np0005604790 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb  2 04:00:44 np0005604790 systemd[1]: Stopped Create Volatile Files and Directories.
Feb  2 04:00:44 np0005604790 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb  2 04:00:44 np0005604790 systemd[1]: Stopped Coldplug All udev Devices.
Feb  2 04:00:44 np0005604790 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb  2 04:00:44 np0005604790 systemd[1]: Stopped dracut pre-trigger hook.
Feb  2 04:00:44 np0005604790 systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Feb  2 04:00:44 np0005604790 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb  2 04:00:44 np0005604790 systemd[1]: Stopped Setup Virtual Console.
Feb  2 04:00:44 np0005604790 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Feb  2 04:00:44 np0005604790 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb  2 04:00:44 np0005604790 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb  2 04:00:44 np0005604790 systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Feb  2 04:00:44 np0005604790 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb  2 04:00:44 np0005604790 systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Feb  2 04:00:44 np0005604790 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb  2 04:00:44 np0005604790 systemd[1]: Closed udev Control Socket.
Feb  2 04:00:44 np0005604790 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb  2 04:00:44 np0005604790 systemd[1]: Closed udev Kernel Socket.
Feb  2 04:00:44 np0005604790 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb  2 04:00:44 np0005604790 systemd[1]: Stopped dracut pre-udev hook.
Feb  2 04:00:44 np0005604790 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb  2 04:00:44 np0005604790 systemd[1]: Stopped dracut cmdline hook.
Feb  2 04:00:44 np0005604790 systemd[1]: Starting Cleanup udev Database...
Feb  2 04:00:44 np0005604790 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb  2 04:00:44 np0005604790 systemd[1]: Stopped Create Static Device Nodes in /dev.
Feb  2 04:00:44 np0005604790 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb  2 04:00:44 np0005604790 systemd[1]: Stopped Create List of Static Device Nodes.
Feb  2 04:00:44 np0005604790 systemd[1]: systemd-sysusers.service: Deactivated successfully.
Feb  2 04:00:44 np0005604790 systemd[1]: Stopped Create System Users.
Feb  2 04:00:44 np0005604790 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Feb  2 04:00:44 np0005604790 systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Feb  2 04:00:44 np0005604790 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb  2 04:00:44 np0005604790 systemd[1]: Finished Cleanup udev Database.
Feb  2 04:00:44 np0005604790 systemd[1]: Reached target Switch Root.
Feb  2 04:00:44 np0005604790 systemd[1]: Starting Switch Root...
Feb  2 04:00:44 np0005604790 systemd[1]: Switching root.
Feb  2 04:00:44 np0005604790 systemd-journald[303]: Journal stopped
Feb  2 04:00:45 np0005604790 systemd-journald: Received SIGTERM from PID 1 (systemd).
Feb  2 04:00:45 np0005604790 kernel: audit: type=1404 audit(1770022844.370:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Feb  2 04:00:45 np0005604790 kernel: SELinux:  policy capability network_peer_controls=1
Feb  2 04:00:45 np0005604790 kernel: SELinux:  policy capability open_perms=1
Feb  2 04:00:45 np0005604790 kernel: SELinux:  policy capability extended_socket_class=1
Feb  2 04:00:45 np0005604790 kernel: SELinux:  policy capability always_check_network=0
Feb  2 04:00:45 np0005604790 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb  2 04:00:45 np0005604790 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb  2 04:00:45 np0005604790 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb  2 04:00:45 np0005604790 kernel: audit: type=1403 audit(1770022844.494:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb  2 04:00:45 np0005604790 systemd: Successfully loaded SELinux policy in 127.414ms.
Feb  2 04:00:45 np0005604790 systemd: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 31.338ms.
Feb  2 04:00:45 np0005604790 systemd: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb  2 04:00:45 np0005604790 systemd: Detected virtualization kvm.
Feb  2 04:00:45 np0005604790 systemd: Detected architecture x86-64.
Feb  2 04:00:45 np0005604790 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 04:00:45 np0005604790 systemd: initrd-switch-root.service: Deactivated successfully.
Feb  2 04:00:45 np0005604790 systemd: Stopped Switch Root.
Feb  2 04:00:45 np0005604790 systemd: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb  2 04:00:45 np0005604790 systemd: Created slice Slice /system/getty.
Feb  2 04:00:45 np0005604790 systemd: Created slice Slice /system/serial-getty.
Feb  2 04:00:45 np0005604790 systemd: Created slice Slice /system/sshd-keygen.
Feb  2 04:00:45 np0005604790 systemd: Created slice User and Session Slice.
Feb  2 04:00:45 np0005604790 systemd: Started Dispatch Password Requests to Console Directory Watch.
Feb  2 04:00:45 np0005604790 systemd: Started Forward Password Requests to Wall Directory Watch.
Feb  2 04:00:45 np0005604790 systemd: Set up automount Arbitrary Executable File Formats File System Automount Point.
Feb  2 04:00:45 np0005604790 systemd: Reached target Local Encrypted Volumes.
Feb  2 04:00:45 np0005604790 systemd: Stopped target Switch Root.
Feb  2 04:00:45 np0005604790 systemd: Stopped target Initrd File Systems.
Feb  2 04:00:45 np0005604790 systemd: Stopped target Initrd Root File System.
Feb  2 04:00:45 np0005604790 systemd: Reached target Local Integrity Protected Volumes.
Feb  2 04:00:45 np0005604790 systemd: Reached target Path Units.
Feb  2 04:00:45 np0005604790 systemd: Reached target rpc_pipefs.target.
Feb  2 04:00:45 np0005604790 systemd: Reached target Slice Units.
Feb  2 04:00:45 np0005604790 systemd: Reached target Swaps.
Feb  2 04:00:45 np0005604790 systemd: Reached target Local Verity Protected Volumes.
Feb  2 04:00:45 np0005604790 systemd: Listening on RPCbind Server Activation Socket.
Feb  2 04:00:45 np0005604790 systemd: Reached target RPC Port Mapper.
Feb  2 04:00:45 np0005604790 systemd: Listening on Process Core Dump Socket.
Feb  2 04:00:45 np0005604790 systemd: Listening on initctl Compatibility Named Pipe.
Feb  2 04:00:45 np0005604790 systemd: Listening on udev Control Socket.
Feb  2 04:00:45 np0005604790 systemd: Listening on udev Kernel Socket.
Feb  2 04:00:45 np0005604790 systemd: Mounting Huge Pages File System...
Feb  2 04:00:45 np0005604790 systemd: Mounting POSIX Message Queue File System...
Feb  2 04:00:45 np0005604790 systemd: Mounting Kernel Debug File System...
Feb  2 04:00:45 np0005604790 systemd: Mounting Kernel Trace File System...
Feb  2 04:00:45 np0005604790 systemd: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Feb  2 04:00:45 np0005604790 systemd: Starting Create List of Static Device Nodes...
Feb  2 04:00:45 np0005604790 systemd: Starting Load Kernel Module configfs...
Feb  2 04:00:45 np0005604790 systemd: Starting Load Kernel Module drm...
Feb  2 04:00:45 np0005604790 systemd: Starting Load Kernel Module efi_pstore...
Feb  2 04:00:45 np0005604790 systemd: Starting Load Kernel Module fuse...
Feb  2 04:00:45 np0005604790 systemd: Starting Read and set NIS domainname from /etc/sysconfig/network...
Feb  2 04:00:45 np0005604790 systemd: systemd-fsck-root.service: Deactivated successfully.
Feb  2 04:00:45 np0005604790 systemd: Stopped File System Check on Root Device.
Feb  2 04:00:45 np0005604790 systemd: Stopped Journal Service.
Feb  2 04:00:45 np0005604790 systemd: Starting Journal Service...
Feb  2 04:00:45 np0005604790 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Feb  2 04:00:45 np0005604790 systemd: Starting Generate network units from Kernel command line...
Feb  2 04:00:45 np0005604790 systemd: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb  2 04:00:45 np0005604790 systemd: Starting Remount Root and Kernel File Systems...
Feb  2 04:00:45 np0005604790 systemd: Repartition Root Disk was skipped because no trigger condition checks were met.
Feb  2 04:00:45 np0005604790 systemd: Starting Apply Kernel Variables...
Feb  2 04:00:45 np0005604790 systemd: Starting Coldplug All udev Devices...
Feb  2 04:00:45 np0005604790 kernel: fuse: init (API version 7.37)
Feb  2 04:00:45 np0005604790 kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Feb  2 04:00:45 np0005604790 systemd: Mounted Huge Pages File System.
Feb  2 04:00:45 np0005604790 systemd: Mounted POSIX Message Queue File System.
Feb  2 04:00:45 np0005604790 systemd: Mounted Kernel Debug File System.
Feb  2 04:00:45 np0005604790 systemd: Mounted Kernel Trace File System.
Feb  2 04:00:45 np0005604790 systemd: Finished Create List of Static Device Nodes.
Feb  2 04:00:45 np0005604790 systemd-journald[677]: Journal started
Feb  2 04:00:45 np0005604790 systemd-journald[677]: Runtime Journal (/run/log/journal/bf0bc0bb03de29b24cba1cc9599cf5d0) is 8.0M, max 153.6M, 145.6M free.
Feb  2 04:00:45 np0005604790 systemd[1]: Queued start job for default target Multi-User System.
Feb  2 04:00:45 np0005604790 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb  2 04:00:45 np0005604790 systemd: Started Journal Service.
Feb  2 04:00:45 np0005604790 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb  2 04:00:45 np0005604790 systemd[1]: Finished Load Kernel Module configfs.
Feb  2 04:00:45 np0005604790 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb  2 04:00:45 np0005604790 systemd[1]: Finished Load Kernel Module efi_pstore.
Feb  2 04:00:45 np0005604790 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb  2 04:00:45 np0005604790 systemd[1]: Finished Load Kernel Module fuse.
Feb  2 04:00:45 np0005604790 systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Feb  2 04:00:45 np0005604790 kernel: ACPI: bus type drm_connector registered
Feb  2 04:00:45 np0005604790 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb  2 04:00:45 np0005604790 systemd[1]: Finished Load Kernel Module drm.
Feb  2 04:00:45 np0005604790 systemd[1]: Finished Generate network units from Kernel command line.
Feb  2 04:00:45 np0005604790 systemd[1]: Finished Remount Root and Kernel File Systems.
Feb  2 04:00:45 np0005604790 systemd[1]: Mounting FUSE Control File System...
Feb  2 04:00:45 np0005604790 systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Feb  2 04:00:45 np0005604790 systemd[1]: Starting Rebuild Hardware Database...
Feb  2 04:00:45 np0005604790 systemd[1]: Starting Flush Journal to Persistent Storage...
Feb  2 04:00:45 np0005604790 systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb  2 04:00:45 np0005604790 systemd[1]: Starting Load/Save OS Random Seed...
Feb  2 04:00:45 np0005604790 systemd[1]: Starting Create System Users...
Feb  2 04:00:45 np0005604790 systemd[1]: Finished Apply Kernel Variables.
Feb  2 04:00:45 np0005604790 systemd[1]: Mounted FUSE Control File System.
Feb  2 04:00:45 np0005604790 systemd-journald[677]: Runtime Journal (/run/log/journal/bf0bc0bb03de29b24cba1cc9599cf5d0) is 8.0M, max 153.6M, 145.6M free.
Feb  2 04:00:45 np0005604790 systemd-journald[677]: Received client request to flush runtime journal.
Feb  2 04:00:45 np0005604790 systemd[1]: Finished Flush Journal to Persistent Storage.
Feb  2 04:00:45 np0005604790 systemd[1]: Finished Load/Save OS Random Seed.
Feb  2 04:00:45 np0005604790 systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Feb  2 04:00:45 np0005604790 systemd[1]: Finished Create System Users.
Feb  2 04:00:45 np0005604790 systemd[1]: Starting Create Static Device Nodes in /dev...
Feb  2 04:00:45 np0005604790 systemd[1]: Finished Coldplug All udev Devices.
Feb  2 04:00:45 np0005604790 systemd[1]: Finished Create Static Device Nodes in /dev.
Feb  2 04:00:45 np0005604790 systemd[1]: Reached target Preparation for Local File Systems.
Feb  2 04:00:45 np0005604790 systemd[1]: Reached target Local File Systems.
Feb  2 04:00:45 np0005604790 systemd[1]: Starting Rebuild Dynamic Linker Cache...
Feb  2 04:00:45 np0005604790 systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Feb  2 04:00:45 np0005604790 systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb  2 04:00:45 np0005604790 systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Feb  2 04:00:45 np0005604790 systemd[1]: Starting Automatic Boot Loader Update...
Feb  2 04:00:45 np0005604790 systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Feb  2 04:00:45 np0005604790 systemd[1]: Starting Create Volatile Files and Directories...
Feb  2 04:00:45 np0005604790 bootctl[695]: Couldn't find EFI system partition, skipping.
Feb  2 04:00:45 np0005604790 systemd[1]: Finished Automatic Boot Loader Update.
Feb  2 04:00:45 np0005604790 systemd[1]: Finished Create Volatile Files and Directories.
Feb  2 04:00:45 np0005604790 systemd[1]: Starting Security Auditing Service...
Feb  2 04:00:45 np0005604790 systemd[1]: Starting RPC Bind...
Feb  2 04:00:45 np0005604790 systemd[1]: Starting Rebuild Journal Catalog...
Feb  2 04:00:45 np0005604790 auditd[701]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Feb  2 04:00:45 np0005604790 auditd[701]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Feb  2 04:00:45 np0005604790 systemd[1]: Finished Rebuild Journal Catalog.
Feb  2 04:00:45 np0005604790 systemd[1]: Started RPC Bind.
Feb  2 04:00:45 np0005604790 augenrules[706]: /sbin/augenrules: No change
Feb  2 04:00:45 np0005604790 augenrules[721]: No rules
Feb  2 04:00:45 np0005604790 augenrules[721]: enabled 1
Feb  2 04:00:45 np0005604790 augenrules[721]: failure 1
Feb  2 04:00:45 np0005604790 augenrules[721]: pid 701
Feb  2 04:00:45 np0005604790 augenrules[721]: rate_limit 0
Feb  2 04:00:45 np0005604790 augenrules[721]: backlog_limit 8192
Feb  2 04:00:45 np0005604790 augenrules[721]: lost 0
Feb  2 04:00:45 np0005604790 augenrules[721]: backlog 3
Feb  2 04:00:45 np0005604790 augenrules[721]: backlog_wait_time 60000
Feb  2 04:00:45 np0005604790 augenrules[721]: backlog_wait_time_actual 0
Feb  2 04:00:45 np0005604790 augenrules[721]: enabled 1
Feb  2 04:00:45 np0005604790 augenrules[721]: failure 1
Feb  2 04:00:45 np0005604790 augenrules[721]: pid 701
Feb  2 04:00:45 np0005604790 augenrules[721]: rate_limit 0
Feb  2 04:00:45 np0005604790 augenrules[721]: backlog_limit 8192
Feb  2 04:00:45 np0005604790 augenrules[721]: lost 0
Feb  2 04:00:45 np0005604790 augenrules[721]: backlog 4
Feb  2 04:00:45 np0005604790 augenrules[721]: backlog_wait_time 60000
Feb  2 04:00:45 np0005604790 augenrules[721]: backlog_wait_time_actual 0
Feb  2 04:00:45 np0005604790 augenrules[721]: enabled 1
Feb  2 04:00:45 np0005604790 augenrules[721]: failure 1
Feb  2 04:00:45 np0005604790 augenrules[721]: pid 701
Feb  2 04:00:45 np0005604790 augenrules[721]: rate_limit 0
Feb  2 04:00:45 np0005604790 augenrules[721]: backlog_limit 8192
Feb  2 04:00:45 np0005604790 augenrules[721]: lost 0
Feb  2 04:00:45 np0005604790 augenrules[721]: backlog 3
Feb  2 04:00:45 np0005604790 augenrules[721]: backlog_wait_time 60000
Feb  2 04:00:45 np0005604790 augenrules[721]: backlog_wait_time_actual 0
Feb  2 04:00:45 np0005604790 systemd[1]: Started Security Auditing Service.
Feb  2 04:00:45 np0005604790 systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Feb  2 04:00:45 np0005604790 systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Feb  2 04:00:45 np0005604790 systemd[1]: Finished Rebuild Hardware Database.
Feb  2 04:00:45 np0005604790 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Feb  2 04:00:45 np0005604790 systemd-udevd[729]: Using default interface naming scheme 'rhel-9.0'.
Feb  2 04:00:45 np0005604790 systemd[1]: Started Rule-based Manager for Device Events and Files.
Feb  2 04:00:45 np0005604790 systemd[1]: Starting Load Kernel Module configfs...
Feb  2 04:00:45 np0005604790 systemd[1]: Finished Rebuild Dynamic Linker Cache.
Feb  2 04:00:45 np0005604790 systemd[1]: Starting Update is Completed...
Feb  2 04:00:45 np0005604790 systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Feb  2 04:00:45 np0005604790 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb  2 04:00:45 np0005604790 systemd[1]: Finished Load Kernel Module configfs.
Feb  2 04:00:45 np0005604790 systemd[1]: Finished Update is Completed.
Feb  2 04:00:45 np0005604790 systemd[1]: Reached target System Initialization.
Feb  2 04:00:45 np0005604790 systemd[1]: Started dnf makecache --timer.
Feb  2 04:00:45 np0005604790 systemd[1]: Started Daily rotation of log files.
Feb  2 04:00:45 np0005604790 systemd[1]: Started Daily Cleanup of Temporary Directories.
Feb  2 04:00:45 np0005604790 systemd[1]: Reached target Timer Units.
Feb  2 04:00:45 np0005604790 systemd[1]: Listening on D-Bus System Message Bus Socket.
Feb  2 04:00:45 np0005604790 systemd-udevd[736]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 04:00:45 np0005604790 systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Feb  2 04:00:45 np0005604790 systemd[1]: Reached target Socket Units.
Feb  2 04:00:45 np0005604790 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Feb  2 04:00:45 np0005604790 systemd[1]: Starting D-Bus System Message Bus...
Feb  2 04:00:45 np0005604790 systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb  2 04:00:45 np0005604790 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Feb  2 04:00:45 np0005604790 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Feb  2 04:00:45 np0005604790 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Feb  2 04:00:45 np0005604790 systemd[1]: Started D-Bus System Message Bus.
Feb  2 04:00:45 np0005604790 systemd[1]: Reached target Basic System.
Feb  2 04:00:45 np0005604790 dbus-broker-lau[772]: Ready
Feb  2 04:00:45 np0005604790 systemd[1]: Starting NTP client/server...
Feb  2 04:00:45 np0005604790 systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Feb  2 04:00:45 np0005604790 systemd[1]: Starting Restore /run/initramfs on shutdown...
Feb  2 04:00:45 np0005604790 systemd[1]: Starting IPv4 firewall with iptables...
Feb  2 04:00:45 np0005604790 systemd[1]: Started irqbalance daemon.
Feb  2 04:00:45 np0005604790 systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Feb  2 04:00:45 np0005604790 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb  2 04:00:45 np0005604790 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb  2 04:00:45 np0005604790 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb  2 04:00:45 np0005604790 systemd[1]: Reached target sshd-keygen.target.
Feb  2 04:00:45 np0005604790 systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Feb  2 04:00:45 np0005604790 systemd[1]: Reached target User and Group Name Lookups.
Feb  2 04:00:45 np0005604790 systemd[1]: Starting User Login Management...
Feb  2 04:00:45 np0005604790 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Feb  2 04:00:45 np0005604790 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Feb  2 04:00:46 np0005604790 systemd[1]: Finished Restore /run/initramfs on shutdown.
Feb  2 04:00:46 np0005604790 kernel: Console: switching to colour dummy device 80x25
Feb  2 04:00:46 np0005604790 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Feb  2 04:00:46 np0005604790 kernel: [drm] features: -context_init
Feb  2 04:00:46 np0005604790 kernel: [drm] number of scanouts: 1
Feb  2 04:00:46 np0005604790 kernel: [drm] number of cap sets: 0
Feb  2 04:00:46 np0005604790 chronyd[805]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Feb  2 04:00:46 np0005604790 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Feb  2 04:00:46 np0005604790 chronyd[805]: Loaded 0 symmetric keys
Feb  2 04:00:46 np0005604790 chronyd[805]: Using right/UTC timezone to obtain leap second data
Feb  2 04:00:46 np0005604790 chronyd[805]: Loaded seccomp filter (level 2)
Feb  2 04:00:46 np0005604790 systemd[1]: Started NTP client/server.
Feb  2 04:00:46 np0005604790 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Feb  2 04:00:46 np0005604790 kernel: Console: switching to colour frame buffer device 128x48
Feb  2 04:00:46 np0005604790 systemd-logind[793]: New seat seat0.
Feb  2 04:00:46 np0005604790 systemd-logind[793]: Watching system buttons on /dev/input/event0 (Power Button)
Feb  2 04:00:46 np0005604790 systemd-logind[793]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Feb  2 04:00:46 np0005604790 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Feb  2 04:00:46 np0005604790 systemd[1]: Started User Login Management.
Feb  2 04:00:46 np0005604790 kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Feb  2 04:00:46 np0005604790 kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Feb  2 04:00:46 np0005604790 kernel: kvm_amd: TSC scaling supported
Feb  2 04:00:46 np0005604790 kernel: kvm_amd: Nested Virtualization enabled
Feb  2 04:00:46 np0005604790 kernel: kvm_amd: Nested Paging enabled
Feb  2 04:00:46 np0005604790 kernel: kvm_amd: LBR virtualization supported
Feb  2 04:00:46 np0005604790 iptables.init[787]: iptables: Applying firewall rules: [  OK  ]
Feb  2 04:00:46 np0005604790 systemd[1]: Finished IPv4 firewall with iptables.
Feb  2 04:00:46 np0005604790 cloud-init[839]: Cloud-init v. 24.4-8.el9 running 'init-local' at Mon, 02 Feb 2026 09:00:46 +0000. Up 6.02 seconds.
Feb  2 04:00:46 np0005604790 systemd[1]: run-cloud\x2dinit-tmp-tmpnpoou621.mount: Deactivated successfully.
Feb  2 04:00:46 np0005604790 systemd[1]: Starting Hostname Service...
Feb  2 04:00:47 np0005604790 systemd[1]: Started Hostname Service.
Feb  2 04:00:47 np0005604790 systemd-hostnamed[853]: Hostname set to <np0005604790.novalocal> (static)
Feb  2 04:00:47 np0005604790 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Feb  2 04:00:47 np0005604790 systemd[1]: Reached target Preparation for Network.
Feb  2 04:00:47 np0005604790 systemd[1]: Starting Network Manager...
Feb  2 04:00:47 np0005604790 NetworkManager[857]: <info>  [1770022847.3160] NetworkManager (version 1.54.3-2.el9) is starting... (boot:1dacd1c7-39aa-475f-946e-ec901b3ee402)
Feb  2 04:00:47 np0005604790 NetworkManager[857]: <info>  [1770022847.3164] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Feb  2 04:00:47 np0005604790 NetworkManager[857]: <info>  [1770022847.3440] manager[0x5569f0e0b000]: monitoring kernel firmware directory '/lib/firmware'.
Feb  2 04:00:47 np0005604790 NetworkManager[857]: <info>  [1770022847.3520] hostname: hostname: using hostnamed
Feb  2 04:00:47 np0005604790 NetworkManager[857]: <info>  [1770022847.3522] hostname: static hostname changed from (none) to "np0005604790.novalocal"
Feb  2 04:00:47 np0005604790 NetworkManager[857]: <info>  [1770022847.3529] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Feb  2 04:00:47 np0005604790 NetworkManager[857]: <info>  [1770022847.3636] manager[0x5569f0e0b000]: rfkill: Wi-Fi hardware radio set enabled
Feb  2 04:00:47 np0005604790 NetworkManager[857]: <info>  [1770022847.3636] manager[0x5569f0e0b000]: rfkill: WWAN hardware radio set enabled
Feb  2 04:00:47 np0005604790 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Feb  2 04:00:47 np0005604790 NetworkManager[857]: <info>  [1770022847.3834] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Feb  2 04:00:47 np0005604790 NetworkManager[857]: <info>  [1770022847.3836] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Feb  2 04:00:47 np0005604790 NetworkManager[857]: <info>  [1770022847.3837] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Feb  2 04:00:47 np0005604790 NetworkManager[857]: <info>  [1770022847.3837] manager: Networking is enabled by state file
Feb  2 04:00:47 np0005604790 NetworkManager[857]: <info>  [1770022847.3840] settings: Loaded settings plugin: keyfile (internal)
Feb  2 04:00:47 np0005604790 NetworkManager[857]: <info>  [1770022847.3936] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Feb  2 04:00:47 np0005604790 NetworkManager[857]: <info>  [1770022847.4010] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Feb  2 04:00:47 np0005604790 systemd[1]: Starting Network Manager Script Dispatcher Service...
Feb  2 04:00:47 np0005604790 NetworkManager[857]: <info>  [1770022847.4029] dhcp: init: Using DHCP client 'internal'
Feb  2 04:00:47 np0005604790 NetworkManager[857]: <info>  [1770022847.4035] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Feb  2 04:00:47 np0005604790 NetworkManager[857]: <info>  [1770022847.4047] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb  2 04:00:47 np0005604790 NetworkManager[857]: <info>  [1770022847.4076] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Feb  2 04:00:47 np0005604790 NetworkManager[857]: <info>  [1770022847.4091] device (lo): Activation: starting connection 'lo' (3fd945dd-23ac-4177-bd37-9d87c9c02d55)
Feb  2 04:00:47 np0005604790 NetworkManager[857]: <info>  [1770022847.4099] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Feb  2 04:00:47 np0005604790 NetworkManager[857]: <info>  [1770022847.4102] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb  2 04:00:47 np0005604790 NetworkManager[857]: <info>  [1770022847.4125] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Feb  2 04:00:47 np0005604790 systemd[1]: Started Network Manager.
Feb  2 04:00:47 np0005604790 NetworkManager[857]: <info>  [1770022847.4135] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Feb  2 04:00:47 np0005604790 NetworkManager[857]: <info>  [1770022847.4138] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Feb  2 04:00:47 np0005604790 NetworkManager[857]: <info>  [1770022847.4141] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Feb  2 04:00:47 np0005604790 NetworkManager[857]: <info>  [1770022847.4143] device (eth0): carrier: link connected
Feb  2 04:00:47 np0005604790 systemd[1]: Reached target Network.
Feb  2 04:00:47 np0005604790 NetworkManager[857]: <info>  [1770022847.4156] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Feb  2 04:00:47 np0005604790 NetworkManager[857]: <info>  [1770022847.4163] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Feb  2 04:00:47 np0005604790 NetworkManager[857]: <info>  [1770022847.4169] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Feb  2 04:00:47 np0005604790 NetworkManager[857]: <info>  [1770022847.4172] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Feb  2 04:00:47 np0005604790 NetworkManager[857]: <info>  [1770022847.4173] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb  2 04:00:47 np0005604790 NetworkManager[857]: <info>  [1770022847.4176] manager: NetworkManager state is now CONNECTING
Feb  2 04:00:47 np0005604790 NetworkManager[857]: <info>  [1770022847.4178] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Feb  2 04:00:47 np0005604790 systemd[1]: Starting Network Manager Wait Online...
Feb  2 04:00:47 np0005604790 NetworkManager[857]: <info>  [1770022847.4183] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb  2 04:00:47 np0005604790 NetworkManager[857]: <info>  [1770022847.4186] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Feb  2 04:00:47 np0005604790 systemd[1]: Starting GSSAPI Proxy Daemon...
Feb  2 04:00:47 np0005604790 systemd[1]: Started Network Manager Script Dispatcher Service.
Feb  2 04:00:47 np0005604790 NetworkManager[857]: <info>  [1770022847.4372] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Feb  2 04:00:47 np0005604790 NetworkManager[857]: <info>  [1770022847.4375] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Feb  2 04:00:47 np0005604790 NetworkManager[857]: <info>  [1770022847.4380] device (lo): Activation: successful, device activated.
Feb  2 04:00:47 np0005604790 systemd[1]: Started GSSAPI Proxy Daemon.
Feb  2 04:00:47 np0005604790 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Feb  2 04:00:47 np0005604790 systemd[1]: Reached target NFS client services.
Feb  2 04:00:47 np0005604790 systemd[1]: Reached target Preparation for Remote File Systems.
Feb  2 04:00:47 np0005604790 systemd[1]: Reached target Remote File Systems.
Feb  2 04:00:47 np0005604790 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb  2 04:00:48 np0005604790 NetworkManager[857]: <info>  [1770022848.6170] dhcp4 (eth0): state changed new lease, address=38.102.83.144
Feb  2 04:00:48 np0005604790 NetworkManager[857]: <info>  [1770022848.6188] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Feb  2 04:00:48 np0005604790 NetworkManager[857]: <info>  [1770022848.6223] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb  2 04:00:48 np0005604790 NetworkManager[857]: <info>  [1770022848.6247] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb  2 04:00:48 np0005604790 NetworkManager[857]: <info>  [1770022848.6251] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb  2 04:00:48 np0005604790 NetworkManager[857]: <info>  [1770022848.6259] manager: NetworkManager state is now CONNECTED_SITE
Feb  2 04:00:48 np0005604790 NetworkManager[857]: <info>  [1770022848.6266] device (eth0): Activation: successful, device activated.
Feb  2 04:00:48 np0005604790 NetworkManager[857]: <info>  [1770022848.6275] manager: NetworkManager state is now CONNECTED_GLOBAL
Feb  2 04:00:48 np0005604790 NetworkManager[857]: <info>  [1770022848.6280] manager: startup complete
Feb  2 04:00:48 np0005604790 systemd[1]: Finished Network Manager Wait Online.
Feb  2 04:00:48 np0005604790 systemd[1]: Starting Cloud-init: Network Stage...
Feb  2 04:00:48 np0005604790 cloud-init[921]: Cloud-init v. 24.4-8.el9 running 'init' at Mon, 02 Feb 2026 09:00:48 +0000. Up 8.27 seconds.
Feb  2 04:00:48 np0005604790 cloud-init[921]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Feb  2 04:00:48 np0005604790 cloud-init[921]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Feb  2 04:00:48 np0005604790 cloud-init[921]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Feb  2 04:00:48 np0005604790 cloud-init[921]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Feb  2 04:00:48 np0005604790 cloud-init[921]: ci-info: |  eth0  | True |        38.102.83.144         | 255.255.255.0 | global | fa:16:3e:25:cf:58 |
Feb  2 04:00:48 np0005604790 cloud-init[921]: ci-info: |  eth0  | True | fe80::f816:3eff:fe25:cf58/64 |       .       |  link  | fa:16:3e:25:cf:58 |
Feb  2 04:00:48 np0005604790 cloud-init[921]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Feb  2 04:00:48 np0005604790 cloud-init[921]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Feb  2 04:00:48 np0005604790 cloud-init[921]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Feb  2 04:00:48 np0005604790 cloud-init[921]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Feb  2 04:00:48 np0005604790 cloud-init[921]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Feb  2 04:00:48 np0005604790 cloud-init[921]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Feb  2 04:00:48 np0005604790 cloud-init[921]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Feb  2 04:00:48 np0005604790 cloud-init[921]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Feb  2 04:00:48 np0005604790 cloud-init[921]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Feb  2 04:00:48 np0005604790 cloud-init[921]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Feb  2 04:00:48 np0005604790 cloud-init[921]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Feb  2 04:00:48 np0005604790 cloud-init[921]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Feb  2 04:00:48 np0005604790 cloud-init[921]: ci-info: +-------+-------------+---------+-----------+-------+
Feb  2 04:00:48 np0005604790 cloud-init[921]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Feb  2 04:00:48 np0005604790 cloud-init[921]: ci-info: +-------+-------------+---------+-----------+-------+
Feb  2 04:00:48 np0005604790 cloud-init[921]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Feb  2 04:00:48 np0005604790 cloud-init[921]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Feb  2 04:00:49 np0005604790 cloud-init[921]: ci-info: +-------+-------------+---------+-----------+-------+
Feb  2 04:00:50 np0005604790 cloud-init[921]: Generating public/private rsa key pair.
Feb  2 04:00:50 np0005604790 cloud-init[921]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Feb  2 04:00:50 np0005604790 cloud-init[921]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Feb  2 04:00:50 np0005604790 cloud-init[921]: The key fingerprint is:
Feb  2 04:00:50 np0005604790 cloud-init[921]: SHA256:pe0wca6M1MCvTd5biJ5mLMvoQbSfvZRYmJgb3DwJ8XM root@np0005604790.novalocal
Feb  2 04:00:50 np0005604790 cloud-init[921]: The key's randomart image is:
Feb  2 04:00:50 np0005604790 cloud-init[921]: +---[RSA 3072]----+
Feb  2 04:00:50 np0005604790 cloud-init[921]: |    .            |
Feb  2 04:00:50 np0005604790 cloud-init[921]: |     +           |
Feb  2 04:00:50 np0005604790 cloud-init[921]: |    o = E o      |
Feb  2 04:00:50 np0005604790 cloud-init[921]: |   o B X B       |
Feb  2 04:00:50 np0005604790 cloud-init[921]: |    B O S o      |
Feb  2 04:00:50 np0005604790 cloud-init[921]: |   . = & O .     |
Feb  2 04:00:50 np0005604790 cloud-init[921]: |    o *.X + .    |
Feb  2 04:00:50 np0005604790 cloud-init[921]: |     +.o+o o     |
Feb  2 04:00:50 np0005604790 cloud-init[921]: |   .o o=+ .      |
Feb  2 04:00:50 np0005604790 cloud-init[921]: +----[SHA256]-----+
Feb  2 04:00:50 np0005604790 cloud-init[921]: Generating public/private ecdsa key pair.
Feb  2 04:00:50 np0005604790 cloud-init[921]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Feb  2 04:00:50 np0005604790 cloud-init[921]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Feb  2 04:00:50 np0005604790 cloud-init[921]: The key fingerprint is:
Feb  2 04:00:50 np0005604790 cloud-init[921]: SHA256:omSMrbvUX6DkbYiDUcfIzqg62PqASGjkHmAkA+Igeug root@np0005604790.novalocal
Feb  2 04:00:50 np0005604790 cloud-init[921]: The key's randomart image is:
Feb  2 04:00:50 np0005604790 cloud-init[921]: +---[ECDSA 256]---+
Feb  2 04:00:50 np0005604790 cloud-init[921]: |O.               |
Feb  2 04:00:50 np0005604790 cloud-init[921]: |O+ o             |
Feb  2 04:00:50 np0005604790 cloud-init[921]: |++= o            |
Feb  2 04:00:50 np0005604790 cloud-init[921]: |B* =             |
Feb  2 04:00:50 np0005604790 cloud-init[921]: |+E+ * o S        |
Feb  2 04:00:50 np0005604790 cloud-init[921]: |Bo.O = o         |
Feb  2 04:00:50 np0005604790 cloud-init[921]: |Bo= * o .        |
Feb  2 04:00:50 np0005604790 cloud-init[921]: |+o.o o .         |
Feb  2 04:00:50 np0005604790 cloud-init[921]: |o++.  .          |
Feb  2 04:00:50 np0005604790 cloud-init[921]: +----[SHA256]-----+
Feb  2 04:00:50 np0005604790 cloud-init[921]: Generating public/private ed25519 key pair.
Feb  2 04:00:50 np0005604790 cloud-init[921]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Feb  2 04:00:50 np0005604790 cloud-init[921]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Feb  2 04:00:50 np0005604790 cloud-init[921]: The key fingerprint is:
Feb  2 04:00:50 np0005604790 cloud-init[921]: SHA256:cCMLXnuNHwCwifyjAP5qcskKixhRLAS6nPFCIQA0Iug root@np0005604790.novalocal
Feb  2 04:00:50 np0005604790 cloud-init[921]: The key's randomart image is:
Feb  2 04:00:50 np0005604790 cloud-init[921]: +--[ED25519 256]--+
Feb  2 04:00:50 np0005604790 cloud-init[921]: |&=  ...          |
Feb  2 04:00:50 np0005604790 cloud-init[921]: |Boo. o .         |
Feb  2 04:00:50 np0005604790 cloud-init[921]: |=o= + + +        |
Feb  2 04:00:50 np0005604790 cloud-init[921]: |=E+o o * =       |
Feb  2 04:00:50 np0005604790 cloud-init[921]: |+= .+ o S o      |
Feb  2 04:00:50 np0005604790 cloud-init[921]: | oo. . . . .     |
Feb  2 04:00:50 np0005604790 cloud-init[921]: |o..o      .      |
Feb  2 04:00:50 np0005604790 cloud-init[921]: |=+=              |
Feb  2 04:00:50 np0005604790 cloud-init[921]: |B+               |
Feb  2 04:00:50 np0005604790 cloud-init[921]: +----[SHA256]-----+
Feb  2 04:00:50 np0005604790 systemd[1]: Finished Cloud-init: Network Stage.
Feb  2 04:00:50 np0005604790 systemd[1]: Reached target Cloud-config availability.
Feb  2 04:00:50 np0005604790 systemd[1]: Reached target Network is Online.
Feb  2 04:00:50 np0005604790 systemd[1]: Starting Cloud-init: Config Stage...
Feb  2 04:00:50 np0005604790 systemd[1]: Starting Crash recovery kernel arming...
Feb  2 04:00:50 np0005604790 systemd[1]: Starting Notify NFS peers of a restart...
Feb  2 04:00:50 np0005604790 systemd[1]: Starting System Logging Service...
Feb  2 04:00:50 np0005604790 sm-notify[1004]: Version 2.5.4 starting
Feb  2 04:00:50 np0005604790 systemd[1]: Starting OpenSSH server daemon...
Feb  2 04:00:50 np0005604790 systemd[1]: Starting Permit User Sessions...
Feb  2 04:00:50 np0005604790 systemd[1]: Started Notify NFS peers of a restart.
Feb  2 04:00:50 np0005604790 systemd[1]: Started OpenSSH server daemon.
Feb  2 04:00:50 np0005604790 systemd[1]: Finished Permit User Sessions.
Feb  2 04:00:50 np0005604790 systemd[1]: Started Command Scheduler.
Feb  2 04:00:50 np0005604790 systemd[1]: Started Getty on tty1.
Feb  2 04:00:50 np0005604790 systemd[1]: Started Serial Getty on ttyS0.
Feb  2 04:00:50 np0005604790 systemd[1]: Reached target Login Prompts.
Feb  2 04:00:50 np0005604790 rsyslogd[1005]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1005" x-info="https://www.rsyslog.com"] start
Feb  2 04:00:50 np0005604790 systemd[1]: Started System Logging Service.
Feb  2 04:00:50 np0005604790 rsyslogd[1005]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Feb  2 04:00:50 np0005604790 systemd[1]: Reached target Multi-User System.
Feb  2 04:00:50 np0005604790 systemd[1]: Starting Record Runlevel Change in UTMP...
Feb  2 04:00:50 np0005604790 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Feb  2 04:00:50 np0005604790 systemd[1]: Finished Record Runlevel Change in UTMP.
Feb  2 04:00:50 np0005604790 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb  2 04:00:50 np0005604790 kdumpctl[1018]: kdump: No kdump initial ramdisk found.
Feb  2 04:00:50 np0005604790 kdumpctl[1018]: kdump: Rebuilding /boot/initramfs-5.14.0-665.el9.x86_64kdump.img
Feb  2 04:00:50 np0005604790 cloud-init[1132]: Cloud-init v. 24.4-8.el9 running 'modules:config' at Mon, 02 Feb 2026 09:00:50 +0000. Up 10.07 seconds.
Feb  2 04:00:50 np0005604790 systemd[1]: Finished Cloud-init: Config Stage.
Feb  2 04:00:50 np0005604790 systemd[1]: Starting Cloud-init: Final Stage...
Feb  2 04:00:50 np0005604790 dracut[1265]: dracut-057-102.git20250818.el9
Feb  2 04:00:51 np0005604790 dracut[1267]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/822f14ea-6e7e-41df-b0d8-fbe282d9ded8 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-665.el9.x86_64kdump.img 5.14.0-665.el9.x86_64
Feb  2 04:00:51 np0005604790 cloud-init[1342]: Cloud-init v. 24.4-8.el9 running 'modules:final' at Mon, 02 Feb 2026 09:00:51 +0000. Up 10.46 seconds.
Feb  2 04:00:51 np0005604790 cloud-init[1351]: #############################################################
Feb  2 04:00:51 np0005604790 cloud-init[1352]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Feb  2 04:00:51 np0005604790 cloud-init[1355]: 256 SHA256:omSMrbvUX6DkbYiDUcfIzqg62PqASGjkHmAkA+Igeug root@np0005604790.novalocal (ECDSA)
Feb  2 04:00:51 np0005604790 cloud-init[1359]: 256 SHA256:cCMLXnuNHwCwifyjAP5qcskKixhRLAS6nPFCIQA0Iug root@np0005604790.novalocal (ED25519)
Feb  2 04:00:51 np0005604790 cloud-init[1365]: 3072 SHA256:pe0wca6M1MCvTd5biJ5mLMvoQbSfvZRYmJgb3DwJ8XM root@np0005604790.novalocal (RSA)
Feb  2 04:00:51 np0005604790 cloud-init[1366]: -----END SSH HOST KEY FINGERPRINTS-----
Feb  2 04:00:51 np0005604790 cloud-init[1367]: #############################################################
Feb  2 04:00:51 np0005604790 cloud-init[1342]: Cloud-init v. 24.4-8.el9 finished at Mon, 02 Feb 2026 09:00:51 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 10.63 seconds
Feb  2 04:00:51 np0005604790 systemd[1]: Finished Cloud-init: Final Stage.
Feb  2 04:00:51 np0005604790 systemd[1]: Reached target Cloud-init target.
Feb  2 04:00:51 np0005604790 dracut[1267]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Feb  2 04:00:51 np0005604790 dracut[1267]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Feb  2 04:00:51 np0005604790 dracut[1267]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Feb  2 04:00:51 np0005604790 dracut[1267]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Feb  2 04:00:51 np0005604790 dracut[1267]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Feb  2 04:00:51 np0005604790 dracut[1267]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Feb  2 04:00:51 np0005604790 dracut[1267]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Feb  2 04:00:51 np0005604790 dracut[1267]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Feb  2 04:00:51 np0005604790 dracut[1267]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Feb  2 04:00:51 np0005604790 dracut[1267]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Feb  2 04:00:51 np0005604790 dracut[1267]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Feb  2 04:00:51 np0005604790 dracut[1267]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Feb  2 04:00:51 np0005604790 dracut[1267]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Feb  2 04:00:51 np0005604790 dracut[1267]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Feb  2 04:00:51 np0005604790 dracut[1267]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Feb  2 04:00:51 np0005604790 dracut[1267]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Feb  2 04:00:51 np0005604790 dracut[1267]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Feb  2 04:00:51 np0005604790 dracut[1267]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Feb  2 04:00:51 np0005604790 dracut[1267]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Feb  2 04:00:51 np0005604790 dracut[1267]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Feb  2 04:00:51 np0005604790 dracut[1267]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Feb  2 04:00:51 np0005604790 dracut[1267]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Feb  2 04:00:51 np0005604790 dracut[1267]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Feb  2 04:00:51 np0005604790 dracut[1267]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Feb  2 04:00:51 np0005604790 dracut[1267]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Feb  2 04:00:51 np0005604790 dracut[1267]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Feb  2 04:00:51 np0005604790 dracut[1267]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Feb  2 04:00:51 np0005604790 dracut[1267]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Feb  2 04:00:51 np0005604790 dracut[1267]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Feb  2 04:00:52 np0005604790 dracut[1267]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Feb  2 04:00:52 np0005604790 dracut[1267]: memstrack is not available
Feb  2 04:00:52 np0005604790 dracut[1267]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Feb  2 04:00:52 np0005604790 dracut[1267]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Feb  2 04:00:52 np0005604790 dracut[1267]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Feb  2 04:00:52 np0005604790 dracut[1267]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Feb  2 04:00:52 np0005604790 dracut[1267]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Feb  2 04:00:52 np0005604790 dracut[1267]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Feb  2 04:00:52 np0005604790 dracut[1267]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Feb  2 04:00:52 np0005604790 dracut[1267]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Feb  2 04:00:52 np0005604790 dracut[1267]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Feb  2 04:00:52 np0005604790 dracut[1267]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Feb  2 04:00:52 np0005604790 dracut[1267]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Feb  2 04:00:52 np0005604790 dracut[1267]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Feb  2 04:00:52 np0005604790 dracut[1267]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Feb  2 04:00:52 np0005604790 dracut[1267]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Feb  2 04:00:52 np0005604790 dracut[1267]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Feb  2 04:00:52 np0005604790 dracut[1267]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Feb  2 04:00:52 np0005604790 dracut[1267]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Feb  2 04:00:52 np0005604790 dracut[1267]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Feb  2 04:00:52 np0005604790 dracut[1267]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Feb  2 04:00:52 np0005604790 dracut[1267]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Feb  2 04:00:52 np0005604790 dracut[1267]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Feb  2 04:00:52 np0005604790 dracut[1267]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Feb  2 04:00:52 np0005604790 dracut[1267]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Feb  2 04:00:52 np0005604790 dracut[1267]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Feb  2 04:00:52 np0005604790 dracut[1267]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Feb  2 04:00:52 np0005604790 dracut[1267]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Feb  2 04:00:52 np0005604790 dracut[1267]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Feb  2 04:00:52 np0005604790 dracut[1267]: memstrack is not available
Feb  2 04:00:52 np0005604790 dracut[1267]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Feb  2 04:00:52 np0005604790 dracut[1267]: *** Including module: systemd ***
Feb  2 04:00:52 np0005604790 dracut[1267]: *** Including module: fips ***
Feb  2 04:00:52 np0005604790 dracut[1267]: *** Including module: systemd-initrd ***
Feb  2 04:00:52 np0005604790 dracut[1267]: *** Including module: i18n ***
Feb  2 04:00:52 np0005604790 dracut[1267]: *** Including module: drm ***
Feb  2 04:00:53 np0005604790 dracut[1267]: *** Including module: prefixdevname ***
Feb  2 04:00:53 np0005604790 dracut[1267]: *** Including module: kernel-modules ***
Feb  2 04:00:53 np0005604790 kernel: block vda: the capability attribute has been deprecated.
Feb  2 04:00:53 np0005604790 chronyd[805]: Selected source 149.56.19.163 (2.centos.pool.ntp.org)
Feb  2 04:00:53 np0005604790 chronyd[805]: System clock TAI offset set to 37 seconds
Feb  2 04:00:53 np0005604790 dracut[1267]: *** Including module: kernel-modules-extra ***
Feb  2 04:00:53 np0005604790 dracut[1267]: *** Including module: qemu ***
Feb  2 04:00:53 np0005604790 dracut[1267]: *** Including module: fstab-sys ***
Feb  2 04:00:53 np0005604790 dracut[1267]: *** Including module: rootfs-block ***
Feb  2 04:00:53 np0005604790 dracut[1267]: *** Including module: terminfo ***
Feb  2 04:00:53 np0005604790 dracut[1267]: *** Including module: udev-rules ***
Feb  2 04:00:54 np0005604790 dracut[1267]: Skipping udev rule: 91-permissions.rules
Feb  2 04:00:54 np0005604790 dracut[1267]: Skipping udev rule: 80-drivers-modprobe.rules
Feb  2 04:00:54 np0005604790 dracut[1267]: *** Including module: virtiofs ***
Feb  2 04:00:54 np0005604790 dracut[1267]: *** Including module: dracut-systemd ***
Feb  2 04:00:54 np0005604790 dracut[1267]: *** Including module: usrmount ***
Feb  2 04:00:54 np0005604790 dracut[1267]: *** Including module: base ***
Feb  2 04:00:54 np0005604790 dracut[1267]: *** Including module: fs-lib ***
Feb  2 04:00:54 np0005604790 dracut[1267]: *** Including module: kdumpbase ***
Feb  2 04:00:54 np0005604790 dracut[1267]: *** Including module: microcode_ctl-fw_dir_override ***
Feb  2 04:00:54 np0005604790 dracut[1267]:  microcode_ctl module: mangling fw_dir
Feb  2 04:00:54 np0005604790 dracut[1267]:    microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Feb  2 04:00:55 np0005604790 dracut[1267]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Feb  2 04:00:55 np0005604790 dracut[1267]:    microcode_ctl: configuration "intel" is ignored
Feb  2 04:00:55 np0005604790 dracut[1267]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Feb  2 04:00:55 np0005604790 dracut[1267]:    microcode_ctl: configuration "intel-06-2d-07" is ignored
Feb  2 04:00:55 np0005604790 dracut[1267]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Feb  2 04:00:55 np0005604790 dracut[1267]:    microcode_ctl: configuration "intel-06-4e-03" is ignored
Feb  2 04:00:55 np0005604790 dracut[1267]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Feb  2 04:00:55 np0005604790 dracut[1267]:    microcode_ctl: configuration "intel-06-4f-01" is ignored
Feb  2 04:00:55 np0005604790 dracut[1267]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Feb  2 04:00:55 np0005604790 dracut[1267]:    microcode_ctl: configuration "intel-06-55-04" is ignored
Feb  2 04:00:55 np0005604790 dracut[1267]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Feb  2 04:00:55 np0005604790 dracut[1267]:    microcode_ctl: configuration "intel-06-5e-03" is ignored
Feb  2 04:00:55 np0005604790 dracut[1267]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Feb  2 04:00:55 np0005604790 dracut[1267]:    microcode_ctl: configuration "intel-06-8c-01" is ignored
Feb  2 04:00:55 np0005604790 dracut[1267]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Feb  2 04:00:55 np0005604790 dracut[1267]:    microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Feb  2 04:00:55 np0005604790 dracut[1267]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Feb  2 04:00:55 np0005604790 dracut[1267]:    microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Feb  2 04:00:55 np0005604790 dracut[1267]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Feb  2 04:00:55 np0005604790 dracut[1267]:    microcode_ctl: configuration "intel-06-8f-08" is ignored
Feb  2 04:00:55 np0005604790 dracut[1267]:    microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Feb  2 04:00:55 np0005604790 dracut[1267]: *** Including module: openssl ***
Feb  2 04:00:55 np0005604790 dracut[1267]: *** Including module: shutdown ***
Feb  2 04:00:55 np0005604790 dracut[1267]: *** Including module: squash ***
Feb  2 04:00:55 np0005604790 dracut[1267]: *** Including modules done ***
Feb  2 04:00:55 np0005604790 dracut[1267]: *** Installing kernel module dependencies ***
Feb  2 04:00:56 np0005604790 dracut[1267]: *** Installing kernel module dependencies done ***
Feb  2 04:00:56 np0005604790 dracut[1267]: *** Resolving executable dependencies ***
Feb  2 04:00:56 np0005604790 irqbalance[788]: Cannot change IRQ 25 affinity: Operation not permitted
Feb  2 04:00:56 np0005604790 irqbalance[788]: IRQ 25 affinity is now unmanaged
Feb  2 04:00:56 np0005604790 irqbalance[788]: Cannot change IRQ 31 affinity: Operation not permitted
Feb  2 04:00:56 np0005604790 irqbalance[788]: IRQ 31 affinity is now unmanaged
Feb  2 04:00:56 np0005604790 irqbalance[788]: Cannot change IRQ 28 affinity: Operation not permitted
Feb  2 04:00:56 np0005604790 irqbalance[788]: IRQ 28 affinity is now unmanaged
Feb  2 04:00:56 np0005604790 irqbalance[788]: Cannot change IRQ 32 affinity: Operation not permitted
Feb  2 04:00:56 np0005604790 irqbalance[788]: IRQ 32 affinity is now unmanaged
Feb  2 04:00:56 np0005604790 irqbalance[788]: Cannot change IRQ 30 affinity: Operation not permitted
Feb  2 04:00:56 np0005604790 irqbalance[788]: IRQ 30 affinity is now unmanaged
Feb  2 04:00:56 np0005604790 irqbalance[788]: Cannot change IRQ 29 affinity: Operation not permitted
Feb  2 04:00:56 np0005604790 irqbalance[788]: IRQ 29 affinity is now unmanaged
Feb  2 04:00:57 np0005604790 dracut[1267]: *** Resolving executable dependencies done ***
Feb  2 04:00:57 np0005604790 dracut[1267]: *** Generating early-microcode cpio image ***
Feb  2 04:00:57 np0005604790 dracut[1267]: *** Store current command line parameters ***
Feb  2 04:00:57 np0005604790 dracut[1267]: Stored kernel commandline:
Feb  2 04:00:57 np0005604790 dracut[1267]: No dracut internal kernel commandline stored in the initramfs
Feb  2 04:00:57 np0005604790 dracut[1267]: *** Install squash loader ***
Feb  2 04:00:58 np0005604790 dracut[1267]: *** Squashing the files inside the initramfs ***
Feb  2 04:00:58 np0005604790 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Feb  2 04:00:59 np0005604790 dracut[1267]: *** Squashing the files inside the initramfs done ***
Feb  2 04:00:59 np0005604790 dracut[1267]: *** Creating image file '/boot/initramfs-5.14.0-665.el9.x86_64kdump.img' ***
Feb  2 04:00:59 np0005604790 dracut[1267]: *** Hardlinking files ***
Feb  2 04:00:59 np0005604790 dracut[1267]: *** Hardlinking files done ***
Feb  2 04:00:59 np0005604790 dracut[1267]: *** Creating initramfs image file '/boot/initramfs-5.14.0-665.el9.x86_64kdump.img' done ***
Feb  2 04:01:00 np0005604790 kdumpctl[1018]: kdump: kexec: loaded kdump kernel
Feb  2 04:01:00 np0005604790 kdumpctl[1018]: kdump: Starting kdump: [OK]
Feb  2 04:01:00 np0005604790 systemd[1]: Finished Crash recovery kernel arming.
Feb  2 04:01:00 np0005604790 systemd[1]: Startup finished in 1.195s (kernel) + 2.504s (initrd) + 15.969s (userspace) = 19.669s.
Feb  2 04:01:06 np0005604790 irqbalance[788]: Cannot change IRQ 26 affinity: Operation not permitted
Feb  2 04:01:06 np0005604790 irqbalance[788]: IRQ 26 affinity is now unmanaged
Feb  2 04:01:14 np0005604790 systemd[1]: Created slice User Slice of UID 1000.
Feb  2 04:01:14 np0005604790 systemd[1]: Starting User Runtime Directory /run/user/1000...
Feb  2 04:01:14 np0005604790 systemd-logind[793]: New session 1 of user zuul.
Feb  2 04:01:14 np0005604790 systemd[1]: Finished User Runtime Directory /run/user/1000.
Feb  2 04:01:14 np0005604790 systemd[1]: Starting User Manager for UID 1000...
Feb  2 04:01:14 np0005604790 systemd[4320]: Queued start job for default target Main User Target.
Feb  2 04:01:14 np0005604790 systemd[4320]: Created slice User Application Slice.
Feb  2 04:01:14 np0005604790 systemd[4320]: Started Mark boot as successful after the user session has run 2 minutes.
Feb  2 04:01:14 np0005604790 systemd[4320]: Started Daily Cleanup of User's Temporary Directories.
Feb  2 04:01:14 np0005604790 systemd[4320]: Reached target Paths.
Feb  2 04:01:14 np0005604790 systemd[4320]: Reached target Timers.
Feb  2 04:01:14 np0005604790 systemd[4320]: Starting D-Bus User Message Bus Socket...
Feb  2 04:01:14 np0005604790 systemd[4320]: Starting Create User's Volatile Files and Directories...
Feb  2 04:01:14 np0005604790 systemd[4320]: Listening on D-Bus User Message Bus Socket.
Feb  2 04:01:14 np0005604790 systemd[4320]: Reached target Sockets.
Feb  2 04:01:14 np0005604790 systemd[4320]: Finished Create User's Volatile Files and Directories.
Feb  2 04:01:14 np0005604790 systemd[4320]: Reached target Basic System.
Feb  2 04:01:14 np0005604790 systemd[4320]: Reached target Main User Target.
Feb  2 04:01:14 np0005604790 systemd[4320]: Startup finished in 169ms.
Feb  2 04:01:14 np0005604790 systemd[1]: Started User Manager for UID 1000.
Feb  2 04:01:14 np0005604790 systemd[1]: Started Session 1 of User zuul.
Feb  2 04:01:15 np0005604790 python3[4402]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 04:01:17 np0005604790 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Feb  2 04:01:17 np0005604790 python3[4432]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 04:01:26 np0005604790 python3[4490]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 04:01:27 np0005604790 python3[4530]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Feb  2 04:01:29 np0005604790 python3[4556]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDDres8I0e2lx2XlkDi/o8mbn7A8kJLvscauEMeSccA/Q28EgVAaHKAaMzB7MTuExuZhV2hKdHCjChvbo+ZEItJb42XILxS2oD7nNZFvVgzBQniv52jPQzNymZKv6xSxlAe2fhEntL1UKK7rrlHSbTvpCdGBhDUQsTkZLTXEabEEU2AUKrMcF1w86Dag94m2LcmlUNBhMgEGG2gCAwR3LArhvliT36AiA+uCD9ZLWOYPkktaBOoVTE2SXaHLM/QcLtQ9fjx6HlaVH0Yhtj7rqVbzUqi90TmhLPQuW8eD8LtDzn9vdNraZXTqHagLV5n5OxOivwbk4MGal3/4FVMfbvwmkxfPWWHnq9CpCjdr2/8NZkLs7rZjZtRj+oszTemHh2fSvs0qv1+QN2N9Fo3lRt/o3COnsw0ktNu6Xln+nqj4Bt/yqB5VmDCXaqp2DHhGlCM3XpR2F7xlpNITVJVPl9bGLc9YHytFHIM9fCjt1aMlyP028PhHIHlcB7LcSSd5QM= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  2 04:01:29 np0005604790 python3[4580]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:01:30 np0005604790 python3[4679]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  2 04:01:30 np0005604790 python3[4750]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1770022889.8003938-251-121929532657346/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=b133c3a79151467e8c6849ab0367df01_id_rsa follow=False checksum=97092328ac9cd34b53b1d81cf7562eb94a095d6b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:01:31 np0005604790 python3[4873]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  2 04:01:31 np0005604790 python3[4944]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1770022890.7602355-306-56947430613619/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=b133c3a79151467e8c6849ab0367df01_id_rsa.pub follow=False checksum=79261b1251eaaf0ed818421d3062a6de11fbecf0 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:01:32 np0005604790 python3[4992]: ansible-ping Invoked with data=pong
Feb  2 04:01:33 np0005604790 python3[5016]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 04:01:36 np0005604790 python3[5074]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Feb  2 04:01:37 np0005604790 python3[5106]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:01:37 np0005604790 python3[5130]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:01:37 np0005604790 python3[5154]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:01:38 np0005604790 python3[5178]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:01:38 np0005604790 python3[5202]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:01:38 np0005604790 python3[5226]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:01:40 np0005604790 python3[5252]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:01:40 np0005604790 python3[5330]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  2 04:01:41 np0005604790 python3[5403]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1770022900.4856896-31-96878228293336/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:01:42 np0005604790 python3[5451]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  2 04:01:42 np0005604790 python3[5475]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  2 04:01:42 np0005604790 python3[5499]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  2 04:01:42 np0005604790 python3[5523]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  2 04:01:43 np0005604790 python3[5547]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  2 04:01:43 np0005604790 python3[5571]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  2 04:01:43 np0005604790 python3[5595]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  2 04:01:43 np0005604790 python3[5619]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  2 04:01:44 np0005604790 python3[5643]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  2 04:01:44 np0005604790 python3[5667]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  2 04:01:44 np0005604790 python3[5691]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  2 04:01:44 np0005604790 python3[5715]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  2 04:01:45 np0005604790 python3[5739]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  2 04:01:45 np0005604790 python3[5763]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  2 04:01:45 np0005604790 python3[5787]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  2 04:01:46 np0005604790 python3[5811]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  2 04:01:46 np0005604790 python3[5835]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  2 04:01:46 np0005604790 python3[5859]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  2 04:01:46 np0005604790 python3[5883]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  2 04:01:47 np0005604790 python3[5907]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  2 04:01:47 np0005604790 python3[5931]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  2 04:01:47 np0005604790 python3[5955]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  2 04:01:47 np0005604790 python3[5979]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  2 04:01:48 np0005604790 python3[6003]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  2 04:01:48 np0005604790 python3[6027]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  2 04:01:48 np0005604790 python3[6051]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  2 04:01:51 np0005604790 python3[6077]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Feb  2 04:01:51 np0005604790 systemd[1]: Starting Time & Date Service...
Feb  2 04:01:51 np0005604790 systemd[1]: Started Time & Date Service.
Feb  2 04:01:51 np0005604790 systemd-timedated[6079]: Changed time zone to 'UTC' (UTC).
Feb  2 04:01:51 np0005604790 python3[6108]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:01:52 np0005604790 python3[6184]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  2 04:01:52 np0005604790 python3[6255]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1770022912.0792124-251-239776593503907/source _original_basename=tmpyfdpascx follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:01:53 np0005604790 python3[6355]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  2 04:01:53 np0005604790 python3[6426]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1770022912.885225-301-187875038334587/source _original_basename=tmpez45k0ty follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:01:54 np0005604790 python3[6528]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  2 04:01:54 np0005604790 python3[6601]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1770022914.0156848-381-270031430176009/source _original_basename=tmp8zsnm8x8 follow=False checksum=d3787dbc1d919dd7098cc7939d07e9b9a9d1522d backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:01:55 np0005604790 python3[6649]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:01:55 np0005604790 python3[6675]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:01:55 np0005604790 python3[6755]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  2 04:01:56 np0005604790 python3[6828]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1770022915.5784588-451-142827532641358/source _original_basename=tmpr923bqr2 follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:01:56 np0005604790 python3[6879]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163ec2-ffbe-385c-c5be-00000000001f-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:01:57 np0005604790 python3[6907]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163ec2-ffbe-385c-c5be-000000000020-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Feb  2 04:01:58 np0005604790 python3[6935]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:02:17 np0005604790 python3[6962]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:02:21 np0005604790 systemd[1]: systemd-timedated.service: Deactivated successfully.
Feb  2 04:02:57 np0005604790 kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Feb  2 04:02:57 np0005604790 kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Feb  2 04:02:57 np0005604790 kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Feb  2 04:02:57 np0005604790 kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Feb  2 04:02:57 np0005604790 kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Feb  2 04:02:57 np0005604790 kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Feb  2 04:02:57 np0005604790 kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Feb  2 04:02:57 np0005604790 kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Feb  2 04:02:57 np0005604790 kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Feb  2 04:02:57 np0005604790 kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Feb  2 04:02:57 np0005604790 NetworkManager[857]: <info>  [1770022977.7368] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Feb  2 04:02:57 np0005604790 systemd-udevd[6965]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 04:02:57 np0005604790 NetworkManager[857]: <info>  [1770022977.7719] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb  2 04:02:57 np0005604790 NetworkManager[857]: <info>  [1770022977.7762] settings: (eth1): created default wired connection 'Wired connection 1'
Feb  2 04:02:57 np0005604790 NetworkManager[857]: <info>  [1770022977.7767] device (eth1): carrier: link connected
Feb  2 04:02:57 np0005604790 NetworkManager[857]: <info>  [1770022977.7770] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Feb  2 04:02:57 np0005604790 NetworkManager[857]: <info>  [1770022977.7778] policy: auto-activating connection 'Wired connection 1' (7223f535-5f63-3095-bc33-c3417f1eebd4)
Feb  2 04:02:57 np0005604790 NetworkManager[857]: <info>  [1770022977.7783] device (eth1): Activation: starting connection 'Wired connection 1' (7223f535-5f63-3095-bc33-c3417f1eebd4)
Feb  2 04:02:57 np0005604790 NetworkManager[857]: <info>  [1770022977.7785] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb  2 04:02:57 np0005604790 NetworkManager[857]: <info>  [1770022977.7789] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Feb  2 04:02:57 np0005604790 NetworkManager[857]: <info>  [1770022977.7795] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb  2 04:02:57 np0005604790 NetworkManager[857]: <info>  [1770022977.7800] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Feb  2 04:02:58 np0005604790 python3[6992]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163ec2-ffbe-26e4-4fa2-000000000128-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:03:08 np0005604790 python3[7072]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  2 04:03:08 np0005604790 python3[7145]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1770022988.1670833-104-194039166377788/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=5f648ca94637025cdc122ee5c24b92611ec4e7e4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:03:09 np0005604790 python3[7195]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb  2 04:03:09 np0005604790 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Feb  2 04:03:09 np0005604790 systemd[1]: Stopped Network Manager Wait Online.
Feb  2 04:03:09 np0005604790 systemd[1]: Stopping Network Manager Wait Online...
Feb  2 04:03:09 np0005604790 systemd[1]: Stopping Network Manager...
Feb  2 04:03:09 np0005604790 NetworkManager[857]: <info>  [1770022989.6541] caught SIGTERM, shutting down normally.
Feb  2 04:03:09 np0005604790 NetworkManager[857]: <info>  [1770022989.6552] dhcp4 (eth0): canceled DHCP transaction
Feb  2 04:03:09 np0005604790 NetworkManager[857]: <info>  [1770022989.6552] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Feb  2 04:03:09 np0005604790 NetworkManager[857]: <info>  [1770022989.6553] dhcp4 (eth0): state changed no lease
Feb  2 04:03:09 np0005604790 NetworkManager[857]: <info>  [1770022989.6558] manager: NetworkManager state is now CONNECTING
Feb  2 04:03:09 np0005604790 NetworkManager[857]: <info>  [1770022989.6625] dhcp4 (eth1): canceled DHCP transaction
Feb  2 04:03:09 np0005604790 NetworkManager[857]: <info>  [1770022989.6626] dhcp4 (eth1): state changed no lease
Feb  2 04:03:09 np0005604790 NetworkManager[857]: <info>  [1770022989.6671] exiting (success)
Feb  2 04:03:09 np0005604790 systemd[1]: Starting Network Manager Script Dispatcher Service...
Feb  2 04:03:09 np0005604790 systemd[1]: Started Network Manager Script Dispatcher Service.
Feb  2 04:03:09 np0005604790 systemd[1]: NetworkManager.service: Deactivated successfully.
Feb  2 04:03:09 np0005604790 systemd[1]: Stopped Network Manager.
Feb  2 04:03:09 np0005604790 systemd[1]: NetworkManager.service: Consumed 1.305s CPU time, 9.9M memory peak.
Feb  2 04:03:09 np0005604790 systemd[1]: Starting Network Manager...
Feb  2 04:03:09 np0005604790 NetworkManager[7203]: <info>  [1770022989.7129] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:1dacd1c7-39aa-475f-946e-ec901b3ee402)
Feb  2 04:03:09 np0005604790 NetworkManager[7203]: <info>  [1770022989.7132] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Feb  2 04:03:09 np0005604790 NetworkManager[7203]: <info>  [1770022989.7178] manager[0x561942547000]: monitoring kernel firmware directory '/lib/firmware'.
Feb  2 04:03:09 np0005604790 systemd[1]: Starting Hostname Service...
Feb  2 04:03:09 np0005604790 systemd[1]: Started Hostname Service.
Feb  2 04:03:09 np0005604790 NetworkManager[7203]: <info>  [1770022989.7877] hostname: hostname: using hostnamed
Feb  2 04:03:09 np0005604790 NetworkManager[7203]: <info>  [1770022989.7878] hostname: static hostname changed from (none) to "np0005604790.novalocal"
Feb  2 04:03:09 np0005604790 NetworkManager[7203]: <info>  [1770022989.7883] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Feb  2 04:03:09 np0005604790 NetworkManager[7203]: <info>  [1770022989.7889] manager[0x561942547000]: rfkill: Wi-Fi hardware radio set enabled
Feb  2 04:03:09 np0005604790 NetworkManager[7203]: <info>  [1770022989.7890] manager[0x561942547000]: rfkill: WWAN hardware radio set enabled
Feb  2 04:03:09 np0005604790 NetworkManager[7203]: <info>  [1770022989.7931] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Feb  2 04:03:09 np0005604790 NetworkManager[7203]: <info>  [1770022989.7932] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Feb  2 04:03:09 np0005604790 NetworkManager[7203]: <info>  [1770022989.7933] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Feb  2 04:03:09 np0005604790 NetworkManager[7203]: <info>  [1770022989.7933] manager: Networking is enabled by state file
Feb  2 04:03:09 np0005604790 NetworkManager[7203]: <info>  [1770022989.7938] settings: Loaded settings plugin: keyfile (internal)
Feb  2 04:03:09 np0005604790 NetworkManager[7203]: <info>  [1770022989.7943] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Feb  2 04:03:09 np0005604790 NetworkManager[7203]: <info>  [1770022989.7987] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Feb  2 04:03:09 np0005604790 NetworkManager[7203]: <info>  [1770022989.8002] dhcp: init: Using DHCP client 'internal'
Feb  2 04:03:09 np0005604790 NetworkManager[7203]: <info>  [1770022989.8007] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Feb  2 04:03:09 np0005604790 NetworkManager[7203]: <info>  [1770022989.8016] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb  2 04:03:09 np0005604790 NetworkManager[7203]: <info>  [1770022989.8025] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Feb  2 04:03:09 np0005604790 NetworkManager[7203]: <info>  [1770022989.8040] device (lo): Activation: starting connection 'lo' (3fd945dd-23ac-4177-bd37-9d87c9c02d55)
Feb  2 04:03:09 np0005604790 NetworkManager[7203]: <info>  [1770022989.8051] device (eth0): carrier: link connected
Feb  2 04:03:09 np0005604790 NetworkManager[7203]: <info>  [1770022989.8058] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Feb  2 04:03:09 np0005604790 NetworkManager[7203]: <info>  [1770022989.8066] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Feb  2 04:03:09 np0005604790 NetworkManager[7203]: <info>  [1770022989.8067] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Feb  2 04:03:09 np0005604790 NetworkManager[7203]: <info>  [1770022989.8079] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Feb  2 04:03:09 np0005604790 NetworkManager[7203]: <info>  [1770022989.8090] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Feb  2 04:03:09 np0005604790 NetworkManager[7203]: <info>  [1770022989.8099] device (eth1): carrier: link connected
Feb  2 04:03:09 np0005604790 NetworkManager[7203]: <info>  [1770022989.8104] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Feb  2 04:03:09 np0005604790 NetworkManager[7203]: <info>  [1770022989.8114] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (7223f535-5f63-3095-bc33-c3417f1eebd4) (indicated)
Feb  2 04:03:09 np0005604790 NetworkManager[7203]: <info>  [1770022989.8114] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Feb  2 04:03:09 np0005604790 NetworkManager[7203]: <info>  [1770022989.8124] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Feb  2 04:03:09 np0005604790 NetworkManager[7203]: <info>  [1770022989.8137] device (eth1): Activation: starting connection 'Wired connection 1' (7223f535-5f63-3095-bc33-c3417f1eebd4)
Feb  2 04:03:09 np0005604790 systemd[1]: Started Network Manager.
Feb  2 04:03:09 np0005604790 NetworkManager[7203]: <info>  [1770022989.8146] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Feb  2 04:03:09 np0005604790 NetworkManager[7203]: <info>  [1770022989.8158] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Feb  2 04:03:09 np0005604790 NetworkManager[7203]: <info>  [1770022989.8160] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Feb  2 04:03:09 np0005604790 NetworkManager[7203]: <info>  [1770022989.8162] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Feb  2 04:03:09 np0005604790 NetworkManager[7203]: <info>  [1770022989.8164] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Feb  2 04:03:09 np0005604790 NetworkManager[7203]: <info>  [1770022989.8167] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Feb  2 04:03:09 np0005604790 NetworkManager[7203]: <info>  [1770022989.8171] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Feb  2 04:03:09 np0005604790 NetworkManager[7203]: <info>  [1770022989.8174] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Feb  2 04:03:09 np0005604790 NetworkManager[7203]: <info>  [1770022989.8178] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Feb  2 04:03:09 np0005604790 NetworkManager[7203]: <info>  [1770022989.8185] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Feb  2 04:03:09 np0005604790 NetworkManager[7203]: <info>  [1770022989.8189] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Feb  2 04:03:09 np0005604790 NetworkManager[7203]: <info>  [1770022989.8207] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Feb  2 04:03:09 np0005604790 NetworkManager[7203]: <info>  [1770022989.8210] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Feb  2 04:03:09 np0005604790 NetworkManager[7203]: <info>  [1770022989.8226] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Feb  2 04:03:09 np0005604790 NetworkManager[7203]: <info>  [1770022989.8231] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Feb  2 04:03:09 np0005604790 NetworkManager[7203]: <info>  [1770022989.8237] device (lo): Activation: successful, device activated.
Feb  2 04:03:09 np0005604790 NetworkManager[7203]: <info>  [1770022989.8244] dhcp4 (eth0): state changed new lease, address=38.102.83.144
Feb  2 04:03:09 np0005604790 NetworkManager[7203]: <info>  [1770022989.8254] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Feb  2 04:03:09 np0005604790 systemd[1]: Starting Network Manager Wait Online...
Feb  2 04:03:09 np0005604790 NetworkManager[7203]: <info>  [1770022989.8364] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Feb  2 04:03:09 np0005604790 NetworkManager[7203]: <info>  [1770022989.8403] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Feb  2 04:03:09 np0005604790 NetworkManager[7203]: <info>  [1770022989.8405] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Feb  2 04:03:09 np0005604790 NetworkManager[7203]: <info>  [1770022989.8410] manager: NetworkManager state is now CONNECTED_SITE
Feb  2 04:03:09 np0005604790 NetworkManager[7203]: <info>  [1770022989.8413] device (eth0): Activation: successful, device activated.
Feb  2 04:03:09 np0005604790 NetworkManager[7203]: <info>  [1770022989.8418] manager: NetworkManager state is now CONNECTED_GLOBAL
Feb  2 04:03:10 np0005604790 python3[7279]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163ec2-ffbe-26e4-4fa2-0000000000bd-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:03:19 np0005604790 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Feb  2 04:03:39 np0005604790 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Feb  2 04:03:54 np0005604790 NetworkManager[7203]: <info>  [1770023034.6578] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Feb  2 04:03:54 np0005604790 systemd[1]: Starting Network Manager Script Dispatcher Service...
Feb  2 04:03:54 np0005604790 systemd[1]: Started Network Manager Script Dispatcher Service.
Feb  2 04:03:54 np0005604790 NetworkManager[7203]: <info>  [1770023034.7065] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Feb  2 04:03:54 np0005604790 NetworkManager[7203]: <info>  [1770023034.7067] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Feb  2 04:03:54 np0005604790 NetworkManager[7203]: <info>  [1770023034.7071] device (eth1): Activation: successful, device activated.
Feb  2 04:03:54 np0005604790 NetworkManager[7203]: <info>  [1770023034.7076] manager: startup complete
Feb  2 04:03:54 np0005604790 NetworkManager[7203]: <info>  [1770023034.7077] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Feb  2 04:03:54 np0005604790 NetworkManager[7203]: <warn>  [1770023034.7080] device (eth1): Activation: failed for connection 'Wired connection 1'
Feb  2 04:03:54 np0005604790 NetworkManager[7203]: <info>  [1770023034.7085] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Feb  2 04:03:54 np0005604790 systemd[1]: Finished Network Manager Wait Online.
Feb  2 04:03:54 np0005604790 NetworkManager[7203]: <info>  [1770023034.7175] dhcp4 (eth1): canceled DHCP transaction
Feb  2 04:03:54 np0005604790 NetworkManager[7203]: <info>  [1770023034.7178] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Feb  2 04:03:54 np0005604790 NetworkManager[7203]: <info>  [1770023034.7178] dhcp4 (eth1): state changed no lease
Feb  2 04:03:54 np0005604790 NetworkManager[7203]: <info>  [1770023034.7190] policy: auto-activating connection 'ci-private-network' (75fa72a4-896a-5876-a9b3-438a144045af)
Feb  2 04:03:54 np0005604790 NetworkManager[7203]: <info>  [1770023034.7193] device (eth1): Activation: starting connection 'ci-private-network' (75fa72a4-896a-5876-a9b3-438a144045af)
Feb  2 04:03:54 np0005604790 NetworkManager[7203]: <info>  [1770023034.7194] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb  2 04:03:54 np0005604790 NetworkManager[7203]: <info>  [1770023034.7196] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Feb  2 04:03:54 np0005604790 NetworkManager[7203]: <info>  [1770023034.7201] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb  2 04:03:54 np0005604790 NetworkManager[7203]: <info>  [1770023034.7207] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb  2 04:03:54 np0005604790 NetworkManager[7203]: <info>  [1770023034.7330] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb  2 04:03:54 np0005604790 NetworkManager[7203]: <info>  [1770023034.7332] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb  2 04:03:54 np0005604790 NetworkManager[7203]: <info>  [1770023034.7337] device (eth1): Activation: successful, device activated.
Feb  2 04:04:04 np0005604790 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Feb  2 04:04:05 np0005604790 systemd[4320]: Starting Mark boot as successful...
Feb  2 04:04:05 np0005604790 systemd[4320]: Finished Mark boot as successful.
Feb  2 04:04:10 np0005604790 systemd-logind[793]: Session 1 logged out. Waiting for processes to exit.
Feb  2 04:05:21 np0005604790 systemd-logind[793]: New session 3 of user zuul.
Feb  2 04:05:21 np0005604790 systemd[1]: Started Session 3 of User zuul.
Feb  2 04:05:21 np0005604790 python3[7392]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  2 04:05:22 np0005604790 python3[7465]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770023121.4146893-373-38353859577945/source _original_basename=tmpqjs65g95 follow=False checksum=f14c371f1ecf34b9a35f6f9273fe37702180eaed backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:05:25 np0005604790 systemd[1]: session-3.scope: Deactivated successfully.
Feb  2 04:05:25 np0005604790 systemd-logind[793]: Session 3 logged out. Waiting for processes to exit.
Feb  2 04:05:25 np0005604790 systemd-logind[793]: Removed session 3.
Feb  2 04:07:05 np0005604790 systemd[4320]: Created slice User Background Tasks Slice.
Feb  2 04:07:05 np0005604790 systemd[4320]: Starting Cleanup of User's Temporary Files and Directories...
Feb  2 04:07:05 np0005604790 systemd[4320]: Finished Cleanup of User's Temporary Files and Directories.
Feb  2 04:13:15 np0005604790 systemd-logind[793]: New session 4 of user zuul.
Feb  2 04:13:15 np0005604790 systemd[1]: Started Session 4 of User zuul.
Feb  2 04:13:16 np0005604790 python3[7529]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163ec2-ffbe-f989-50d9-00000000217d-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:13:16 np0005604790 python3[7557]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:13:16 np0005604790 python3[7584]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:13:16 np0005604790 python3[7610]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:13:17 np0005604790 python3[7636]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:13:17 np0005604790 python3[7662]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:13:18 np0005604790 python3[7740]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  2 04:13:18 np0005604790 python3[7813]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770023598.1261601-546-114325090553792/source _original_basename=tmpvcqhy79a follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:13:19 np0005604790 python3[7863]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb  2 04:13:19 np0005604790 systemd[1]: Reloading.
Feb  2 04:13:19 np0005604790 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 04:13:21 np0005604790 python3[7919]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Feb  2 04:13:21 np0005604790 python3[7945]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:13:22 np0005604790 python3[7973]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:13:22 np0005604790 python3[8001]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:13:22 np0005604790 python3[8029]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:13:23 np0005604790 python3[8056]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163ec2-ffbe-f989-50d9-000000002184-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:13:23 np0005604790 python3[8086]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb  2 04:13:26 np0005604790 systemd-logind[793]: Session 4 logged out. Waiting for processes to exit.
Feb  2 04:13:26 np0005604790 systemd[1]: session-4.scope: Deactivated successfully.
Feb  2 04:13:26 np0005604790 systemd[1]: session-4.scope: Consumed 4.097s CPU time.
Feb  2 04:13:26 np0005604790 systemd-logind[793]: Removed session 4.
Feb  2 04:13:28 np0005604790 systemd-logind[793]: New session 5 of user zuul.
Feb  2 04:13:28 np0005604790 systemd[1]: Started Session 5 of User zuul.
Feb  2 04:13:28 np0005604790 python3[8119]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Feb  2 04:13:36 np0005604790 setsebool[8162]: The virt_use_nfs policy boolean was changed to 1 by root
Feb  2 04:13:36 np0005604790 setsebool[8162]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Feb  2 04:13:46 np0005604790 kernel: SELinux:  Converting 386 SID table entries...
Feb  2 04:13:46 np0005604790 kernel: SELinux:  policy capability network_peer_controls=1
Feb  2 04:13:46 np0005604790 kernel: SELinux:  policy capability open_perms=1
Feb  2 04:13:46 np0005604790 kernel: SELinux:  policy capability extended_socket_class=1
Feb  2 04:13:46 np0005604790 kernel: SELinux:  policy capability always_check_network=0
Feb  2 04:13:46 np0005604790 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb  2 04:13:46 np0005604790 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb  2 04:13:46 np0005604790 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb  2 04:13:55 np0005604790 kernel: SELinux:  Converting 389 SID table entries...
Feb  2 04:13:55 np0005604790 kernel: SELinux:  policy capability network_peer_controls=1
Feb  2 04:13:55 np0005604790 kernel: SELinux:  policy capability open_perms=1
Feb  2 04:13:55 np0005604790 kernel: SELinux:  policy capability extended_socket_class=1
Feb  2 04:13:55 np0005604790 kernel: SELinux:  policy capability always_check_network=0
Feb  2 04:13:55 np0005604790 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb  2 04:13:55 np0005604790 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb  2 04:13:55 np0005604790 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb  2 04:14:12 np0005604790 dbus-broker-launch[780]: avc:  op=load_policy lsm=selinux seqno=4 res=1
Feb  2 04:14:12 np0005604790 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb  2 04:14:12 np0005604790 systemd[1]: Starting man-db-cache-update.service...
Feb  2 04:14:12 np0005604790 systemd[1]: Reloading.
Feb  2 04:14:12 np0005604790 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 04:14:12 np0005604790 systemd[1]: Queuing reload/restart jobs for marked units…
Feb  2 04:14:16 np0005604790 python3[12062]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"#012 _uses_shell=True zuul_log_id=fa163ec2-ffbe-cc31-c4d5-00000000000c-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:14:17 np0005604790 kernel: evm: overlay not supported
Feb  2 04:14:17 np0005604790 systemd[4320]: Starting D-Bus User Message Bus...
Feb  2 04:14:17 np0005604790 dbus-broker-launch[13129]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Feb  2 04:14:17 np0005604790 dbus-broker-launch[13129]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Feb  2 04:14:17 np0005604790 systemd[4320]: Started D-Bus User Message Bus.
Feb  2 04:14:17 np0005604790 dbus-broker-launch[13129]: Ready
Feb  2 04:14:17 np0005604790 systemd[4320]: selinux: avc:  op=load_policy lsm=selinux seqno=4 res=1
Feb  2 04:14:17 np0005604790 systemd[4320]: Created slice Slice /user.
Feb  2 04:14:17 np0005604790 systemd[4320]: podman-13007.scope: unit configures an IP firewall, but not running as root.
Feb  2 04:14:17 np0005604790 systemd[4320]: (This warning is only shown for the first unit using IP firewalling.)
Feb  2 04:14:17 np0005604790 systemd[4320]: Started podman-13007.scope.
Feb  2 04:14:17 np0005604790 systemd[4320]: Started podman-pause-663ee478.scope.
Feb  2 04:14:18 np0005604790 systemd[1]: session-5.scope: Deactivated successfully.
Feb  2 04:14:18 np0005604790 systemd[1]: session-5.scope: Consumed 39.546s CPU time.
Feb  2 04:14:18 np0005604790 systemd-logind[793]: Session 5 logged out. Waiting for processes to exit.
Feb  2 04:14:18 np0005604790 systemd-logind[793]: Removed session 5.
Feb  2 04:14:26 np0005604790 irqbalance[788]: Cannot change IRQ 27 affinity: Operation not permitted
Feb  2 04:14:26 np0005604790 irqbalance[788]: IRQ 27 affinity is now unmanaged
Feb  2 04:14:38 np0005604790 systemd-logind[793]: New session 6 of user zuul.
Feb  2 04:14:38 np0005604790 systemd[1]: Started Session 6 of User zuul.
Feb  2 04:14:38 np0005604790 python3[22521]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBAK6tX32HcxwxspxXPo2b5qp7NanSpxzQsxoSXNQ1fyRzMKWHr/dNDElPeQbQ0mmJ7TyKZaqVEp5TJcSLpUuKw= zuul@np0005604789.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  2 04:14:39 np0005604790 python3[22735]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBAK6tX32HcxwxspxXPo2b5qp7NanSpxzQsxoSXNQ1fyRzMKWHr/dNDElPeQbQ0mmJ7TyKZaqVEp5TJcSLpUuKw= zuul@np0005604789.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  2 04:14:39 np0005604790 python3[23234]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005604790.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Feb  2 04:14:40 np0005604790 python3[23475]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBAK6tX32HcxwxspxXPo2b5qp7NanSpxzQsxoSXNQ1fyRzMKWHr/dNDElPeQbQ0mmJ7TyKZaqVEp5TJcSLpUuKw= zuul@np0005604789.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb  2 04:14:40 np0005604790 python3[23767]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  2 04:14:41 np0005604790 python3[23989]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1770023680.4690452-150-67442140051465/source _original_basename=tmpehsx_gub follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:14:42 np0005604790 python3[24354]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Feb  2 04:14:42 np0005604790 systemd[1]: Starting Hostname Service...
Feb  2 04:14:42 np0005604790 systemd[1]: Started Hostname Service.
Feb  2 04:14:42 np0005604790 systemd-hostnamed[24470]: Changed pretty hostname to 'compute-0'
Feb  2 04:14:42 np0005604790 systemd-hostnamed[24470]: Hostname set to <compute-0> (static)
Feb  2 04:14:42 np0005604790 NetworkManager[7203]: <info>  [1770023682.1777] hostname: static hostname changed from "np0005604790.novalocal" to "compute-0"
Feb  2 04:14:42 np0005604790 systemd[1]: Starting Network Manager Script Dispatcher Service...
Feb  2 04:14:42 np0005604790 systemd[1]: Started Network Manager Script Dispatcher Service.
Feb  2 04:14:42 np0005604790 systemd[1]: session-6.scope: Deactivated successfully.
Feb  2 04:14:42 np0005604790 systemd-logind[793]: Session 6 logged out. Waiting for processes to exit.
Feb  2 04:14:42 np0005604790 systemd[1]: session-6.scope: Consumed 2.135s CPU time.
Feb  2 04:14:42 np0005604790 systemd-logind[793]: Removed session 6.
Feb  2 04:14:52 np0005604790 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Feb  2 04:14:57 np0005604790 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb  2 04:14:57 np0005604790 systemd[1]: Finished man-db-cache-update.service.
Feb  2 04:14:57 np0005604790 systemd[1]: man-db-cache-update.service: Consumed 52.773s CPU time.
Feb  2 04:14:57 np0005604790 systemd[1]: run-r2f3c0d7373264f60b805ddee4523cad4.service: Deactivated successfully.
Feb  2 04:15:12 np0005604790 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Feb  2 04:15:51 np0005604790 systemd[1]: Starting Cleanup of Temporary Directories...
Feb  2 04:15:51 np0005604790 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Feb  2 04:15:51 np0005604790 systemd[1]: Finished Cleanup of Temporary Directories.
Feb  2 04:15:51 np0005604790 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Feb  2 04:18:07 np0005604790 systemd-logind[793]: New session 7 of user zuul.
Feb  2 04:18:07 np0005604790 systemd[1]: Started Session 7 of User zuul.
Feb  2 04:18:07 np0005604790 python3[30072]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 04:18:09 np0005604790 python3[30188]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  2 04:18:09 np0005604790 python3[30261]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1770023889.0756674-33994-15945489041638/source mode=0755 _original_basename=delorean.repo follow=False checksum=cc4ab4695da8ec58c451521a3dd2f41014af145d backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:18:10 np0005604790 python3[30287]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  2 04:18:10 np0005604790 python3[30360]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1770023889.0756674-33994-15945489041638/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=4ebc56dead962b5d40b8d420dad43b948b84d3fc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:18:10 np0005604790 python3[30386]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  2 04:18:11 np0005604790 python3[30460]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1770023889.0756674-33994-15945489041638/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:18:11 np0005604790 python3[30486]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  2 04:18:12 np0005604790 python3[30559]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1770023889.0756674-33994-15945489041638/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:18:12 np0005604790 python3[30585]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  2 04:18:12 np0005604790 python3[30658]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1770023889.0756674-33994-15945489041638/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:18:13 np0005604790 python3[30684]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  2 04:18:13 np0005604790 python3[30757]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1770023889.0756674-33994-15945489041638/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:18:13 np0005604790 python3[30783]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  2 04:18:14 np0005604790 python3[30856]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1770023889.0756674-33994-15945489041638/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=362a603578148d54e8cd25942b88d7f471cc677a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:18:24 np0005604790 python3[30914]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:23:24 np0005604790 systemd[1]: session-7.scope: Deactivated successfully.
Feb  2 04:23:24 np0005604790 systemd[1]: session-7.scope: Consumed 5.114s CPU time.
Feb  2 04:23:24 np0005604790 systemd-logind[793]: Session 7 logged out. Waiting for processes to exit.
Feb  2 04:23:24 np0005604790 systemd-logind[793]: Removed session 7.
Feb  2 04:29:51 np0005604790 systemd-logind[793]: New session 8 of user zuul.
Feb  2 04:29:51 np0005604790 systemd[1]: Started Session 8 of User zuul.
Feb  2 04:29:52 np0005604790 python3.9[31075]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 04:29:53 np0005604790 python3.9[31256]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:30:00 np0005604790 systemd[1]: session-8.scope: Deactivated successfully.
Feb  2 04:30:00 np0005604790 systemd[1]: session-8.scope: Consumed 7.445s CPU time.
Feb  2 04:30:00 np0005604790 systemd-logind[793]: Session 8 logged out. Waiting for processes to exit.
Feb  2 04:30:00 np0005604790 systemd-logind[793]: Removed session 8.
Feb  2 04:30:16 np0005604790 systemd-logind[793]: New session 9 of user zuul.
Feb  2 04:30:16 np0005604790 systemd[1]: Started Session 9 of User zuul.
Feb  2 04:30:17 np0005604790 python3.9[31466]: ansible-ansible.legacy.ping Invoked with data=pong
Feb  2 04:30:18 np0005604790 python3.9[31640]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 04:30:19 np0005604790 python3.9[31792]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:30:20 np0005604790 python3.9[31945]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 04:30:20 np0005604790 python3.9[32097]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:30:21 np0005604790 python3.9[32249]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:30:22 np0005604790 python3.9[32372]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1770024621.0204482-172-28244217905674/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:30:22 np0005604790 python3.9[32524]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 04:30:23 np0005604790 python3.9[32680]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 04:30:24 np0005604790 python3.9[32832]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 04:30:25 np0005604790 python3.9[32982]: ansible-ansible.builtin.service_facts Invoked
Feb  2 04:30:29 np0005604790 python3.9[33236]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:30:30 np0005604790 python3.9[33386]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 04:30:31 np0005604790 python3.9[33540]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 04:30:33 np0005604790 python3.9[33698]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb  2 04:30:33 np0005604790 python3.9[33782]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  2 04:31:14 np0005604790 systemd[1]: Reloading.
Feb  2 04:31:14 np0005604790 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 04:31:14 np0005604790 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Feb  2 04:31:15 np0005604790 systemd[1]: Reloading.
Feb  2 04:31:15 np0005604790 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 04:31:15 np0005604790 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Feb  2 04:31:15 np0005604790 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Feb  2 04:31:15 np0005604790 systemd[1]: Reloading.
Feb  2 04:31:15 np0005604790 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 04:31:15 np0005604790 systemd[1]: Listening on LVM2 poll daemon socket.
Feb  2 04:31:15 np0005604790 dbus-broker-launch[772]: Noticed file-system modification, trigger reload.
Feb  2 04:31:15 np0005604790 dbus-broker-launch[772]: Noticed file-system modification, trigger reload.
Feb  2 04:31:15 np0005604790 dbus-broker-launch[772]: Noticed file-system modification, trigger reload.
Feb  2 04:32:10 np0005604790 kernel: SELinux:  Converting 2726 SID table entries...
Feb  2 04:32:10 np0005604790 kernel: SELinux:  policy capability network_peer_controls=1
Feb  2 04:32:10 np0005604790 kernel: SELinux:  policy capability open_perms=1
Feb  2 04:32:10 np0005604790 kernel: SELinux:  policy capability extended_socket_class=1
Feb  2 04:32:10 np0005604790 kernel: SELinux:  policy capability always_check_network=0
Feb  2 04:32:10 np0005604790 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb  2 04:32:10 np0005604790 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb  2 04:32:10 np0005604790 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb  2 04:32:10 np0005604790 dbus-broker-launch[780]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Feb  2 04:32:10 np0005604790 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb  2 04:32:10 np0005604790 systemd[1]: Starting man-db-cache-update.service...
Feb  2 04:32:10 np0005604790 systemd[1]: Reloading.
Feb  2 04:32:10 np0005604790 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 04:32:10 np0005604790 systemd[1]: Starting dnf makecache...
Feb  2 04:32:10 np0005604790 systemd[1]: Queuing reload/restart jobs for marked units…
Feb  2 04:32:11 np0005604790 dnf[34425]: Failed determining last makecache time.
Feb  2 04:32:11 np0005604790 dnf[34425]: delorean-openstack-barbican-42b4c41831408a8e323 110 kB/s | 3.0 kB     00:00
Feb  2 04:32:11 np0005604790 dnf[34425]: delorean-python-glean-642fffe0203a8ffcc2443db52 126 kB/s | 3.0 kB     00:00
Feb  2 04:32:11 np0005604790 dnf[34425]: delorean-openstack-cinder-1c00d6490d88e436f26ef 126 kB/s | 3.0 kB     00:00
Feb  2 04:32:11 np0005604790 dnf[34425]: delorean-python-stevedore-c4acc5639fd2329372142 125 kB/s | 3.0 kB     00:00
Feb  2 04:32:11 np0005604790 dnf[34425]: delorean-python-cloudkitty-tests-tempest-783703 129 kB/s | 3.0 kB     00:00
Feb  2 04:32:11 np0005604790 dnf[34425]: delorean-diskimage-builder-61b717cc45660834fe9a 115 kB/s | 3.0 kB     00:00
Feb  2 04:32:11 np0005604790 dnf[34425]: delorean-openstack-nova-eaa65f0b85123a4ee343246  87 kB/s | 3.0 kB     00:00
Feb  2 04:32:11 np0005604790 dnf[34425]: delorean-python-designate-tests-tempest-347fdbc 118 kB/s | 3.0 kB     00:00
Feb  2 04:32:11 np0005604790 dnf[34425]: delorean-openstack-glance-1fd12c29b339f30fe823e 134 kB/s | 3.0 kB     00:00
Feb  2 04:32:11 np0005604790 dnf[34425]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 121 kB/s | 3.0 kB     00:00
Feb  2 04:32:11 np0005604790 dnf[34425]: delorean-openstack-manila-d783d10e75495b73866db 129 kB/s | 3.0 kB     00:00
Feb  2 04:32:11 np0005604790 dnf[34425]: delorean-openstack-neutron-95cadbd379667c8520c8 123 kB/s | 3.0 kB     00:00
Feb  2 04:32:11 np0005604790 dnf[34425]: delorean-openstack-octavia-5975097dd4b021385178 130 kB/s | 3.0 kB     00:00
Feb  2 04:32:11 np0005604790 dnf[34425]: delorean-openstack-watcher-c014f81a8647287f6dcc 141 kB/s | 3.0 kB     00:00
Feb  2 04:32:11 np0005604790 dnf[34425]: delorean-python-tcib-78032d201b02cee27e8e644c61 140 kB/s | 3.0 kB     00:00
Feb  2 04:32:11 np0005604790 dnf[34425]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 130 kB/s | 3.0 kB     00:00
Feb  2 04:32:11 np0005604790 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb  2 04:32:11 np0005604790 systemd[1]: Finished man-db-cache-update.service.
Feb  2 04:32:11 np0005604790 systemd[1]: run-r3c58daf786dc490dbbc71bb45bc9091f.service: Deactivated successfully.
Feb  2 04:32:11 np0005604790 dnf[34425]: delorean-openstack-swift-dc98a8463506ac520c469a 139 kB/s | 3.0 kB     00:00
Feb  2 04:32:11 np0005604790 dnf[34425]: delorean-python-tempestconf-8515371b7cceebd4282 145 kB/s | 3.0 kB     00:00
Feb  2 04:32:11 np0005604790 dnf[34425]: delorean-openstack-heat-ui-013accbfd179753bc3f0 136 kB/s | 3.0 kB     00:00
Feb  2 04:32:11 np0005604790 dnf[34425]: CentOS Stream 9 - BaseOS                         28 kB/s | 6.7 kB     00:00
Feb  2 04:32:11 np0005604790 python3.9[35329]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:32:11 np0005604790 dnf[34425]: CentOS Stream 9 - AppStream                      61 kB/s | 6.8 kB     00:00
Feb  2 04:32:12 np0005604790 dnf[34425]: CentOS Stream 9 - CRB                            58 kB/s | 6.6 kB     00:00
Feb  2 04:32:12 np0005604790 dnf[34425]: CentOS Stream 9 - Extras packages                76 kB/s | 7.3 kB     00:00
Feb  2 04:32:12 np0005604790 dnf[34425]: dlrn-antelope-testing                           105 kB/s | 3.0 kB     00:00
Feb  2 04:32:12 np0005604790 dnf[34425]: dlrn-antelope-build-deps                        103 kB/s | 3.0 kB     00:00
Feb  2 04:32:12 np0005604790 dnf[34425]: centos9-rabbitmq                                 93 kB/s | 3.0 kB     00:00
Feb  2 04:32:12 np0005604790 dnf[34425]: centos9-storage                                  13 kB/s | 3.0 kB     00:00
Feb  2 04:32:12 np0005604790 dnf[34425]: centos9-opstools                                 92 kB/s | 3.0 kB     00:00
Feb  2 04:32:12 np0005604790 dnf[34425]: NFV SIG OpenvSwitch                              91 kB/s | 3.0 kB     00:00
Feb  2 04:32:12 np0005604790 dnf[34425]: repo-setup-centos-appstream                     113 kB/s | 4.4 kB     00:00
Feb  2 04:32:13 np0005604790 dnf[34425]: repo-setup-centos-baseos                        184 kB/s | 3.9 kB     00:00
Feb  2 04:32:13 np0005604790 dnf[34425]: repo-setup-centos-highavailability              185 kB/s | 3.9 kB     00:00
Feb  2 04:32:13 np0005604790 dnf[34425]: repo-setup-centos-powertools                    175 kB/s | 4.3 kB     00:00
Feb  2 04:32:13 np0005604790 dnf[34425]: Extra Packages for Enterprise Linux 9 - x86_64   97 kB/s |  30 kB     00:00
Feb  2 04:32:13 np0005604790 dnf[34425]: Metadata cache created.
Feb  2 04:32:14 np0005604790 systemd[1]: dnf-makecache.service: Deactivated successfully.
Feb  2 04:32:14 np0005604790 systemd[1]: Finished dnf makecache.
Feb  2 04:32:14 np0005604790 systemd[1]: dnf-makecache.service: Consumed 1.872s CPU time.
Feb  2 04:32:14 np0005604790 python3.9[35632]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Feb  2 04:32:15 np0005604790 python3.9[35785]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Feb  2 04:32:17 np0005604790 python3.9[35939]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:32:18 np0005604790 python3.9[36091]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Feb  2 04:32:19 np0005604790 python3.9[36243]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 04:32:20 np0005604790 python3.9[36395]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:32:25 np0005604790 python3.9[36518]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770024740.1728334-661-209050162993815/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=01ba6f1c4701862bb94c27ffc13223400c80de38 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:32:26 np0005604790 python3.9[36671]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 04:32:27 np0005604790 python3.9[36823]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:32:27 np0005604790 python3.9[36976]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:32:29 np0005604790 python3.9[37128]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Feb  2 04:32:29 np0005604790 python3.9[37281]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Feb  2 04:32:29 np0005604790 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb  2 04:32:30 np0005604790 python3.9[37440]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Feb  2 04:32:31 np0005604790 python3.9[37600]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Feb  2 04:32:32 np0005604790 python3.9[37753]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Feb  2 04:32:32 np0005604790 python3.9[37911]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Feb  2 04:32:33 np0005604790 python3.9[38063]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  2 04:32:36 np0005604790 python3.9[38216]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 04:32:36 np0005604790 python3.9[38368]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:32:37 np0005604790 python3.9[38491]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770024756.2284799-1018-30127866893043/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb  2 04:32:38 np0005604790 python3.9[38643]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb  2 04:32:38 np0005604790 systemd[1]: Starting Load Kernel Modules...
Feb  2 04:32:38 np0005604790 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb  2 04:32:38 np0005604790 kernel: Bridge firewalling registered
Feb  2 04:32:38 np0005604790 systemd-modules-load[38647]: Inserted module 'br_netfilter'
Feb  2 04:32:38 np0005604790 systemd[1]: Finished Load Kernel Modules.
Feb  2 04:32:39 np0005604790 python3.9[38803]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:32:39 np0005604790 python3.9[38926]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770024758.6460218-1087-185870238179346/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb  2 04:32:40 np0005604790 python3.9[39078]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  2 04:32:43 np0005604790 dbus-broker-launch[772]: Noticed file-system modification, trigger reload.
Feb  2 04:32:43 np0005604790 dbus-broker-launch[772]: Noticed file-system modification, trigger reload.
Feb  2 04:32:44 np0005604790 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb  2 04:32:44 np0005604790 systemd[1]: Starting man-db-cache-update.service...
Feb  2 04:32:44 np0005604790 systemd[1]: Reloading.
Feb  2 04:32:44 np0005604790 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 04:32:44 np0005604790 systemd[1]: Queuing reload/restart jobs for marked units…
Feb  2 04:32:45 np0005604790 python3.9[40577]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 04:32:46 np0005604790 python3.9[42010]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Feb  2 04:32:47 np0005604790 python3.9[42961]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 04:32:47 np0005604790 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb  2 04:32:47 np0005604790 systemd[1]: Finished man-db-cache-update.service.
Feb  2 04:32:47 np0005604790 systemd[1]: man-db-cache-update.service: Consumed 3.541s CPU time.
Feb  2 04:32:47 np0005604790 systemd[1]: run-r6b04efdf1bdb45aa8a0745eba3fd9fa2.service: Deactivated successfully.
Feb  2 04:32:47 np0005604790 python3.9[43298]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:32:47 np0005604790 systemd[1]: Starting Dynamic System Tuning Daemon...
Feb  2 04:32:48 np0005604790 systemd[1]: Starting Authorization Manager...
Feb  2 04:32:48 np0005604790 systemd[1]: Started Dynamic System Tuning Daemon.
Feb  2 04:32:48 np0005604790 polkitd[43515]: Started polkitd version 0.117
Feb  2 04:32:48 np0005604790 systemd[1]: Started Authorization Manager.
Feb  2 04:32:49 np0005604790 python3.9[43685]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 04:32:49 np0005604790 systemd[1]: Stopping Dynamic System Tuning Daemon...
Feb  2 04:32:49 np0005604790 systemd[1]: tuned.service: Deactivated successfully.
Feb  2 04:32:49 np0005604790 systemd[1]: Stopped Dynamic System Tuning Daemon.
Feb  2 04:32:49 np0005604790 systemd[1]: Starting Dynamic System Tuning Daemon...
Feb  2 04:32:49 np0005604790 systemd[1]: Started Dynamic System Tuning Daemon.
Feb  2 04:32:51 np0005604790 python3.9[43847]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Feb  2 04:32:55 np0005604790 python3.9[43999]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 04:32:55 np0005604790 systemd[1]: Reloading.
Feb  2 04:32:55 np0005604790 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 04:32:56 np0005604790 python3.9[44189]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 04:32:56 np0005604790 systemd[1]: Reloading.
Feb  2 04:32:56 np0005604790 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 04:32:57 np0005604790 python3.9[44378]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:32:57 np0005604790 python3.9[44531]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:32:57 np0005604790 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Feb  2 04:32:58 np0005604790 python3.9[44684]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:33:00 np0005604790 python3.9[44846]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:33:01 np0005604790 python3.9[44999]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb  2 04:33:01 np0005604790 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb  2 04:33:01 np0005604790 systemd[1]: Stopped Apply Kernel Variables.
Feb  2 04:33:01 np0005604790 systemd[1]: Stopping Apply Kernel Variables...
Feb  2 04:33:01 np0005604790 systemd[1]: Starting Apply Kernel Variables...
Feb  2 04:33:01 np0005604790 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb  2 04:33:01 np0005604790 systemd[1]: Finished Apply Kernel Variables.
Feb  2 04:33:02 np0005604790 systemd[1]: session-9.scope: Deactivated successfully.
Feb  2 04:33:02 np0005604790 systemd[1]: session-9.scope: Consumed 1min 59.855s CPU time.
Feb  2 04:33:02 np0005604790 systemd-logind[793]: Session 9 logged out. Waiting for processes to exit.
Feb  2 04:33:02 np0005604790 systemd-logind[793]: Removed session 9.
Feb  2 04:33:07 np0005604790 systemd-logind[793]: New session 10 of user zuul.
Feb  2 04:33:07 np0005604790 systemd[1]: Started Session 10 of User zuul.
Feb  2 04:33:08 np0005604790 python3.9[45183]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 04:33:10 np0005604790 python3.9[45339]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Feb  2 04:33:11 np0005604790 python3.9[45492]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Feb  2 04:33:12 np0005604790 python3.9[45650]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Feb  2 04:33:13 np0005604790 python3.9[45810]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb  2 04:33:14 np0005604790 python3.9[45894]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Feb  2 04:33:16 np0005604790 python3.9[46057]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  2 04:33:27 np0005604790 kernel: SELinux:  Converting 2739 SID table entries...
Feb  2 04:33:27 np0005604790 kernel: SELinux:  policy capability network_peer_controls=1
Feb  2 04:33:27 np0005604790 kernel: SELinux:  policy capability open_perms=1
Feb  2 04:33:27 np0005604790 kernel: SELinux:  policy capability extended_socket_class=1
Feb  2 04:33:27 np0005604790 kernel: SELinux:  policy capability always_check_network=0
Feb  2 04:33:27 np0005604790 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb  2 04:33:27 np0005604790 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb  2 04:33:27 np0005604790 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb  2 04:33:27 np0005604790 dbus-broker-launch[780]: avc:  op=load_policy lsm=selinux seqno=7 res=1
Feb  2 04:33:27 np0005604790 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Feb  2 04:33:28 np0005604790 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb  2 04:33:28 np0005604790 systemd[1]: Starting man-db-cache-update.service...
Feb  2 04:33:28 np0005604790 systemd[1]: Reloading.
Feb  2 04:33:28 np0005604790 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 04:33:28 np0005604790 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 04:33:29 np0005604790 systemd[1]: Queuing reload/restart jobs for marked units…
Feb  2 04:33:29 np0005604790 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb  2 04:33:29 np0005604790 systemd[1]: Finished man-db-cache-update.service.
Feb  2 04:33:29 np0005604790 systemd[1]: run-rfb5c2658032444ee92d5fe2b7a3afbf1.service: Deactivated successfully.
Feb  2 04:33:30 np0005604790 python3.9[47156]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb  2 04:33:30 np0005604790 systemd[1]: Reloading.
Feb  2 04:33:30 np0005604790 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 04:33:30 np0005604790 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 04:33:30 np0005604790 systemd[1]: Starting Open vSwitch Database Unit...
Feb  2 04:33:30 np0005604790 chown[47197]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Feb  2 04:33:30 np0005604790 ovs-ctl[47202]: /etc/openvswitch/conf.db does not exist ... (warning).
Feb  2 04:33:30 np0005604790 ovs-ctl[47202]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Feb  2 04:33:30 np0005604790 ovs-ctl[47202]: Starting ovsdb-server [  OK  ]
Feb  2 04:33:30 np0005604790 ovs-vsctl[47251]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Feb  2 04:33:31 np0005604790 ovs-vsctl[47271]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"031ca08d-19ea-44b4-b1bd-33ab088eb6a6\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Feb  2 04:33:31 np0005604790 ovs-ctl[47202]: Configuring Open vSwitch system IDs [  OK  ]
Feb  2 04:33:31 np0005604790 ovs-ctl[47202]: Enabling remote OVSDB managers [  OK  ]
Feb  2 04:33:31 np0005604790 ovs-vsctl[47277]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Feb  2 04:33:31 np0005604790 systemd[1]: Started Open vSwitch Database Unit.
Feb  2 04:33:31 np0005604790 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Feb  2 04:33:31 np0005604790 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Feb  2 04:33:31 np0005604790 systemd[1]: Starting Open vSwitch Forwarding Unit...
Feb  2 04:33:31 np0005604790 kernel: openvswitch: Open vSwitch switching datapath
Feb  2 04:33:31 np0005604790 ovs-ctl[47321]: Inserting openvswitch module [  OK  ]
Feb  2 04:33:31 np0005604790 ovs-ctl[47290]: Starting ovs-vswitchd [  OK  ]
Feb  2 04:33:31 np0005604790 ovs-vsctl[47338]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Feb  2 04:33:31 np0005604790 ovs-ctl[47290]: Enabling remote OVSDB managers [  OK  ]
Feb  2 04:33:31 np0005604790 systemd[1]: Started Open vSwitch Forwarding Unit.
Feb  2 04:33:31 np0005604790 systemd[1]: Starting Open vSwitch...
Feb  2 04:33:31 np0005604790 systemd[1]: Finished Open vSwitch.
Feb  2 04:33:32 np0005604790 python3.9[47490]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 04:33:33 np0005604790 python3.9[47642]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Feb  2 04:33:34 np0005604790 kernel: SELinux:  Converting 2753 SID table entries...
Feb  2 04:33:34 np0005604790 kernel: SELinux:  policy capability network_peer_controls=1
Feb  2 04:33:34 np0005604790 kernel: SELinux:  policy capability open_perms=1
Feb  2 04:33:34 np0005604790 kernel: SELinux:  policy capability extended_socket_class=1
Feb  2 04:33:34 np0005604790 kernel: SELinux:  policy capability always_check_network=0
Feb  2 04:33:34 np0005604790 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb  2 04:33:34 np0005604790 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb  2 04:33:34 np0005604790 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb  2 04:33:35 np0005604790 python3.9[47797]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 04:33:36 np0005604790 dbus-broker-launch[780]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Feb  2 04:33:36 np0005604790 python3.9[47955]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  2 04:33:38 np0005604790 python3.9[48108]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
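[editor's note] The dnf call installs the EDPM host dependencies, and the follow-up rpm -V verifies the same package set. As a playbook task built from the logged arguments (name illustrative):

    - name: Install EDPM base packages
      ansible.builtin.dnf:
        name:
          - driverctl
          - lvm2
          - crudini
          - jq
          - nftables
          - NetworkManager
          - openstack-selinux
          - python3-libselinux
          - python3-pyyaml
          - rsync
          - tmpwatch
          - sysstat
          - iproute-tc
          - ksmtuned
          - systemd-container
          - crypto-policies-scripts
          - grubby
          - sos
        state: present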
Feb  2 04:33:39 np0005604790 python3.9[48395]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Feb  2 04:33:40 np0005604790 python3.9[48545]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 04:33:41 np0005604790 python3.9[48699]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  2 04:33:42 np0005604790 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb  2 04:33:42 np0005604790 systemd[1]: Starting man-db-cache-update.service...
Feb  2 04:33:42 np0005604790 systemd[1]: Reloading.
Feb  2 04:33:43 np0005604790 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 04:33:43 np0005604790 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 04:33:43 np0005604790 systemd[1]: Queuing reload/restart jobs for marked units…
Feb  2 04:33:43 np0005604790 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb  2 04:33:43 np0005604790 systemd[1]: Finished man-db-cache-update.service.
Feb  2 04:33:43 np0005604790 systemd[1]: run-rf2d9e63e59774f628dc3c2e915dfc7da.service: Deactivated successfully.
Feb  2 04:33:44 np0005604790 python3.9[49015]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
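[editor's note] The restart at 04:33:44 corresponds to a systemd module task like the one below (parameters from the logged call; name illustrative). In the lines that follow, the old NetworkManager (PID 7203) cancels its eth0 DHCP transaction and exits, and the new instance (PID 49024) loads the freshly installed OVS device plugin and re-assumes the existing connections instead of tearing them down.

    - name: Restart NetworkManager to load the NetworkManager-ovs plugin
      ansible.builtin.systemd:
        name: NetworkManager
        state: restarted
        scope: system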
Feb  2 04:33:44 np0005604790 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Feb  2 04:33:44 np0005604790 systemd[1]: Stopped Network Manager Wait Online.
Feb  2 04:33:44 np0005604790 systemd[1]: Stopping Network Manager Wait Online...
Feb  2 04:33:44 np0005604790 systemd[1]: Stopping Network Manager...
Feb  2 04:33:44 np0005604790 NetworkManager[7203]: <info>  [1770024824.5089] caught SIGTERM, shutting down normally.
Feb  2 04:33:44 np0005604790 NetworkManager[7203]: <info>  [1770024824.5102] dhcp4 (eth0): canceled DHCP transaction
Feb  2 04:33:44 np0005604790 NetworkManager[7203]: <info>  [1770024824.5103] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Feb  2 04:33:44 np0005604790 NetworkManager[7203]: <info>  [1770024824.5103] dhcp4 (eth0): state changed no lease
Feb  2 04:33:44 np0005604790 NetworkManager[7203]: <info>  [1770024824.5106] manager: NetworkManager state is now CONNECTED_SITE
Feb  2 04:33:44 np0005604790 NetworkManager[7203]: <info>  [1770024824.5158] exiting (success)
Feb  2 04:33:44 np0005604790 systemd[1]: Starting Network Manager Script Dispatcher Service...
Feb  2 04:33:44 np0005604790 systemd[1]: NetworkManager.service: Deactivated successfully.
Feb  2 04:33:44 np0005604790 systemd[1]: Stopped Network Manager.
Feb  2 04:33:44 np0005604790 systemd[1]: NetworkManager.service: Consumed 15.473s CPU time, 4.1M memory peak, read 0B from disk, written 24.5K to disk.
Feb  2 04:33:44 np0005604790 systemd[1]: Started Network Manager Script Dispatcher Service.
Feb  2 04:33:44 np0005604790 systemd[1]: Starting Network Manager...
Feb  2 04:33:44 np0005604790 NetworkManager[49024]: <info>  [1770024824.5612] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:1dacd1c7-39aa-475f-946e-ec901b3ee402)
Feb  2 04:33:44 np0005604790 NetworkManager[49024]: <info>  [1770024824.5613] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Feb  2 04:33:44 np0005604790 NetworkManager[49024]: <info>  [1770024824.5650] manager[0x55d30c127000]: monitoring kernel firmware directory '/lib/firmware'.
Feb  2 04:33:44 np0005604790 systemd[1]: Starting Hostname Service...
Feb  2 04:33:44 np0005604790 systemd[1]: Started Hostname Service.
Feb  2 04:33:44 np0005604790 NetworkManager[49024]: <info>  [1770024824.6382] hostname: hostname: using hostnamed
Feb  2 04:33:44 np0005604790 NetworkManager[49024]: <info>  [1770024824.6382] hostname: static hostname changed from (none) to "compute-0"
Feb  2 04:33:44 np0005604790 NetworkManager[49024]: <info>  [1770024824.6387] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Feb  2 04:33:44 np0005604790 NetworkManager[49024]: <info>  [1770024824.6391] manager[0x55d30c127000]: rfkill: Wi-Fi hardware radio set enabled
Feb  2 04:33:44 np0005604790 NetworkManager[49024]: <info>  [1770024824.6392] manager[0x55d30c127000]: rfkill: WWAN hardware radio set enabled
Feb  2 04:33:44 np0005604790 NetworkManager[49024]: <info>  [1770024824.6408] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-ovs.so)
Feb  2 04:33:44 np0005604790 NetworkManager[49024]: <info>  [1770024824.6415] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Feb  2 04:33:44 np0005604790 NetworkManager[49024]: <info>  [1770024824.6416] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Feb  2 04:33:44 np0005604790 NetworkManager[49024]: <info>  [1770024824.6416] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Feb  2 04:33:44 np0005604790 NetworkManager[49024]: <info>  [1770024824.6417] manager: Networking is enabled by state file
Feb  2 04:33:44 np0005604790 NetworkManager[49024]: <info>  [1770024824.6419] settings: Loaded settings plugin: keyfile (internal)
Feb  2 04:33:44 np0005604790 NetworkManager[49024]: <info>  [1770024824.6422] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Feb  2 04:33:44 np0005604790 NetworkManager[49024]: <info>  [1770024824.6450] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Feb  2 04:33:44 np0005604790 NetworkManager[49024]: <info>  [1770024824.6457] dhcp: init: Using DHCP client 'internal'
Feb  2 04:33:44 np0005604790 NetworkManager[49024]: <info>  [1770024824.6459] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Feb  2 04:33:44 np0005604790 NetworkManager[49024]: <info>  [1770024824.6465] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb  2 04:33:44 np0005604790 NetworkManager[49024]: <info>  [1770024824.6471] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Feb  2 04:33:44 np0005604790 NetworkManager[49024]: <info>  [1770024824.6477] device (lo): Activation: starting connection 'lo' (3fd945dd-23ac-4177-bd37-9d87c9c02d55)
Feb  2 04:33:44 np0005604790 NetworkManager[49024]: <info>  [1770024824.6483] device (eth0): carrier: link connected
Feb  2 04:33:44 np0005604790 NetworkManager[49024]: <info>  [1770024824.6486] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Feb  2 04:33:44 np0005604790 NetworkManager[49024]: <info>  [1770024824.6491] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Feb  2 04:33:44 np0005604790 NetworkManager[49024]: <info>  [1770024824.6492] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Feb  2 04:33:44 np0005604790 NetworkManager[49024]: <info>  [1770024824.6500] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Feb  2 04:33:44 np0005604790 NetworkManager[49024]: <info>  [1770024824.6506] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Feb  2 04:33:44 np0005604790 NetworkManager[49024]: <info>  [1770024824.6510] device (eth1): carrier: link connected
Feb  2 04:33:44 np0005604790 NetworkManager[49024]: <info>  [1770024824.6513] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Feb  2 04:33:44 np0005604790 NetworkManager[49024]: <info>  [1770024824.6518] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (75fa72a4-896a-5876-a9b3-438a144045af) (indicated)
Feb  2 04:33:44 np0005604790 NetworkManager[49024]: <info>  [1770024824.6519] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Feb  2 04:33:44 np0005604790 NetworkManager[49024]: <info>  [1770024824.6523] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Feb  2 04:33:44 np0005604790 NetworkManager[49024]: <info>  [1770024824.6529] device (eth1): Activation: starting connection 'ci-private-network' (75fa72a4-896a-5876-a9b3-438a144045af)
Feb  2 04:33:44 np0005604790 systemd[1]: Started Network Manager.
Feb  2 04:33:44 np0005604790 NetworkManager[49024]: <info>  [1770024824.6534] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Feb  2 04:33:44 np0005604790 NetworkManager[49024]: <info>  [1770024824.6543] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Feb  2 04:33:44 np0005604790 NetworkManager[49024]: <info>  [1770024824.6547] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Feb  2 04:33:44 np0005604790 NetworkManager[49024]: <info>  [1770024824.6549] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Feb  2 04:33:44 np0005604790 NetworkManager[49024]: <info>  [1770024824.6552] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Feb  2 04:33:44 np0005604790 NetworkManager[49024]: <info>  [1770024824.6556] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Feb  2 04:33:44 np0005604790 NetworkManager[49024]: <info>  [1770024824.6558] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Feb  2 04:33:44 np0005604790 NetworkManager[49024]: <info>  [1770024824.6569] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Feb  2 04:33:44 np0005604790 NetworkManager[49024]: <info>  [1770024824.6573] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Feb  2 04:33:44 np0005604790 NetworkManager[49024]: <info>  [1770024824.6578] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Feb  2 04:33:44 np0005604790 NetworkManager[49024]: <info>  [1770024824.6581] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Feb  2 04:33:44 np0005604790 NetworkManager[49024]: <info>  [1770024824.6586] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Feb  2 04:33:44 np0005604790 NetworkManager[49024]: <info>  [1770024824.6595] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Feb  2 04:33:44 np0005604790 NetworkManager[49024]: <info>  [1770024824.6612] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Feb  2 04:33:44 np0005604790 NetworkManager[49024]: <info>  [1770024824.6613] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Feb  2 04:33:44 np0005604790 NetworkManager[49024]: <info>  [1770024824.6615] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Feb  2 04:33:44 np0005604790 NetworkManager[49024]: <info>  [1770024824.6620] device (lo): Activation: successful, device activated.
Feb  2 04:33:44 np0005604790 NetworkManager[49024]: <info>  [1770024824.6626] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Feb  2 04:33:44 np0005604790 NetworkManager[49024]: <info>  [1770024824.6630] manager: NetworkManager state is now CONNECTED_LOCAL
Feb  2 04:33:44 np0005604790 NetworkManager[49024]: <info>  [1770024824.6633] device (eth1): Activation: successful, device activated.
Feb  2 04:33:44 np0005604790 NetworkManager[49024]: <info>  [1770024824.6642] dhcp4 (eth0): state changed new lease, address=38.102.83.144
Feb  2 04:33:44 np0005604790 NetworkManager[49024]: <info>  [1770024824.6648] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Feb  2 04:33:44 np0005604790 NetworkManager[49024]: <info>  [1770024824.6710] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Feb  2 04:33:44 np0005604790 systemd[1]: Starting Network Manager Wait Online...
Feb  2 04:33:44 np0005604790 NetworkManager[49024]: <info>  [1770024824.6726] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Feb  2 04:33:44 np0005604790 NetworkManager[49024]: <info>  [1770024824.6727] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Feb  2 04:33:44 np0005604790 NetworkManager[49024]: <info>  [1770024824.6731] manager: NetworkManager state is now CONNECTED_SITE
Feb  2 04:33:44 np0005604790 NetworkManager[49024]: <info>  [1770024824.6735] device (eth0): Activation: successful, device activated.
Feb  2 04:33:44 np0005604790 NetworkManager[49024]: <info>  [1770024824.6739] manager: NetworkManager state is now CONNECTED_GLOBAL
Feb  2 04:33:44 np0005604790 NetworkManager[49024]: <info>  [1770024824.6742] manager: startup complete
Feb  2 04:33:44 np0005604790 systemd[1]: Finished Network Manager Wait Online.
Feb  2 04:33:45 np0005604790 python3.9[49241]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  2 04:33:49 np0005604790 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb  2 04:33:49 np0005604790 systemd[1]: Starting man-db-cache-update.service...
Feb  2 04:33:49 np0005604790 systemd[1]: Reloading.
Feb  2 04:33:49 np0005604790 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 04:33:49 np0005604790 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 04:33:49 np0005604790 systemd[1]: Queuing reload/restart jobs for marked units…
Feb  2 04:33:50 np0005604790 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb  2 04:33:50 np0005604790 systemd[1]: Finished man-db-cache-update.service.
Feb  2 04:33:50 np0005604790 systemd[1]: run-r570b6c39accb4d2ba6845c0652e05db0.service: Deactivated successfully.
Feb  2 04:33:51 np0005604790 python3.9[49703]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 04:33:52 np0005604790 python3.9[49855]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:33:53 np0005604790 python3.9[50009]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:33:53 np0005604790 python3.9[50161]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:33:54 np0005604790 python3.9[50313]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:33:54 np0005604790 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Feb  2 04:33:54 np0005604790 python3.9[50465]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
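[editor's note] The five ini_file calls between 04:33:52 and 04:33:54 pin no-auto-default=* in the [main] section of NetworkManager.conf and strip any dns/rc-manager overrides from both NetworkManager.conf and cloud-init's drop-in. Two representative tasks, reconstructed from the logged parameters (names illustrative):

    - name: Stop NetworkManager from auto-creating default profiles
      community.general.ini_file:
        path: /etc/NetworkManager/NetworkManager.conf
        section: main
        option: no-auto-default
        value: '*'
        no_extra_spaces: true
        backup: true
        mode: '0644'

    - name: Drop cloud-init's dns=none override
      community.general.ini_file:
        path: /etc/NetworkManager/conf.d/99-cloud-init.conf
        section: main
        option: dns
        value: none
        state: absent
        backup: true
        mode: '0644'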
Feb  2 04:33:55 np0005604790 python3.9[50617]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:33:56 np0005604790 python3.9[50740]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1770024835.1651385-642-269914375337892/.source _original_basename=.k12frxse follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
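[editor's note] The stat/copy pair above is ansible.legacy.copy's usual two-step: probe the destination, then ship the file from the controller's transient ~/.ansible/tmp staging area. In playbook form (the controller-side src name is hypothetical; only the destination, mode, and checksum appear in the log):

    - name: Install the dhclient enter hook
      ansible.builtin.copy:
        src: dhclient-enter-hooks   # hypothetical local file name
        dest: /etc/dhcp/dhclient-enter-hooks
        mode: '0755'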
Feb  2 04:33:57 np0005604790 python3.9[50892]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:33:58 np0005604790 python3.9[51044]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Feb  2 04:33:58 np0005604790 python3.9[51196]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:34:00 np0005604790 python3.9[51623]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Feb  2 04:34:02 np0005604790 ansible-async_wrapper.py[51798]: Invoked with j483894105071 300 /home/zuul/.ansible/tmp/ansible-tmp-1770024841.351275-840-153405442940794/AnsiballZ_edpm_os_net_config.py _
Feb  2 04:34:02 np0005604790 ansible-async_wrapper.py[51801]: Starting module and watcher
Feb  2 04:34:02 np0005604790 ansible-async_wrapper.py[51801]: Start watching 51802 (300)
Feb  2 04:34:02 np0005604790 ansible-async_wrapper.py[51802]: Start module (51802)
Feb  2 04:34:02 np0005604790 ansible-async_wrapper.py[51798]: Return async_wrapper task started.
Feb  2 04:34:02 np0005604790 python3.9[51803]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
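[editor's note] The async_wrapper lines show the network change running as an asynchronous job (id j483894105071, 300-second timeout), which lets the play survive its own SSH path being reconfigured mid-task. A sketch of the invoking task, assuming the module lives in an osp.edpm collection (the FQCN and poll interval are assumptions; the module arguments are the logged ones):

    - name: Apply os-net-config network configuration
      osp.edpm.edpm_os_net_config:   # collection namespace assumed
        config_file: /etc/os-net-config/config.yaml
        cleanup: true
        debug: true
        detailed_exit_codes: true
        safe_defaults: false
        use_nmstate: true
      async: 300
      poll: 3   # illustrative poll interval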
Feb  2 04:34:03 np0005604790 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Feb  2 04:34:03 np0005604790 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Feb  2 04:34:03 np0005604790 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Feb  2 04:34:03 np0005604790 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Feb  2 04:34:03 np0005604790 kernel: cfg80211: failed to load regulatory.db
Feb  2 04:34:03 np0005604790 NetworkManager[49024]: <info>  [1770024843.9723] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51804 uid=0 result="success"
Feb  2 04:34:03 np0005604790 NetworkManager[49024]: <info>  [1770024843.9742] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51804 uid=0 result="success"
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0328] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0330] audit: op="connection-add" uuid="58e11d47-1660-482b-9a93-f6a3975be1f1" name="br-ex-br" pid=51804 uid=0 result="success"
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0344] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0346] audit: op="connection-add" uuid="597963f7-88c7-4f1c-93c0-f7764fdbc7ba" name="br-ex-port" pid=51804 uid=0 result="success"
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0357] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0359] audit: op="connection-add" uuid="d6213de7-0d46-4254-ae07-116bc26a07ef" name="eth1-port" pid=51804 uid=0 result="success"
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0369] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0371] audit: op="connection-add" uuid="c99fa69d-832f-4d6a-b3c8-310e5d6e6326" name="vlan20-port" pid=51804 uid=0 result="success"
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0382] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0384] audit: op="connection-add" uuid="fd43cc39-9369-4879-9629-c05f772d8367" name="vlan21-port" pid=51804 uid=0 result="success"
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0395] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0396] audit: op="connection-add" uuid="94256ee1-7862-4505-9526-782080308a0a" name="vlan22-port" pid=51804 uid=0 result="success"
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0407] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0409] audit: op="connection-add" uuid="4ba461f8-4a76-4d59-9d52-b2f8e31941af" name="vlan23-port" pid=51804 uid=0 result="success"
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0427] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="802-3-ethernet.mtu,ipv4.dhcp-timeout,ipv4.dhcp-client-id,connection.autoconnect-priority,connection.timestamp,ipv6.addr-gen-mode,ipv6.dhcp-timeout,ipv6.method" pid=51804 uid=0 result="success"
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0442] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0443] audit: op="connection-add" uuid="d73aa788-3ad4-411f-a0c5-f9bba08b50db" name="br-ex-if" pid=51804 uid=0 result="success"
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0477] audit: op="connection-update" uuid="75fa72a4-896a-5876-a9b3-438a144045af" name="ci-private-network" args="ovs-interface.type,ovs-external-ids.data,ipv4.addresses,ipv4.never-default,ipv4.routing-rules,ipv4.dns,ipv4.method,ipv4.routes,connection.slave-type,connection.controller,connection.master,connection.timestamp,connection.port-type,ipv6.addr-gen-mode,ipv6.addresses,ipv6.routing-rules,ipv6.dns,ipv6.method,ipv6.routes" pid=51804 uid=0 result="success"
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0491] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0493] audit: op="connection-add" uuid="3c2962c6-2b73-4160-8bb5-71b575dcfdf7" name="vlan20-if" pid=51804 uid=0 result="success"
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0507] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0509] audit: op="connection-add" uuid="88e6745b-dd78-4bb0-bd5e-b9f097610067" name="vlan21-if" pid=51804 uid=0 result="success"
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0524] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0526] audit: op="connection-add" uuid="a0e1f2ae-e034-4c20-8086-7003b4fe2192" name="vlan22-if" pid=51804 uid=0 result="success"
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0540] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0542] audit: op="connection-add" uuid="0dceacc8-9ce4-4a5e-8474-6f47a68d58d4" name="vlan23-if" pid=51804 uid=0 result="success"
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0555] audit: op="connection-delete" uuid="7223f535-5f63-3095-bc33-c3417f1eebd4" name="Wired connection 1" pid=51804 uid=0 result="success"
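[editor's note] With use_nmstate enabled, os-net-config drives NetworkManager inside a rollback checkpoint: it adds the br-ex bridge/port/interface profiles, attaches eth1, creates vlan20 through vlan23 as OVS ports and interfaces, updates the System eth0 and ci-private-network profiles, and finally deletes the stale "Wired connection 1". A config.yaml producing this topology would look roughly like the sketch below (the actual file is not shown in the log, so the VLAN IDs are inferred from the device names and addressing/MTU settings are omitted):

    network_config:
      - type: ovs_bridge
        name: br-ex
        use_dhcp: false
        members:
          - type: interface
            name: eth1
            primary: true
          - type: vlan
            vlan_id: 20
          - type: vlan
            vlan_id: 21
          - type: vlan
            vlan_id: 22
          - type: vlan
            vlan_id: 23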
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0566] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <warn>  [1770024844.0569] device (br-ex)[Open vSwitch Bridge]: error setting IPv4 forwarding to '1': Success
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0575] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0579] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (58e11d47-1660-482b-9a93-f6a3975be1f1)
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0579] audit: op="connection-activate" uuid="58e11d47-1660-482b-9a93-f6a3975be1f1" name="br-ex-br" pid=51804 uid=0 result="success"
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0581] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <warn>  [1770024844.0582] device (br-ex)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0587] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0590] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (597963f7-88c7-4f1c-93c0-f7764fdbc7ba)
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0592] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <warn>  [1770024844.0593] device (eth1)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0597] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0600] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (d6213de7-0d46-4254-ae07-116bc26a07ef)
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0603] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <warn>  [1770024844.0605] device (vlan20)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0610] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0615] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (c99fa69d-832f-4d6a-b3c8-310e5d6e6326)
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0618] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <warn>  [1770024844.0619] device (vlan21)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0624] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0629] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (fd43cc39-9369-4879-9629-c05f772d8367)
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0631] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <warn>  [1770024844.0632] device (vlan22)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0638] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0641] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (94256ee1-7862-4505-9526-782080308a0a)
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0647] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <warn>  [1770024844.0648] device (vlan23)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0653] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0657] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (4ba461f8-4a76-4d59-9d52-b2f8e31941af)
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0659] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0661] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0664] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0670] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <warn>  [1770024844.0671] device (br-ex)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0674] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0679] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (d73aa788-3ad4-411f-a0c5-f9bba08b50db)
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0680] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0683] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0687] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0689] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0690] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0701] device (eth1): disconnecting for new activation request.
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0703] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0706] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0708] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0710] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0712] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <warn>  [1770024844.0714] device (vlan20)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0717] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0721] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (3c2962c6-2b73-4160-8bb5-71b575dcfdf7)
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0722] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0726] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0728] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0730] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0734] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <warn>  [1770024844.0736] device (vlan21)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0739] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0743] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (88e6745b-dd78-4bb0-bd5e-b9f097610067)
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0744] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0747] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0749] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0752] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0755] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <warn>  [1770024844.0756] device (vlan22)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0759] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0764] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (a0e1f2ae-e034-4c20-8086-7003b4fe2192)
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0765] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0769] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0771] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0772] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0775] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <warn>  [1770024844.0777] device (vlan23)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0780] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0784] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (0dceacc8-9ce4-4a5e-8474-6f47a68d58d4)
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0786] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0789] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0791] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0792] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0794] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0806] audit: op="device-reapply" interface="eth0" ifindex=2 args="802-3-ethernet.mtu,ipv4.dhcp-timeout,ipv4.dhcp-client-id,connection.autoconnect-priority,ipv6.addr-gen-mode,ipv6.method" pid=51804 uid=0 result="success"
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0808] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0811] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0814] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0820] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0824] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0828] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0831] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0833] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb  2 04:34:04 np0005604790 kernel: ovs-system: entered promiscuous mode
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0837] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0840] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0842] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0843] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0846] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0849] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0852] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb  2 04:34:04 np0005604790 systemd-udevd[51810]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0855] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0858] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0862] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb  2 04:34:04 np0005604790 kernel: Timeout policy base is empty
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0865] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0866] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0871] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0875] dhcp4 (eth0): canceled DHCP transaction
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0875] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0875] dhcp4 (eth0): state changed no lease
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0876] dhcp4 (eth0): activation: beginning transaction (no timeout)
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0884] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0887] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51804 uid=0 result="fail" reason="Device is not activated"
Feb  2 04:34:04 np0005604790 systemd[1]: Starting Network Manager Script Dispatcher Service...
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0913] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0927] device (eth1): disconnecting for new activation request.
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0927] audit: op="connection-activate" uuid="75fa72a4-896a-5876-a9b3-438a144045af" name="ci-private-network" pid=51804 uid=0 result="success"
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0928] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0938] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0947] dhcp4 (eth0): state changed new lease, address=38.102.83.144
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0950] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.0992] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51804 uid=0 result="success"
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.1028] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.1121] device (eth1): Activation: starting connection 'ci-private-network' (75fa72a4-896a-5876-a9b3-438a144045af)
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.1131] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.1134] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Feb  2 04:34:04 np0005604790 systemd[1]: Started Network Manager Script Dispatcher Service.
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.1139] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.1141] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.1142] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.1143] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.1144] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.1145] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.1146] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.1149] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.1154] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.1157] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.1161] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.1164] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.1167] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.1170] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.1174] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.1177] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.1182] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.1185] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.1189] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.1191] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Feb  2 04:34:04 np0005604790 kernel: br-ex: entered promiscuous mode
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.1194] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.1197] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.1202] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.1205] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.1250] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.1252] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.1256] device (eth1): Activation: successful, device activated.
Feb  2 04:34:04 np0005604790 kernel: vlan22: entered promiscuous mode
Feb  2 04:34:04 np0005604790 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Feb  2 04:34:04 np0005604790 systemd-udevd[51808]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.1361] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.1370] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb  2 04:34:04 np0005604790 kernel: vlan23: entered promiscuous mode
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.1414] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.1417] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.1425] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.1441] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Feb  2 04:34:04 np0005604790 kernel: vlan20: entered promiscuous mode
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.1460] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.1482] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.1502] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.1515] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.1518] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.1526] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.1543] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Feb  2 04:34:04 np0005604790 kernel: vlan21: entered promiscuous mode
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.1550] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.1556] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.1562] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.1588] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.1609] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.1618] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.1626] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.1632] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.1645] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.1695] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.1697] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb  2 04:34:04 np0005604790 NetworkManager[49024]: <info>  [1770024844.1702] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
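The burst above is NetworkManager activating the Open vSwitch topology: bridge br-ex, port eth1, and the vlan20-23 port/interface pairs, each walking the ip-config -> ip-check -> secondaries -> activated state machine. A minimal verification sketch, assuming ovs-vsctl and nmcli are on PATH; it only prints the two tools' views of the same devices so they can be compared after activation:

    import subprocess

    # Compare the OVS view (bridge/port layout) with the NetworkManager
    # view (device states) after the activation burst above.
    for cmd in (
        ["ovs-vsctl", "show"],
        ["nmcli", "-f", "DEVICE,TYPE,STATE", "device", "status"],
    ):
        print("$", " ".join(cmd))
        print(subprocess.run(cmd, capture_output=True, text=True, check=True).stdout)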
Feb  2 04:34:05 np0005604790 NetworkManager[49024]: <info>  [1770024845.3029] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51804 uid=0 result="success"
Feb  2 04:34:05 np0005604790 NetworkManager[49024]: <info>  [1770024845.4574] checkpoint[0x55d30c0fd950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Feb  2 04:34:05 np0005604790 NetworkManager[49024]: <info>  [1770024845.4576] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51804 uid=0 result="success"
Feb  2 04:34:05 np0005604790 NetworkManager[49024]: <info>  [1770024845.7475] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51804 uid=0 result="success"
Feb  2 04:34:05 np0005604790 NetworkManager[49024]: <info>  [1770024845.7487] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51804 uid=0 result="success"
Feb  2 04:34:05 np0005604790 NetworkManager[49024]: <info>  [1770024845.9379] audit: op="networking-control" arg="global-dns-configuration" pid=51804 uid=0 result="success"
Feb  2 04:34:05 np0005604790 NetworkManager[49024]: <info>  [1770024845.9403] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Feb  2 04:34:05 np0005604790 NetworkManager[49024]: <info>  [1770024845.9427] audit: op="networking-control" arg="global-dns-configuration" pid=51804 uid=0 result="success"
Feb  2 04:34:05 np0005604790 python3.9[52163]: ansible-ansible.legacy.async_status Invoked with jid=j483894105071.51798 mode=status _async_dir=/root/.ansible_async
Feb  2 04:34:05 np0005604790 NetworkManager[49024]: <info>  [1770024845.9449] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51804 uid=0 result="success"
Feb  2 04:34:06 np0005604790 NetworkManager[49024]: <info>  [1770024846.0887] checkpoint[0x55d30c0fda20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Feb  2 04:34:06 np0005604790 NetworkManager[49024]: <info>  [1770024846.0892] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51804 uid=0 result="success"
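The checkpoint-create / checkpoint-adjust-rollback-timeout / checkpoint-destroy audit entries are NetworkManager's transactional safety net: the caller (pid 51804, the network-configuration run) snapshots device state, keeps extending the rollback deadline while it works, and destroys the checkpoint once the new configuration is confirmed, so a failed change would roll back automatically. A sketch of the same lifecycle over the org.freedesktop.NetworkManager D-Bus API via busctl; the method names are from that API, the 60-second timeout is illustrative:

    import subprocess

    NM = "org.freedesktop.NetworkManager"

    def busctl(*args):
        # Thin wrapper around `busctl call`; raises on D-Bus errors.
        return subprocess.run(
            ["busctl", "call", NM, "/org/freedesktop/NetworkManager", NM, *args],
            capture_output=True, text=True, check=True).stdout

    # Snapshot all devices ("0" = empty device array) with a 60 s rollback window.
    out = busctl("CheckpointCreate", "aouu", "0", "60", "0")
    checkpoint = out.split('"')[1]   # e.g. /org/freedesktop/NetworkManager/Checkpoint/1

    # ... apply configuration; extend the deadline while still busy ...
    busctl("CheckpointAdjustRollbackTimeout", "ou", checkpoint, "60")

    # Success: drop the checkpoint so no rollback happens.
    busctl("CheckpointDestroy", "o", checkpoint)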
Feb  2 04:34:06 np0005604790 ansible-async_wrapper.py[51802]: Module complete (51802)
Feb  2 04:34:07 np0005604790 ansible-async_wrapper.py[51801]: Done in kid B.
Feb  2 04:34:09 np0005604790 python3.9[52267]: ansible-ansible.legacy.async_status Invoked with jid=j483894105071.51798 mode=status _async_dir=/root/.ansible_async
Feb  2 04:34:09 np0005604790 python3.9[52367]: ansible-ansible.legacy.async_status Invoked with jid=j483894105071.51798 mode=cleanup _async_dir=/root/.ansible_async
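j483894105071.51798 is an Ansible async job id: the async_wrapper runs the module in the background and writes a JSON result file of that name under /root/.ansible_async; async_status with mode=status polls it until "finished" is set, and mode=cleanup removes it. A polling sketch under those assumptions (path and job id taken from the log; the partial-write guard is defensive):

    import json, time
    from pathlib import Path

    job = Path("/root/.ansible_async/j483894105071.51798")  # job id from the log above

    # Poll the result file the way async_status does: the wrapper sets
    # "finished": 1 in the JSON once the wrapped module completes.
    while True:
        try:
            result = json.loads(job.read_text())
        except (FileNotFoundError, json.JSONDecodeError):
            result = {}
        if result.get("finished"):
            break
        time.sleep(2)

    job.unlink(missing_ok=True)   # mode=cleanup equivalent
    print(result.get("rc"), result.get("msg", ""))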
Feb  2 04:34:11 np0005604790 python3.9[52519]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:34:12 np0005604790 python3.9[52642]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770024851.0248287-921-275834897799933/.source.returncode _original_basename=.yzn8_vol follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
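The checksum logged for os-net-config.returncode, b6589fc6ab0dc82cf12099d1c2d40ab994e8410c, is the SHA-1 of the single character "0", i.e. os-net-config exited successfully and its return code was persisted. A one-line verification:

    import hashlib

    # SHA-1 of the literal string "0" matches the checksum logged above for
    # /var/lib/edpm-config/os-net-config.returncode.
    print(hashlib.sha1(b"0").hexdigest())
    # -> b6589fc6ab0dc82cf12099d1c2d40ab994e8410c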
Feb  2 04:34:12 np0005604790 python3.9[52795]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:34:13 np0005604790 python3.9[52918]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770024852.4000363-969-118867983255740/.source.cfg _original_basename=.vreq6c3l follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:34:14 np0005604790 python3.9[53070]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb  2 04:34:14 np0005604790 systemd[1]: Reloading Network Manager...
Feb  2 04:34:14 np0005604790 NetworkManager[49024]: <info>  [1770024854.6523] audit: op="reload" arg="0" pid=53074 uid=0 result="success"
Feb  2 04:34:14 np0005604790 NetworkManager[49024]: <info>  [1770024854.6529] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Feb  2 04:34:14 np0005604790 systemd[1]: Reloaded Network Manager.
Feb  2 04:34:14 np0005604790 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Feb  2 04:34:15 np0005604790 systemd[1]: session-10.scope: Deactivated successfully.
Feb  2 04:34:15 np0005604790 systemd[1]: session-10.scope: Consumed 44.409s CPU time.
Feb  2 04:34:15 np0005604790 systemd-logind[793]: Session 10 logged out. Waiting for processes to exit.
Feb  2 04:34:15 np0005604790 systemd-logind[793]: Removed session 10.
Feb  2 04:34:20 np0005604790 systemd-logind[793]: New session 11 of user zuul.
Feb  2 04:34:20 np0005604790 systemd[1]: Started Session 11 of User zuul.
Feb  2 04:34:21 np0005604790 python3.9[53260]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 04:34:22 np0005604790 python3.9[53414]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb  2 04:34:23 np0005604790 python3.9[53608]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:34:24 np0005604790 systemd[1]: session-11.scope: Deactivated successfully.
Feb  2 04:34:24 np0005604790 systemd[1]: session-11.scope: Consumed 2.121s CPU time.
Feb  2 04:34:24 np0005604790 systemd-logind[793]: Session 11 logged out. Waiting for processes to exit.
Feb  2 04:34:24 np0005604790 systemd-logind[793]: Removed session 11.
Feb  2 04:34:24 np0005604790 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Feb  2 04:34:29 np0005604790 systemd-logind[793]: New session 12 of user zuul.
Feb  2 04:34:29 np0005604790 systemd[1]: Started Session 12 of User zuul.
Feb  2 04:34:30 np0005604790 python3.9[53790]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 04:34:31 np0005604790 python3.9[53944]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 04:34:32 np0005604790 python3.9[54101]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb  2 04:34:33 np0005604790 python3.9[54185]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  2 04:34:35 np0005604790 python3.9[54338]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb  2 04:34:36 np0005604790 python3.9[54534]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:34:37 np0005604790 python3.9[54686]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:34:37 np0005604790 systemd[1]: var-lib-containers-storage-overlay-compat4109752497-merged.mount: Deactivated successfully.
Feb  2 04:34:37 np0005604790 podman[54687]: 2026-02-02 09:34:37.435010273 +0000 UTC m=+0.045670185 system refresh
Feb  2 04:34:38 np0005604790 python3.9[54850]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:34:38 np0005604790 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb  2 04:34:39 np0005604790 python3.9[54973]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770024877.6941-192-152578617106409/.source.json follow=False _original_basename=podman_network_config.j2 checksum=3d584d7e7a04a2d57ee8dc0ff3a3a1e46dad6c7a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:34:39 np0005604790 python3.9[55125]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:34:40 np0005604790 python3.9[55248]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770024879.2774048-237-106379888305844/.source.conf follow=False _original_basename=registries.conf.j2 checksum=804a0d01b832e60d20f779a331306df708c87b02 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb  2 04:34:41 np0005604790 python3.9[55400]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb  2 04:34:42 np0005604790 python3.9[55552]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb  2 04:34:42 np0005604790 python3.9[55704]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb  2 04:34:43 np0005604790 python3.9[55856]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
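The four ini_file tasks converge /etc/containers/containers.conf on pids_limit = 4096 ([containers]), events_logger = "journald" and runtime = "crun" ([engine]), and network_backend = "netavark" ([network]). containers.conf is TOML, but for these simple key = value entries an INI writer produces compatible output; a sketch of the end state, not the module's implementation:

    import configparser
    from pathlib import Path

    # Target values taken from the ini_file invocations above; the quoted
    # strings are written with quotes, as TOML requires.
    SETTINGS = {
        "containers": {"pids_limit": "4096"},
        "engine": {"events_logger": '"journald"', "runtime": '"crun"'},
        "network": {"network_backend": '"netavark"'},
    }

    path = Path("/etc/containers/containers.conf")
    cfg = configparser.ConfigParser()
    cfg.read(path)                      # tolerates a missing file
    for section, options in SETTINGS.items():
        if not cfg.has_section(section):
            cfg.add_section(section)
        for key, value in options.items():
            cfg.set(section, key, value)
    with path.open("w") as fh:
        cfg.write(fh)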
Feb  2 04:34:43 np0005604790 python3.9[56008]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  2 04:34:46 np0005604790 python3.9[56161]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 04:34:46 np0005604790 python3.9[56315]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 04:34:47 np0005604790 python3.9[56467]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 04:34:48 np0005604790 python3.9[56619]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:34:49 np0005604790 python3.9[56772]: ansible-service_facts Invoked
Feb  2 04:34:49 np0005604790 network[56789]: You are using the 'network' service provided by 'network-scripts', which is now deprecated.
Feb  2 04:34:49 np0005604790 network[56790]: 'network-scripts' will be removed from the distribution in the near future.
Feb  2 04:34:49 np0005604790 network[56791]: It is advised to switch to 'NetworkManager' instead for network management.
Feb  2 04:34:54 np0005604790 python3.9[57243]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  2 04:34:57 np0005604790 python3.9[57396]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Feb  2 04:34:58 np0005604790 python3.9[57548]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:34:59 np0005604790 python3.9[57673]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770024898.401552-669-263654384428003/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:35:00 np0005604790 python3.9[57827]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:35:00 np0005604790 python3.9[57952]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770024899.7944024-714-3323267453776/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:35:02 np0005604790 python3.9[58106]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
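The lineinfile task pins PEERNTP=no in /etc/sysconfig/network, which stops DHCP-supplied NTP servers from being fed to chronyd. The ensure-line pattern it applies, sketched in Python (regexp and replacement line are the task's parameters; backup handling is omitted, and this matches the task's effect for the usual single-match case):

    import re
    from pathlib import Path

    path = Path("/etc/sysconfig/network")
    lines = path.read_text().splitlines() if path.exists() else []

    # Replace a line matching ^PEERNTP=, or append one if none exists.
    for i, line in enumerate(lines):
        if re.match(r"^PEERNTP=", line):
            lines[i] = "PEERNTP=no"
            break
    else:
        lines.append("PEERNTP=no")

    path.write_text("\n".join(lines) + "\n")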
Feb  2 04:35:04 np0005604790 python3.9[58260]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb  2 04:35:05 np0005604790 python3.9[58344]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 04:35:06 np0005604790 python3.9[58498]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb  2 04:35:07 np0005604790 python3.9[58582]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb  2 04:35:07 np0005604790 chronyd[805]: chronyd exiting
Feb  2 04:35:07 np0005604790 systemd[1]: Stopping NTP client/server...
Feb  2 04:35:07 np0005604790 systemd[1]: chronyd.service: Deactivated successfully.
Feb  2 04:35:07 np0005604790 systemd[1]: Stopped NTP client/server.
Feb  2 04:35:07 np0005604790 systemd[1]: Starting NTP client/server...
Feb  2 04:35:07 np0005604790 chronyd[58590]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Feb  2 04:35:07 np0005604790 chronyd[58590]: Frequency -26.541 +/- 0.392 ppm read from /var/lib/chrony/drift
Feb  2 04:35:07 np0005604790 chronyd[58590]: Loaded seccomp filter (level 2)
Feb  2 04:35:07 np0005604790 systemd[1]: Started NTP client/server.
Feb  2 04:35:08 np0005604790 systemd[1]: session-12.scope: Deactivated successfully.
Feb  2 04:35:08 np0005604790 systemd[1]: session-12.scope: Consumed 22.634s CPU time.
Feb  2 04:35:08 np0005604790 systemd-logind[793]: Session 12 logged out. Waiting for processes to exit.
Feb  2 04:35:08 np0005604790 systemd-logind[793]: Removed session 12.
Feb  2 04:35:13 np0005604790 systemd-logind[793]: New session 13 of user zuul.
Feb  2 04:35:13 np0005604790 systemd[1]: Started Session 13 of User zuul.
Feb  2 04:35:14 np0005604790 python3.9[58771]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:35:14 np0005604790 python3.9[58923]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:35:15 np0005604790 python3.9[59046]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770024914.340914-57-70921112661376/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:35:15 np0005604790 systemd[1]: session-13.scope: Deactivated successfully.
Feb  2 04:35:15 np0005604790 systemd[1]: session-13.scope: Consumed 1.449s CPU time.
Feb  2 04:35:15 np0005604790 systemd-logind[793]: Session 13 logged out. Waiting for processes to exit.
Feb  2 04:35:15 np0005604790 systemd-logind[793]: Removed session 13.
Feb  2 04:35:21 np0005604790 systemd-logind[793]: New session 14 of user zuul.
Feb  2 04:35:21 np0005604790 systemd[1]: Started Session 14 of User zuul.
Feb  2 04:35:22 np0005604790 python3.9[59224]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 04:35:23 np0005604790 python3.9[59380]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:35:24 np0005604790 python3.9[59555]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:35:24 np0005604790 python3.9[59678]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1770024923.614577-78-193708550589633/.source.json _original_basename=.f61hc4io follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:35:25 np0005604790 python3.9[59830]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:35:26 np0005604790 python3.9[59953]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770024925.266355-147-147296121173618/.source _original_basename=.hxkcb5rk follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:35:27 np0005604790 python3.9[60105]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb  2 04:35:27 np0005604790 python3.9[60257]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:35:28 np0005604790 python3.9[60380]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770024927.2156584-219-256921237025841/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb  2 04:35:29 np0005604790 python3.9[60532]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:35:29 np0005604790 python3.9[60655]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770024928.4007301-219-250510151772213/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb  2 04:35:30 np0005604790 python3.9[60807]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:35:30 np0005604790 python3.9[60959]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:35:31 np0005604790 python3.9[61082]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770024930.4272113-330-100110699427129/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:35:32 np0005604790 python3.9[61234]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:35:32 np0005604790 python3.9[61357]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770024931.6674402-375-19658074603207/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:35:33 np0005604790 python3.9[61509]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 04:35:33 np0005604790 systemd[1]: Reloading.
Feb  2 04:35:33 np0005604790 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 04:35:33 np0005604790 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 04:35:33 np0005604790 systemd[1]: Reloading.
Feb  2 04:35:34 np0005604790 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 04:35:34 np0005604790 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 04:35:34 np0005604790 systemd[1]: Starting EDPM Container Shutdown...
Feb  2 04:35:34 np0005604790 systemd[1]: Finished EDPM Container Shutdown.
Feb  2 04:35:34 np0005604790 python3.9[61736]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:35:35 np0005604790 python3.9[61859]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770024934.317878-444-47112934083136/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:35:35 np0005604790 python3.9[62011]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:35:36 np0005604790 python3.9[62134]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770024935.5350606-489-33558178304265/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:35:37 np0005604790 python3.9[62286]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 04:35:37 np0005604790 systemd[1]: Reloading.
Feb  2 04:35:37 np0005604790 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 04:35:37 np0005604790 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 04:35:37 np0005604790 systemd[1]: Reloading.
Feb  2 04:35:37 np0005604790 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 04:35:37 np0005604790 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 04:35:37 np0005604790 systemd[1]: Starting Create netns directory...
Feb  2 04:35:37 np0005604790 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Feb  2 04:35:37 np0005604790 systemd[1]: netns-placeholder.service: Deactivated successfully.
Feb  2 04:35:37 np0005604790 systemd[1]: Finished Create netns directory.
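Both edpm-container-shutdown and netns-placeholder are installed as unit + preset pairs: the .preset file under /etc/systemd/system-preset declares the enablement policy, and the ansible systemd task then does daemon_reload + enabled=True + state=started (the two "Reloading." passes above). A sketch of the equivalent sequence; the preset body is an assumption, since only the file names appear in the log:

    import subprocess
    from pathlib import Path

    unit = "netns-placeholder.service"

    # Presumed preset content: a single "enable" directive for the unit.
    Path("/etc/systemd/system-preset/91-netns-placeholder.preset").write_text(
        f"enable {unit}\n")

    for cmd in (
        ["systemctl", "daemon-reload"],   # pick up the new unit file
        ["systemctl", "enable", unit],    # enabled=True
        ["systemctl", "start", unit],     # state=started
    ):
        subprocess.run(cmd, check=True)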
Feb  2 04:35:38 np0005604790 python3.9[62512]: ansible-ansible.builtin.service_facts Invoked
Feb  2 04:35:38 np0005604790 network[62529]: You are using the 'network' service provided by 'network-scripts', which is now deprecated.
Feb  2 04:35:38 np0005604790 network[62530]: 'network-scripts' will be removed from the distribution in the near future.
Feb  2 04:35:38 np0005604790 network[62531]: It is advised to switch to 'NetworkManager' instead for network management.
Feb  2 04:35:43 np0005604790 python3.9[62793]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 04:35:43 np0005604790 systemd[1]: Reloading.
Feb  2 04:35:43 np0005604790 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 04:35:43 np0005604790 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 04:35:43 np0005604790 systemd[1]: Stopping IPv4 firewall with iptables...
Feb  2 04:35:43 np0005604790 iptables.init[62833]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Feb  2 04:35:43 np0005604790 iptables.init[62833]: iptables: Flushing firewall rules: [  OK  ]
Feb  2 04:35:43 np0005604790 systemd[1]: iptables.service: Deactivated successfully.
Feb  2 04:35:43 np0005604790 systemd[1]: Stopped IPv4 firewall with iptables.
Feb  2 04:35:44 np0005604790 python3.9[63029]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 04:35:45 np0005604790 python3.9[63183]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 04:35:45 np0005604790 systemd[1]: Reloading.
Feb  2 04:35:45 np0005604790 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 04:35:45 np0005604790 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 04:35:45 np0005604790 systemd[1]: Starting Netfilter Tables...
Feb  2 04:35:45 np0005604790 systemd[1]: Finished Netfilter Tables.
Feb  2 04:35:46 np0005604790 python3.9[63374]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
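This sequence hands the firewall over from the legacy iptables services to nftables: iptables.service and ip6tables.service are stopped and disabled (flushing their rules on the way down), nftables.service is enabled and started, and the ruleset is flushed so the EDPM-managed rule files can be loaded from a clean slate. A sketch of the same handover:

    import subprocess

    def systemctl(*args):
        subprocess.run(["systemctl", *args], check=True)

    # Retire the legacy services, bring up nftables, start from empty.
    for svc in ("iptables.service", "ip6tables.service"):
        systemctl("stop", svc)
        systemctl("disable", svc)
    systemctl("enable", "--now", "nftables")
    subprocess.run(["nft", "flush", "ruleset"], check=True)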
Feb  2 04:35:47 np0005604790 python3.9[63527]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:35:48 np0005604790 python3.9[63652]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1770024947.07356-696-192175601594127/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:35:49 np0005604790 python3.9[63805]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb  2 04:35:49 np0005604790 systemd[1]: Reloading OpenSSH server daemon...
Feb  2 04:35:49 np0005604790 systemd[1]: Reloaded OpenSSH server daemon.
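The sshd_config copy uses validate=/usr/sbin/sshd -T -f %s: the candidate file is syntax-checked before it replaces the live config, and sshd is then reloaded rather than restarted, so existing sessions survive. A sketch of that validate-then-install pattern, assuming the new content is already in new_config:

    import os, subprocess, tempfile

    def install_sshd_config(new_config: str, dest="/etc/ssh/sshd_config"):
        # Stage the candidate, validate it with `sshd -T -f`, then
        # atomically replace the live file and reload the daemon.
        fd, tmp = tempfile.mkstemp(dir=os.path.dirname(dest))
        with os.fdopen(fd, "w") as fh:
            fh.write(new_config)
        subprocess.run(["/usr/sbin/sshd", "-T", "-f", tmp],
                       check=True, capture_output=True)
        os.chmod(tmp, 0o600)          # mode=0600, as in the task above
        os.replace(tmp, dest)
        subprocess.run(["systemctl", "reload", "sshd"], check=True)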
Feb  2 04:35:49 np0005604790 python3.9[63961]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:35:50 np0005604790 python3.9[64113]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:35:50 np0005604790 python3.9[64236]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770024949.9776266-789-148728962884188/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:35:52 np0005604790 python3.9[64388]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Feb  2 04:35:52 np0005604790 systemd[1]: Starting Time & Date Service...
Feb  2 04:35:52 np0005604790 systemd[1]: Started Time & Date Service.
Feb  2 04:35:52 np0005604790 python3.9[64544]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:35:53 np0005604790 python3.9[64696]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:35:54 np0005604790 python3.9[64819]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770024953.2318108-894-120535376338038/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:35:55 np0005604790 python3.9[64971]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:35:55 np0005604790 python3.9[65094]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770024954.5585878-939-138844872270415/.source.yaml _original_basename=.so75thco follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:35:56 np0005604790 python3.9[65246]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:35:56 np0005604790 python3.9[65369]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770024955.788162-984-44682329036792/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:35:57 np0005604790 python3.9[65521]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:35:58 np0005604790 python3.9[65674]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:35:58 np0005604790 python3[65827]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Feb  2 04:35:59 np0005604790 python3.9[65979]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:35:59 np0005604790 python3.9[66102]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770024959.0505402-1101-134193972539255/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:36:00 np0005604790 python3.9[66254]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:36:01 np0005604790 python3.9[66377]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770024960.25469-1146-22430650055394/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:36:01 np0005604790 python3.9[66531]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:36:02 np0005604790 python3.9[66654]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770024961.3463488-1191-278484376980123/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:36:02 np0005604790 python3.9[66806]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:36:03 np0005604790 python3.9[66929]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770024962.5277352-1236-109153407268200/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:36:04 np0005604790 python3.9[67081]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:36:04 np0005604790 python3.9[67204]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770024963.7060359-1281-87166652471314/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:36:05 np0005604790 python3.9[67356]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:36:06 np0005604790 python3.9[67508]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:36:06 np0005604790 python3.9[67667]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
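The EDPM ruleset is split across chains / flushes / rules / update-jumps / jumps files; they are concatenated and dry-run with nft -c -f - before anything goes live, and /etc/sysconfig/nftables.conf then gets a validated include block so the rules persist across reboots. A sketch of the check step, mirroring the piped command logged above:

    import subprocess
    from pathlib import Path

    FILES = [
        "/etc/nftables/edpm-chains.nft",
        "/etc/nftables/edpm-flushes.nft",
        "/etc/nftables/edpm-rules.nft",
        "/etc/nftables/edpm-update-jumps.nft",
        "/etc/nftables/edpm-jumps.nft",
    ]

    # Equivalent of: cat <files> | nft -c -f -   (-c = check only, no commit)
    ruleset = "".join(Path(f).read_text() for f in FILES)
    subprocess.run(["nft", "-c", "-f", "-"], input=ruleset, text=True, check=True)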
Feb  2 04:36:07 np0005604790 python3.9[67820]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:36:08 np0005604790 python3.9[67972]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:36:09 np0005604790 python3.9[68124]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Feb  2 04:36:09 np0005604790 python3.9[68277]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
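The two mount tasks create page-size-specific hugetlbfs instances (1 GiB pages under /dev/hugepages1G, 2 MiB under /dev/hugepages2M); with boot=True the module also writes matching /etc/fstab entries. The equivalent mount commands, sketched (the "none" source and pagesize options come straight from the task parameters):

    import subprocess

    # mount -t hugetlbfs -o pagesize=<size> none <dir>, as recorded above;
    # the ansible module additionally persists these to /etc/fstab.
    for size, mountpoint in (("1G", "/dev/hugepages1G"), ("2M", "/dev/hugepages2M")):
        subprocess.run(
            ["mount", "-t", "hugetlbfs", "-o", f"pagesize={size}", "none", mountpoint],
            check=True)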
Feb  2 04:36:10 np0005604790 systemd[1]: session-14.scope: Deactivated successfully.
Feb  2 04:36:10 np0005604790 systemd[1]: session-14.scope: Consumed 31.992s CPU time.
Feb  2 04:36:10 np0005604790 systemd-logind[793]: Session 14 logged out. Waiting for processes to exit.
Feb  2 04:36:10 np0005604790 systemd-logind[793]: Removed session 14.
Feb  2 04:36:15 np0005604790 systemd-logind[793]: New session 15 of user zuul.
Feb  2 04:36:15 np0005604790 systemd[1]: Started Session 15 of User zuul.
Feb  2 04:36:16 np0005604790 python3.9[68458]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Feb  2 04:36:17 np0005604790 python3.9[68610]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 04:36:18 np0005604790 python3.9[68763]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 04:36:19 np0005604790 python3.9[68915]: ansible-ansible.builtin.blockinfile Invoked with block=compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDTA16t8OsOL4s99BOiNF3vckRPwnc9DwrgEMUjNAF5ofBbR7O7JlFD47GnI33lZr51vVc0wnvTxhpFA0jVvhKqVWdJ3lApNf34bJmaJBr8uiy/i3Q84MsUtXBLQ0FDCbwgaPnreNbMz3ae+u9H+Z73jQSP+gnQ5oYWhONHgO4HHkF8K7a8Bow3H5qwfbHz8o7mFQmTpYHwOcwhA53BTbh1NiEJZJNSg7wi1hH7vELUAzts1cbF2slTE0nh8XjMogq9ukokrCIKfE+xX7PmAawCuMnfvGX93zF1298pGcUKqvpnIfUOMDGtJtYEZ8sWsr5aH1YXIoJfHuux/YosRx3XDD5oEcpX0nYKVW6bumHsFIS199XAM5LtWWNr2eMcrbZhVwHNdELC6zoL7QjbBQ+2j/+8nJLq9vIghewgO3EFWK3r7kIVQZg8GYLZ/yisH4cvzUTACRXAF+1o2rq+AUfX3nTSsrqyZQUwlnWpc1vsceEO0Lsuac5tvGylnsJBfmM=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIN317jbKb2FNELHPgcKtyDLq5kCgCZN/b/8qYDuirt4l#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNpgfrlTfGut7rGFnGEpIiXrs2U1SQK0Fr1bAmmw8notvdnn6jtGfPfwX96hGwcOu4AlAS/i7X7XgbLw573Ooww=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDXvxaVTYbHTHv+9EzKdF3T8+Yr2otW2YLuSqNTF+yJaKACfB7wDlIhKDGTHiU1FDrkO4tJ+R3OL/2ZXoIlxp5JSdCgcb42X+5PTj1wPkayVlQW7e0wQvT3kYhrcPtjLgk4T39/sionMGYUat45idwoB6hUSPLdk/L5+n0/3LEg1lByOM/B1/p8wGzHn6H9CWoIP3Ctd6lmrxtIVU1u+pxiBVQCcMjw5gtqsB54l670fL7El5XEkqjRjKHhylw9QTYN3AWMKuQKwcjClm/57/SoFMP7o52r653wGDH9cpvDgs0RYG4bA1mGY5OMkYbDJfcy0CViKEu5qWW4cTBLh/Z88D2EuNlINj3Q1YJk3RwF6vYl31MMsbBW10YhIiBJrA5XF0BLARqBOZ1e6v7JKTSwa7wGGtRzEzbY+me9zl6ZhhDru/I+h24J4MeBA07HvQIS2v8O95tPz76YZJ3DkWlywFWbALG8M4+fkpuQtvVpBZMgdvIWW0kfXO/grGnrgY8=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIG3OEs+fDFWrKRKifY4uXYtOpS/6/8E88qPQNs1apj/z#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFy9hRh0QDNcy30491f4FwmL+9BopSuPxbkVyWhY9VytT/FG5rm9/DLYyukpd9IKttcZyerq0gzfokDrht76FB4=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCpaaLVd9Gqbxcksz46sKNkp3Eu2TY3fUjtOhbkLQru93qJt/RNDTocNiUrE9VAj/UXp9dZqSHg1Hr7ScqXu7zqgZ9i+mq6N7P7QR+ZkN8jLQSybnPztI7X/QWaPhT0j1ArMrYk2F2Me+kAQiFL0GoR2d8udRElL8YKKIYQ6zjC/h2ZsU0WyVET9uiTgeMP/njtMzRSgO2Wp6no4KqJEOMSEY1lgURjVsMWkTr4hGz523SooA41GzquuNamnj1ELwKZSAH+TtVgI8oFJ2T+5TZiE/oW2MizbBwjKA3V5DlnGOEG49eG+LhZ/eWb6jQ7OnJARA/iLU/FsJ+CaGSbRK20/OWXP4JSZu7liaD0DIHM0DwrjEnQcXI6SbfAoAQ494KFtZvFamem7CPtrVhgNAKqybRbDcEQGpDxQgrWeA3m4HyGIBym+IvMUfYlNke9frCkwNpXRH93TK6E/ziPFrBHKkdRcFxVdsG2u1Y+adxOQk7KCjq/skzXBPCPDaHnzBM=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIKtQmhiX/LRkxZONUn47u07V1HNePVW1EWKmTbmuGuY#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBE0cPV3BwiB9Cc5Ne48bCCSZwMzF/hH7iFXwAiP/TK2pzWYsdZw1mOSJ+vDu1KclkDtQKmwN6Cu0N7j7domqlzE=#012 create=True mode=0644 path=/tmp/ansible.gux3sgdn state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:36:19 np0005604790 python3.9[69067]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.gux3sgdn' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:36:20 np0005604790 python3.9[69221]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.gux3sgdn state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:36:21 np0005604790 systemd[1]: session-15.scope: Deactivated successfully.
Feb  2 04:36:21 np0005604790 systemd[1]: session-15.scope: Consumed 3.167s CPU time.
Feb  2 04:36:21 np0005604790 systemd-logind[793]: Session 15 logged out. Waiting for processes to exit.
Feb  2 04:36:21 np0005604790 systemd-logind[793]: Removed session 15.
Feb  2 04:36:22 np0005604790 systemd[1]: systemd-timedated.service: Deactivated successfully.
Feb  2 04:36:26 np0005604790 systemd-logind[793]: New session 16 of user zuul.
Feb  2 04:36:26 np0005604790 systemd[1]: Started Session 16 of User zuul.
Feb  2 04:36:27 np0005604790 python3.9[69401]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 04:36:29 np0005604790 python3.9[69557]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Feb  2 04:36:29 np0005604790 python3.9[69711]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb  2 04:36:30 np0005604790 python3.9[69864]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:36:31 np0005604790 python3.9[70017]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 04:36:32 np0005604790 python3.9[70171]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:36:33 np0005604790 python3.9[70326]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:36:33 np0005604790 systemd[1]: session-16.scope: Deactivated successfully.
Feb  2 04:36:33 np0005604790 systemd[1]: session-16.scope: Consumed 4.050s CPU time.
Feb  2 04:36:33 np0005604790 systemd-logind[793]: Session 16 logged out. Waiting for processes to exit.
Feb  2 04:36:33 np0005604790 systemd-logind[793]: Removed session 16.
Feb  2 04:36:38 np0005604790 systemd-logind[793]: New session 17 of user zuul.
Feb  2 04:36:38 np0005604790 systemd[1]: Started Session 17 of User zuul.
Feb  2 04:36:40 np0005604790 python3.9[70504]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 04:36:40 np0005604790 python3.9[70660]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb  2 04:36:41 np0005604790 python3.9[70744]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Feb  2 04:36:43 np0005604790 python3.9[70895]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:36:45 np0005604790 python3.9[71046]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Feb  2 04:36:45 np0005604790 python3.9[71196]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 04:36:45 np0005604790 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb  2 04:36:46 np0005604790 python3.9[71347]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 04:36:47 np0005604790 systemd[1]: session-17.scope: Deactivated successfully.
Feb  2 04:36:47 np0005604790 systemd[1]: session-17.scope: Consumed 5.079s CPU time.
Feb  2 04:36:47 np0005604790 systemd-logind[793]: Session 17 logged out. Waiting for processes to exit.
Feb  2 04:36:47 np0005604790 systemd-logind[793]: Removed session 17.
Feb  2 04:36:55 np0005604790 systemd-logind[793]: New session 18 of user zuul.
Feb  2 04:36:55 np0005604790 systemd[1]: Started Session 18 of User zuul.
Feb  2 04:37:01 np0005604790 python3[72113]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 04:37:02 np0005604790 python3[72208]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Feb  2 04:37:04 np0005604790 python3[72235]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb  2 04:37:04 np0005604790 python3[72261]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G#012losetup /dev/loop3 /var/lib/ceph-osd-0.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:37:04 np0005604790 kernel: loop: module loaded
Feb  2 04:37:04 np0005604790 kernel: loop3: detected capacity change from 0 to 41943040
Feb  2 04:37:04 np0005604790 python3[72296]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3#012vgcreate ceph_vg0 /dev/loop3#012lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:37:05 np0005604790 lvm[72299]: PV /dev/loop3 not used.
Feb  2 04:37:05 np0005604790 lvm[72301]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 04:37:05 np0005604790 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Feb  2 04:37:05 np0005604790 lvm[72311]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 04:37:05 np0005604790 lvm[72311]: VG ceph_vg0 finished
Feb  2 04:37:05 np0005604790 lvm[72310]:  1 logical volume(s) in volume group "ceph_vg0" now active
Feb  2 04:37:05 np0005604790 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Feb  2 04:37:05 np0005604790 python3[72389]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  2 04:37:06 np0005604790 python3[72462]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1770025025.5165863-36891-117436320196782/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:37:06 np0005604790 python3[72512]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 04:37:06 np0005604790 systemd[1]: Reloading.
Feb  2 04:37:07 np0005604790 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 04:37:07 np0005604790 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 04:37:07 np0005604790 systemd[1]: Starting Ceph OSD losetup...
Feb  2 04:37:07 np0005604790 bash[72552]: /dev/loop3: [64513]:4329562 (/var/lib/ceph-osd-0.img)
Feb  2 04:37:07 np0005604790 systemd[1]: Finished Ceph OSD losetup.
Feb  2 04:37:07 np0005604790 lvm[72553]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 04:37:07 np0005604790 lvm[72553]: VG ceph_vg0 finished
Feb  2 04:37:09 np0005604790 python3[72577]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 04:37:11 np0005604790 python3[72670]: ansible-ansible.legacy.dnf Invoked with name=['centos-release-ceph-squid'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Feb  2 04:37:13 np0005604790 python3[72727]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Feb  2 04:37:16 np0005604790 chronyd[58590]: Selected source 142.4.192.253 (pool.ntp.org)
Feb  2 04:37:17 np0005604790 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb  2 04:37:17 np0005604790 systemd[1]: Starting man-db-cache-update.service...
Feb  2 04:37:17 np0005604790 python3[72841]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb  2 04:37:17 np0005604790 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb  2 04:37:17 np0005604790 systemd[1]: Finished man-db-cache-update.service.
Feb  2 04:37:17 np0005604790 systemd[1]: run-r2b1a32b2199a4e3a93607423134a3a45.service: Deactivated successfully.
Feb  2 04:37:17 np0005604790 python3[72870]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:37:18 np0005604790 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb  2 04:37:18 np0005604790 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb  2 04:37:18 np0005604790 python3[72935]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:37:18 np0005604790 python3[72961]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:37:19 np0005604790 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb  2 04:37:19 np0005604790 python3[73039]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  2 04:37:20 np0005604790 python3[73112]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1770025039.387792-37083-138596983528594/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=a2c84611a4e46cfce32a90c112eae0345cab6abb backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:37:20 np0005604790 python3[73214]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  2 04:37:21 np0005604790 python3[73287]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1770025040.5507035-37101-473953240201/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:37:21 np0005604790 python3[73337]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb  2 04:37:21 np0005604790 python3[73365]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb  2 04:37:22 np0005604790 python3[73393]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb  2 04:37:22 np0005604790 python3[73419]: ansible-ansible.builtin.stat Invoked with path=/tmp/cephadm_registry.json follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb  2 04:37:23 np0005604790 python3[73445]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid d241d473-9fcb-5f74-b163-f1ca4454e7f1 --config /home/ceph-admin/assimilate_ceph.conf \--skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:37:23 np0005604790 systemd[1]: Created slice User Slice of UID 42477.
Feb  2 04:37:23 np0005604790 systemd[1]: Starting User Runtime Directory /run/user/42477...
Feb  2 04:37:23 np0005604790 systemd-logind[793]: New session 19 of user ceph-admin.
Feb  2 04:37:23 np0005604790 systemd[1]: Finished User Runtime Directory /run/user/42477.
Feb  2 04:37:23 np0005604790 systemd[1]: Starting User Manager for UID 42477...
Feb  2 04:37:23 np0005604790 systemd[73453]: Queued start job for default target Main User Target.
Feb  2 04:37:23 np0005604790 systemd[73453]: Created slice User Application Slice.
Feb  2 04:37:23 np0005604790 systemd[73453]: Started Mark boot as successful after the user session has run 2 minutes.
Feb  2 04:37:23 np0005604790 systemd[73453]: Started Daily Cleanup of User's Temporary Directories.
Feb  2 04:37:23 np0005604790 systemd[73453]: Reached target Paths.
Feb  2 04:37:23 np0005604790 systemd[73453]: Reached target Timers.
Feb  2 04:37:23 np0005604790 systemd[73453]: Starting D-Bus User Message Bus Socket...
Feb  2 04:37:23 np0005604790 systemd[73453]: Starting Create User's Volatile Files and Directories...
Feb  2 04:37:23 np0005604790 systemd[73453]: Finished Create User's Volatile Files and Directories.
Feb  2 04:37:23 np0005604790 systemd[73453]: Listening on D-Bus User Message Bus Socket.
Feb  2 04:37:23 np0005604790 systemd[73453]: Reached target Sockets.
Feb  2 04:37:23 np0005604790 systemd[73453]: Reached target Basic System.
Feb  2 04:37:23 np0005604790 systemd[73453]: Reached target Main User Target.
Feb  2 04:37:23 np0005604790 systemd[73453]: Startup finished in 130ms.
Feb  2 04:37:23 np0005604790 systemd[1]: Started User Manager for UID 42477.
Feb  2 04:37:23 np0005604790 systemd[1]: Started Session 19 of User ceph-admin.
Feb  2 04:37:23 np0005604790 systemd[1]: session-19.scope: Deactivated successfully.
Feb  2 04:37:23 np0005604790 systemd-logind[793]: Session 19 logged out. Waiting for processes to exit.
Feb  2 04:37:23 np0005604790 systemd-logind[793]: Removed session 19.
Feb  2 04:37:23 np0005604790 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb  2 04:37:23 np0005604790 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb  2 04:37:26 np0005604790 systemd[1]: var-lib-containers-storage-overlay-compat2349108034-lower\x2dmapped.mount: Deactivated successfully.
Feb  2 04:37:33 np0005604790 systemd[1]: Stopping User Manager for UID 42477...
Feb  2 04:37:33 np0005604790 systemd[73453]: Activating special unit Exit the Session...
Feb  2 04:37:33 np0005604790 systemd[73453]: Stopped target Main User Target.
Feb  2 04:37:33 np0005604790 systemd[73453]: Stopped target Basic System.
Feb  2 04:37:33 np0005604790 systemd[73453]: Stopped target Paths.
Feb  2 04:37:33 np0005604790 systemd[73453]: Stopped target Sockets.
Feb  2 04:37:33 np0005604790 systemd[73453]: Stopped target Timers.
Feb  2 04:37:33 np0005604790 systemd[73453]: Stopped Mark boot as successful after the user session has run 2 minutes.
Feb  2 04:37:33 np0005604790 systemd[73453]: Stopped Daily Cleanup of User's Temporary Directories.
Feb  2 04:37:33 np0005604790 systemd[73453]: Closed D-Bus User Message Bus Socket.
Feb  2 04:37:33 np0005604790 systemd[73453]: Stopped Create User's Volatile Files and Directories.
Feb  2 04:37:33 np0005604790 systemd[73453]: Removed slice User Application Slice.
Feb  2 04:37:33 np0005604790 systemd[73453]: Reached target Shutdown.
Feb  2 04:37:33 np0005604790 systemd[73453]: Finished Exit the Session.
Feb  2 04:37:33 np0005604790 systemd[73453]: Reached target Exit the Session.
Feb  2 04:37:33 np0005604790 systemd[1]: user@42477.service: Deactivated successfully.
Feb  2 04:37:33 np0005604790 systemd[1]: Stopped User Manager for UID 42477.
Feb  2 04:37:33 np0005604790 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Feb  2 04:37:33 np0005604790 systemd[1]: run-user-42477.mount: Deactivated successfully.
Feb  2 04:37:33 np0005604790 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Feb  2 04:37:33 np0005604790 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Feb  2 04:37:33 np0005604790 systemd[1]: Removed slice User Slice of UID 42477.
Feb  2 04:37:40 np0005604790 podman[73545]: 2026-02-02 09:37:40.341755318 +0000 UTC m=+16.351077520 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:37:40 np0005604790 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb  2 04:37:40 np0005604790 podman[73610]: 2026-02-02 09:37:40.424559926 +0000 UTC m=+0.061038311 container create f3b8f673faa0fe00f1c1ae89241c560711ed2feb29fd54b921481afa9a1a4f8d (image=quay.io/ceph/ceph:v19, name=serene_edison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Feb  2 04:37:40 np0005604790 systemd[1]: var-lib-containers-storage-overlay-volatile\x2dcheck895247093-merged.mount: Deactivated successfully.
Feb  2 04:37:40 np0005604790 systemd[1]: Created slice Virtual Machine and Container Slice.
Feb  2 04:37:40 np0005604790 systemd[1]: Started libpod-conmon-f3b8f673faa0fe00f1c1ae89241c560711ed2feb29fd54b921481afa9a1a4f8d.scope.
Feb  2 04:37:40 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:37:40 np0005604790 podman[73610]: 2026-02-02 09:37:40.397992051 +0000 UTC m=+0.034470476 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:37:40 np0005604790 podman[73610]: 2026-02-02 09:37:40.536076466 +0000 UTC m=+0.172554911 container init f3b8f673faa0fe00f1c1ae89241c560711ed2feb29fd54b921481afa9a1a4f8d (image=quay.io/ceph/ceph:v19, name=serene_edison, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 04:37:40 np0005604790 podman[73610]: 2026-02-02 09:37:40.544640973 +0000 UTC m=+0.181119348 container start f3b8f673faa0fe00f1c1ae89241c560711ed2feb29fd54b921481afa9a1a4f8d (image=quay.io/ceph/ceph:v19, name=serene_edison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 04:37:40 np0005604790 podman[73610]: 2026-02-02 09:37:40.548371972 +0000 UTC m=+0.184850407 container attach f3b8f673faa0fe00f1c1ae89241c560711ed2feb29fd54b921481afa9a1a4f8d (image=quay.io/ceph/ceph:v19, name=serene_edison, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325)
Feb  2 04:37:40 np0005604790 serene_edison[73627]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)
Feb  2 04:37:40 np0005604790 systemd[1]: libpod-f3b8f673faa0fe00f1c1ae89241c560711ed2feb29fd54b921481afa9a1a4f8d.scope: Deactivated successfully.
Feb  2 04:37:40 np0005604790 podman[73610]: 2026-02-02 09:37:40.657095798 +0000 UTC m=+0.293574143 container died f3b8f673faa0fe00f1c1ae89241c560711ed2feb29fd54b921481afa9a1a4f8d (image=quay.io/ceph/ceph:v19, name=serene_edison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb  2 04:37:40 np0005604790 systemd[1]: var-lib-containers-storage-overlay-70b04a9e1a05a31f9e28e23a9b8101c59b900a361e88471bab9e4f76a09405af-merged.mount: Deactivated successfully.
Feb  2 04:37:40 np0005604790 podman[73610]: 2026-02-02 09:37:40.706291883 +0000 UTC m=+0.342770268 container remove f3b8f673faa0fe00f1c1ae89241c560711ed2feb29fd54b921481afa9a1a4f8d (image=quay.io/ceph/ceph:v19, name=serene_edison, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 04:37:40 np0005604790 systemd[1]: libpod-conmon-f3b8f673faa0fe00f1c1ae89241c560711ed2feb29fd54b921481afa9a1a4f8d.scope: Deactivated successfully.
Feb  2 04:37:40 np0005604790 podman[73644]: 2026-02-02 09:37:40.771115604 +0000 UTC m=+0.048589191 container create d8851c8fc6933861738fdec92556deb949d6c3ff0dde2751f24a9492f98bb980 (image=quay.io/ceph/ceph:v19, name=zealous_wright, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:37:40 np0005604790 systemd[1]: Started libpod-conmon-d8851c8fc6933861738fdec92556deb949d6c3ff0dde2751f24a9492f98bb980.scope.
Feb  2 04:37:40 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:37:40 np0005604790 podman[73644]: 2026-02-02 09:37:40.835280497 +0000 UTC m=+0.112754094 container init d8851c8fc6933861738fdec92556deb949d6c3ff0dde2751f24a9492f98bb980 (image=quay.io/ceph/ceph:v19, name=zealous_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 04:37:40 np0005604790 podman[73644]: 2026-02-02 09:37:40.745657838 +0000 UTC m=+0.023131485 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:37:40 np0005604790 podman[73644]: 2026-02-02 09:37:40.842597291 +0000 UTC m=+0.120070868 container start d8851c8fc6933861738fdec92556deb949d6c3ff0dde2751f24a9492f98bb980 (image=quay.io/ceph/ceph:v19, name=zealous_wright, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb  2 04:37:40 np0005604790 podman[73644]: 2026-02-02 09:37:40.845996731 +0000 UTC m=+0.123470348 container attach d8851c8fc6933861738fdec92556deb949d6c3ff0dde2751f24a9492f98bb980 (image=quay.io/ceph/ceph:v19, name=zealous_wright, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb  2 04:37:40 np0005604790 zealous_wright[73660]: 167 167
Feb  2 04:37:40 np0005604790 systemd[1]: libpod-d8851c8fc6933861738fdec92556deb949d6c3ff0dde2751f24a9492f98bb980.scope: Deactivated successfully.
Feb  2 04:37:40 np0005604790 podman[73644]: 2026-02-02 09:37:40.847732837 +0000 UTC m=+0.125206414 container died d8851c8fc6933861738fdec92556deb949d6c3ff0dde2751f24a9492f98bb980 (image=quay.io/ceph/ceph:v19, name=zealous_wright, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb  2 04:37:40 np0005604790 podman[73644]: 2026-02-02 09:37:40.88929738 +0000 UTC m=+0.166770947 container remove d8851c8fc6933861738fdec92556deb949d6c3ff0dde2751f24a9492f98bb980 (image=quay.io/ceph/ceph:v19, name=zealous_wright, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:37:40 np0005604790 systemd[1]: libpod-conmon-d8851c8fc6933861738fdec92556deb949d6c3ff0dde2751f24a9492f98bb980.scope: Deactivated successfully.
Feb  2 04:37:40 np0005604790 podman[73677]: 2026-02-02 09:37:40.958144847 +0000 UTC m=+0.049422302 container create 4714fad6dbb82b988e944e395c7b0f2e70ef8e60e543507af60577c7f65ca9b8 (image=quay.io/ceph/ceph:v19, name=youthful_jepsen, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:37:40 np0005604790 systemd[1]: Started libpod-conmon-4714fad6dbb82b988e944e395c7b0f2e70ef8e60e543507af60577c7f65ca9b8.scope.
Feb  2 04:37:41 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:37:41 np0005604790 podman[73677]: 2026-02-02 09:37:41.021271793 +0000 UTC m=+0.112549248 container init 4714fad6dbb82b988e944e395c7b0f2e70ef8e60e543507af60577c7f65ca9b8 (image=quay.io/ceph/ceph:v19, name=youthful_jepsen, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Feb  2 04:37:41 np0005604790 podman[73677]: 2026-02-02 09:37:41.027147189 +0000 UTC m=+0.118424644 container start 4714fad6dbb82b988e944e395c7b0f2e70ef8e60e543507af60577c7f65ca9b8 (image=quay.io/ceph/ceph:v19, name=youthful_jepsen, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 04:37:41 np0005604790 podman[73677]: 2026-02-02 09:37:41.03057261 +0000 UTC m=+0.121850065 container attach 4714fad6dbb82b988e944e395c7b0f2e70ef8e60e543507af60577c7f65ca9b8 (image=quay.io/ceph/ceph:v19, name=youthful_jepsen, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Feb  2 04:37:41 np0005604790 podman[73677]: 2026-02-02 09:37:40.939906783 +0000 UTC m=+0.031184218 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:37:41 np0005604790 youthful_jepsen[73693]: AQBlcIBp3wNRAxAAvxsclybeLGI1Z9Hxki36SA==
Feb  2 04:37:41 np0005604790 systemd[1]: libpod-4714fad6dbb82b988e944e395c7b0f2e70ef8e60e543507af60577c7f65ca9b8.scope: Deactivated successfully.
Feb  2 04:37:41 np0005604790 podman[73677]: 2026-02-02 09:37:41.058748408 +0000 UTC m=+0.150025863 container died 4714fad6dbb82b988e944e395c7b0f2e70ef8e60e543507af60577c7f65ca9b8 (image=quay.io/ceph/ceph:v19, name=youthful_jepsen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb  2 04:37:41 np0005604790 podman[73677]: 2026-02-02 09:37:41.102964931 +0000 UTC m=+0.194242386 container remove 4714fad6dbb82b988e944e395c7b0f2e70ef8e60e543507af60577c7f65ca9b8 (image=quay.io/ceph/ceph:v19, name=youthful_jepsen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Feb  2 04:37:41 np0005604790 systemd[1]: libpod-conmon-4714fad6dbb82b988e944e395c7b0f2e70ef8e60e543507af60577c7f65ca9b8.scope: Deactivated successfully.
Feb  2 04:37:41 np0005604790 podman[73714]: 2026-02-02 09:37:41.169007924 +0000 UTC m=+0.045738275 container create 9bb322d47e04e1454bb195ade392f3846238f0dba46698c0893aaa530198e854 (image=quay.io/ceph/ceph:v19, name=hardcore_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 04:37:41 np0005604790 systemd[1]: Started libpod-conmon-9bb322d47e04e1454bb195ade392f3846238f0dba46698c0893aaa530198e854.scope.
Feb  2 04:37:41 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:37:41 np0005604790 podman[73714]: 2026-02-02 09:37:41.154111508 +0000 UTC m=+0.030841849 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:37:41 np0005604790 podman[73714]: 2026-02-02 09:37:41.269688436 +0000 UTC m=+0.146418847 container init 9bb322d47e04e1454bb195ade392f3846238f0dba46698c0893aaa530198e854 (image=quay.io/ceph/ceph:v19, name=hardcore_euler, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb  2 04:37:41 np0005604790 podman[73714]: 2026-02-02 09:37:41.275759567 +0000 UTC m=+0.152489918 container start 9bb322d47e04e1454bb195ade392f3846238f0dba46698c0893aaa530198e854 (image=quay.io/ceph/ceph:v19, name=hardcore_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325)
Feb  2 04:37:41 np0005604790 podman[73714]: 2026-02-02 09:37:41.279697882 +0000 UTC m=+0.156428233 container attach 9bb322d47e04e1454bb195ade392f3846238f0dba46698c0893aaa530198e854 (image=quay.io/ceph/ceph:v19, name=hardcore_euler, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb  2 04:37:41 np0005604790 hardcore_euler[73731]: AQBlcIBpd9KfEhAAW7lRviKueQ1hLAQRLzgzRw==
Feb  2 04:37:41 np0005604790 systemd[1]: libpod-9bb322d47e04e1454bb195ade392f3846238f0dba46698c0893aaa530198e854.scope: Deactivated successfully.
Feb  2 04:37:41 np0005604790 podman[73714]: 2026-02-02 09:37:41.316234911 +0000 UTC m=+0.192965272 container died 9bb322d47e04e1454bb195ade392f3846238f0dba46698c0893aaa530198e854 (image=quay.io/ceph/ceph:v19, name=hardcore_euler, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 04:37:41 np0005604790 podman[73714]: 2026-02-02 09:37:41.351187589 +0000 UTC m=+0.227917920 container remove 9bb322d47e04e1454bb195ade392f3846238f0dba46698c0893aaa530198e854 (image=quay.io/ceph/ceph:v19, name=hardcore_euler, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 04:37:41 np0005604790 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb  2 04:37:41 np0005604790 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb  2 04:37:41 np0005604790 systemd[1]: libpod-conmon-9bb322d47e04e1454bb195ade392f3846238f0dba46698c0893aaa530198e854.scope: Deactivated successfully.
Feb  2 04:37:41 np0005604790 podman[73751]: 2026-02-02 09:37:41.417209381 +0000 UTC m=+0.045632992 container create dcab7b91797d3c8f421f5a6371320d6d1496780ad41b109157cb8d82150e8a05 (image=quay.io/ceph/ceph:v19, name=frosty_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 04:37:41 np0005604790 systemd[1]: Started libpod-conmon-dcab7b91797d3c8f421f5a6371320d6d1496780ad41b109157cb8d82150e8a05.scope.
Feb  2 04:37:41 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:37:41 np0005604790 podman[73751]: 2026-02-02 09:37:41.401076293 +0000 UTC m=+0.029499974 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:37:41 np0005604790 podman[73751]: 2026-02-02 09:37:41.661664098 +0000 UTC m=+0.290087739 container init dcab7b91797d3c8f421f5a6371320d6d1496780ad41b109157cb8d82150e8a05 (image=quay.io/ceph/ceph:v19, name=frosty_dewdney, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Feb  2 04:37:41 np0005604790 podman[73751]: 2026-02-02 09:37:41.668437448 +0000 UTC m=+0.296861089 container start dcab7b91797d3c8f421f5a6371320d6d1496780ad41b109157cb8d82150e8a05 (image=quay.io/ceph/ceph:v19, name=frosty_dewdney, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:37:41 np0005604790 podman[73751]: 2026-02-02 09:37:41.672620829 +0000 UTC m=+0.301044470 container attach dcab7b91797d3c8f421f5a6371320d6d1496780ad41b109157cb8d82150e8a05 (image=quay.io/ceph/ceph:v19, name=frosty_dewdney, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb  2 04:37:41 np0005604790 frosty_dewdney[73767]: AQBlcIBp06CyKBAAmtdqune/il8rbcuAyXhQOg==
Feb  2 04:37:41 np0005604790 systemd[1]: libpod-dcab7b91797d3c8f421f5a6371320d6d1496780ad41b109157cb8d82150e8a05.scope: Deactivated successfully.
Feb  2 04:37:41 np0005604790 podman[73751]: 2026-02-02 09:37:41.684891285 +0000 UTC m=+0.313314896 container died dcab7b91797d3c8f421f5a6371320d6d1496780ad41b109157cb8d82150e8a05 (image=quay.io/ceph/ceph:v19, name=frosty_dewdney, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:37:41 np0005604790 podman[73751]: 2026-02-02 09:37:41.714374597 +0000 UTC m=+0.342798198 container remove dcab7b91797d3c8f421f5a6371320d6d1496780ad41b109157cb8d82150e8a05 (image=quay.io/ceph/ceph:v19, name=frosty_dewdney, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb  2 04:37:41 np0005604790 systemd[1]: libpod-conmon-dcab7b91797d3c8f421f5a6371320d6d1496780ad41b109157cb8d82150e8a05.scope: Deactivated successfully.
Feb  2 04:37:41 np0005604790 podman[73786]: 2026-02-02 09:37:41.775379736 +0000 UTC m=+0.045274702 container create 08c5829451c2053589fd6f74112b20274b7de66fd4becd33fce4a91919a46998 (image=quay.io/ceph/ceph:v19, name=zen_bohr, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 04:37:41 np0005604790 systemd[1]: Started libpod-conmon-08c5829451c2053589fd6f74112b20274b7de66fd4becd33fce4a91919a46998.scope.
Feb  2 04:37:41 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:37:41 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3d6460778a6f00ebf972e0b7e83b012e33ec09417f71ac3758d82cb7407d015/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Feb  2 04:37:41 np0005604790 podman[73786]: 2026-02-02 09:37:41.751726018 +0000 UTC m=+0.021621044 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:37:41 np0005604790 podman[73786]: 2026-02-02 09:37:41.850553681 +0000 UTC m=+0.120448647 container init 08c5829451c2053589fd6f74112b20274b7de66fd4becd33fce4a91919a46998 (image=quay.io/ceph/ceph:v19, name=zen_bohr, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 04:37:41 np0005604790 podman[73786]: 2026-02-02 09:37:41.857958938 +0000 UTC m=+0.127853904 container start 08c5829451c2053589fd6f74112b20274b7de66fd4becd33fce4a91919a46998 (image=quay.io/ceph/ceph:v19, name=zen_bohr, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:37:41 np0005604790 podman[73786]: 2026-02-02 09:37:41.861335287 +0000 UTC m=+0.131230253 container attach 08c5829451c2053589fd6f74112b20274b7de66fd4becd33fce4a91919a46998 (image=quay.io/ceph/ceph:v19, name=zen_bohr, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb  2 04:37:41 np0005604790 zen_bohr[73802]: /usr/bin/monmaptool: monmap file /tmp/monmap
Feb  2 04:37:41 np0005604790 zen_bohr[73802]: setting min_mon_release = quincy
Feb  2 04:37:41 np0005604790 zen_bohr[73802]: /usr/bin/monmaptool: set fsid to d241d473-9fcb-5f74-b163-f1ca4454e7f1
Feb  2 04:37:41 np0005604790 zen_bohr[73802]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
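The zen_bohr lines above show cephadm seeding the initial monitor map: a short-lived ceph container runs /usr/bin/monmaptool to write epoch 0 with the cluster fsid before the real mon unit exists. A minimal sketch of that step, assuming the monmaptool flags (--create/--clobber/--fsid/--addv), the bind mount, and the monitor address vector, none of which appear verbatim in the log:

```python
# Hypothetical reconstruction of the monmap-bootstrap step seen above.
# The flags, mount path, and address vector are assumptions; only the
# fsid and image are taken from the log.
import subprocess

FSID = "d241d473-9fcb-5f74-b163-f1ca4454e7f1"                    # from the log
MON_ADDRV = "[v2:192.168.122.100:3300,v1:192.168.122.100:6789]"  # assumed

subprocess.run(
    [
        "podman", "run", "--rm",
        "-v", "/tmp/monmap-dir:/tmp:Z",          # assumed bind mount
        "quay.io/ceph/ceph:v19",
        "/usr/bin/monmaptool", "--create", "--clobber",
        "--fsid", FSID,
        "--addv", "compute-0", MON_ADDRV,
        "/tmp/monmap",
    ],
    check=True,
)
```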
Feb  2 04:37:41 np0005604790 systemd[1]: libpod-08c5829451c2053589fd6f74112b20274b7de66fd4becd33fce4a91919a46998.scope: Deactivated successfully.
Feb  2 04:37:41 np0005604790 podman[73786]: 2026-02-02 09:37:41.90364713 +0000 UTC m=+0.173542106 container died 08c5829451c2053589fd6f74112b20274b7de66fd4becd33fce4a91919a46998 (image=quay.io/ceph/ceph:v19, name=zen_bohr, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Feb  2 04:37:41 np0005604790 podman[73786]: 2026-02-02 09:37:41.944590297 +0000 UTC m=+0.214485263 container remove 08c5829451c2053589fd6f74112b20274b7de66fd4becd33fce4a91919a46998 (image=quay.io/ceph/ceph:v19, name=zen_bohr, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid)
Feb  2 04:37:41 np0005604790 systemd[1]: libpod-conmon-08c5829451c2053589fd6f74112b20274b7de66fd4becd33fce4a91919a46998.scope: Deactivated successfully.
Feb  2 04:37:42 np0005604790 podman[73819]: 2026-02-02 09:37:42.024667492 +0000 UTC m=+0.057528097 container create 1ebb4289324024307c7e2d09d659bf9f29b57f652a93b775d37131153290a22a (image=quay.io/ceph/ceph:v19, name=admiring_meitner, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Feb  2 04:37:42 np0005604790 systemd[1]: Started libpod-conmon-1ebb4289324024307c7e2d09d659bf9f29b57f652a93b775d37131153290a22a.scope.
Feb  2 04:37:42 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:37:42 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0ef81ed81bcc7e1af404ae72200af4b52ba2c506717ddd5cafea1a2a3fb31fd/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Feb  2 04:37:42 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0ef81ed81bcc7e1af404ae72200af4b52ba2c506717ddd5cafea1a2a3fb31fd/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 04:37:42 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0ef81ed81bcc7e1af404ae72200af4b52ba2c506717ddd5cafea1a2a3fb31fd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:37:42 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0ef81ed81bcc7e1af404ae72200af4b52ba2c506717ddd5cafea1a2a3fb31fd/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
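The repeated xfs warnings are the kernel noting that this filesystem was created without the XFS bigtime feature, so its inode timestamps top out at 0x7fffffff, the classic 32-bit time_t ceiling. A quick check of what that limit means in calendar terms:

```python
# The 0x7fffffff in the xfs warnings is the 32-bit time_t ceiling.
from datetime import datetime, timezone

limit = 0x7FFFFFFF
print(datetime.fromtimestamp(limit, tz=timezone.utc))
# 2038-01-19 03:14:07+00:00
```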
Feb  2 04:37:42 np0005604790 podman[73819]: 2026-02-02 09:37:41.999570836 +0000 UTC m=+0.032431501 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:37:42 np0005604790 podman[73819]: 2026-02-02 09:37:42.109623517 +0000 UTC m=+0.142484172 container init 1ebb4289324024307c7e2d09d659bf9f29b57f652a93b775d37131153290a22a (image=quay.io/ceph/ceph:v19, name=admiring_meitner, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:37:42 np0005604790 podman[73819]: 2026-02-02 09:37:42.123801393 +0000 UTC m=+0.156662008 container start 1ebb4289324024307c7e2d09d659bf9f29b57f652a93b775d37131153290a22a (image=quay.io/ceph/ceph:v19, name=admiring_meitner, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Feb  2 04:37:42 np0005604790 podman[73819]: 2026-02-02 09:37:42.127602594 +0000 UTC m=+0.160463209 container attach 1ebb4289324024307c7e2d09d659bf9f29b57f652a93b775d37131153290a22a (image=quay.io/ceph/ceph:v19, name=admiring_meitner, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:37:42 np0005604790 systemd[1]: libpod-1ebb4289324024307c7e2d09d659bf9f29b57f652a93b775d37131153290a22a.scope: Deactivated successfully.
Feb  2 04:37:42 np0005604790 podman[73819]: 2026-02-02 09:37:42.21671711 +0000 UTC m=+0.249577745 container died 1ebb4289324024307c7e2d09d659bf9f29b57f652a93b775d37131153290a22a (image=quay.io/ceph/ceph:v19, name=admiring_meitner, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid)
Feb  2 04:37:42 np0005604790 podman[73819]: 2026-02-02 09:37:42.257333748 +0000 UTC m=+0.290194333 container remove 1ebb4289324024307c7e2d09d659bf9f29b57f652a93b775d37131153290a22a (image=quay.io/ceph/ceph:v19, name=admiring_meitner, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Feb  2 04:37:42 np0005604790 systemd[1]: libpod-conmon-1ebb4289324024307c7e2d09d659bf9f29b57f652a93b775d37131153290a22a.scope: Deactivated successfully.
Feb  2 04:37:42 np0005604790 systemd[1]: Reloading.
Feb  2 04:37:42 np0005604790 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 04:37:42 np0005604790 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 04:37:42 np0005604790 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb  2 04:37:42 np0005604790 systemd[1]: Reloading.
Feb  2 04:37:42 np0005604790 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 04:37:42 np0005604790 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 04:37:42 np0005604790 systemd[1]: Reached target All Ceph clusters and services.
Feb  2 04:37:42 np0005604790 systemd[1]: Reloading.
Feb  2 04:37:42 np0005604790 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 04:37:42 np0005604790 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 04:37:42 np0005604790 systemd[1]: Reached target Ceph cluster d241d473-9fcb-5f74-b163-f1ca4454e7f1.
Feb  2 04:37:43 np0005604790 systemd[1]: Reloading.
Feb  2 04:37:43 np0005604790 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 04:37:43 np0005604790 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 04:37:43 np0005604790 systemd[1]: Reloading.
Feb  2 04:37:43 np0005604790 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 04:37:43 np0005604790 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 04:37:43 np0005604790 systemd[1]: Created slice Slice /system/ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1.
Feb  2 04:37:43 np0005604790 systemd[1]: Reached target System Time Set.
Feb  2 04:37:43 np0005604790 systemd[1]: Reached target System Time Synchronized.
Feb  2 04:37:43 np0005604790 systemd[1]: Starting Ceph mon.compute-0 for d241d473-9fcb-5f74-b163-f1ca4454e7f1...
Feb  2 04:37:43 np0005604790 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb  2 04:37:43 np0005604790 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb  2 04:37:43 np0005604790 podman[74116]: 2026-02-02 09:37:43.713446533 +0000 UTC m=+0.055408751 container create 1b86f61310f80c0b5be8d7d437d4ac04dbb6165d783e50047df7370d2392a423 (image=quay.io/ceph/ceph:v19, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mon-compute-0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid)
Feb  2 04:37:43 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86d5e237b7f0afa85965959d3e82e156e58ec9cb73d237179bcae516c838cb93/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:37:43 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86d5e237b7f0afa85965959d3e82e156e58ec9cb73d237179bcae516c838cb93/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:37:43 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86d5e237b7f0afa85965959d3e82e156e58ec9cb73d237179bcae516c838cb93/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 04:37:43 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86d5e237b7f0afa85965959d3e82e156e58ec9cb73d237179bcae516c838cb93/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Feb  2 04:37:43 np0005604790 podman[74116]: 2026-02-02 09:37:43.688346197 +0000 UTC m=+0.030308425 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:37:43 np0005604790 podman[74116]: 2026-02-02 09:37:43.787629222 +0000 UTC m=+0.129591450 container init 1b86f61310f80c0b5be8d7d437d4ac04dbb6165d783e50047df7370d2392a423 (image=quay.io/ceph/ceph:v19, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mon-compute-0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 04:37:43 np0005604790 podman[74116]: 2026-02-02 09:37:43.798564233 +0000 UTC m=+0.140526421 container start 1b86f61310f80c0b5be8d7d437d4ac04dbb6165d783e50047df7370d2392a423 (image=quay.io/ceph/ceph:v19, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mon-compute-0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Feb  2 04:37:43 np0005604790 bash[74116]: 1b86f61310f80c0b5be8d7d437d4ac04dbb6165d783e50047df7370d2392a423
Feb  2 04:37:43 np0005604790 systemd[1]: Started Ceph mon.compute-0 for d241d473-9fcb-5f74-b163-f1ca4454e7f1.
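The flurry of "Reloading." lines, the new slice, and the "Ceph cluster <fsid>" target all come from cephadm installing its templated systemd units, which embed the fsid in the unit name. A small sketch deriving the names implied by the messages above (the ceph-<fsid>@<daemon>.service pattern is cephadm's convention; treat it as an assumption for other releases):

```python
# Sketch of cephadm's unit-name convention for the daemon started above.
fsid = "d241d473-9fcb-5f74-b163-f1ca4454e7f1"
daemon = "mon.compute-0"

service = f"ceph-{fsid}@{daemon}.service"   # per-daemon templated unit
target = f"ceph-{fsid}.target"              # "Ceph cluster <fsid>" target
print(service)
print(target)
```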
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: set uid:gid to 167:167 (ceph:ceph)
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mon, pid 2
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: pidfile_write: ignore empty --pid-file
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: load: jerasure load: lrc 
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb: RocksDB version: 7.9.2
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb: Git sha 0
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb: Compile date 2025-07-17 03:12:14
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb: DB SUMMARY
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb: DB Session ID:  GS499JMPP587BFRYVV30
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb: CURRENT file:  CURRENT
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb: IDENTITY file:  IDENTITY
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:                         Options.error_if_exists: 0
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:                       Options.create_if_missing: 0
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:                         Options.paranoid_checks: 1
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:             Options.flush_verify_memtable_count: 1
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:                                     Options.env: 0x558535b90c20
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:                                      Options.fs: PosixFileSystem
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:                                Options.info_log: 0x5585371a4d60
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:                Options.max_file_opening_threads: 16
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:                              Options.statistics: (nil)
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:                               Options.use_fsync: 0
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:                       Options.max_log_file_size: 0
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:                   Options.log_file_time_to_roll: 0
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:                       Options.keep_log_file_num: 1000
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:                    Options.recycle_log_file_num: 0
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:                         Options.allow_fallocate: 1
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:                        Options.allow_mmap_reads: 0
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:                       Options.allow_mmap_writes: 0
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:                        Options.use_direct_reads: 0
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:          Options.create_missing_column_families: 0
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:                              Options.db_log_dir: 
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:                                 Options.wal_dir: 
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:                Options.table_cache_numshardbits: 6
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:                         Options.WAL_ttl_seconds: 0
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:                       Options.WAL_size_limit_MB: 0
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:             Options.manifest_preallocation_size: 4194304
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:                     Options.is_fd_close_on_exec: 1
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:                   Options.advise_random_on_open: 1
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:                    Options.db_write_buffer_size: 0
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:                    Options.write_buffer_manager: 0x5585371a9900
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:         Options.access_hint_on_compaction_start: 1
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:                      Options.use_adaptive_mutex: 0
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:                            Options.rate_limiter: (nil)
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:                       Options.wal_recovery_mode: 2
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:                  Options.enable_thread_tracking: 0
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:                  Options.enable_pipelined_write: 0
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:                  Options.unordered_write: 0
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:             Options.write_thread_max_yield_usec: 100
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:                               Options.row_cache: None
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:                              Options.wal_filter: None
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:             Options.avoid_flush_during_recovery: 0
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:             Options.allow_ingest_behind: 0
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:             Options.two_write_queues: 0
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:             Options.manual_wal_flush: 0
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:             Options.wal_compression: 0
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:             Options.atomic_flush: 0
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:                 Options.persist_stats_to_disk: 0
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:                 Options.write_dbid_to_manifest: 0
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:                 Options.log_readahead_size: 0
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:                 Options.best_efforts_recovery: 0
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:             Options.allow_data_in_errors: 0
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:             Options.db_host_id: __hostname__
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:             Options.enforce_single_del_contracts: true
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:             Options.max_background_jobs: 2
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:             Options.max_background_compactions: -1
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:             Options.max_subcompactions: 1
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:             Options.delayed_write_rate : 16777216
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:             Options.max_total_wal_size: 0
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:                   Options.stats_dump_period_sec: 600
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:                 Options.stats_persist_period_sec: 600
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:                          Options.max_open_files: -1
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:                          Options.bytes_per_sync: 0
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:                      Options.wal_bytes_per_sync: 0
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:                   Options.strict_bytes_per_sync: 0
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:       Options.compaction_readahead_size: 0
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:                  Options.max_background_flushes: -1
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb: Compression algorithms supported:
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:     kZSTD supported: 0
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:     kXpressCompression supported: 0
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:     kBZip2Compression supported: 0
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:     kZSTDNotFinalCompression supported: 0
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:     kLZ4Compression supported: 1
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:     kZlibCompression supported: 1
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:     kLZ4HCCompression supported: 1
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:     kSnappyCompression supported: 1
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb: Fast CRC32 supported: Supported on x86
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb: DMutex implementation: pthread_mutex_t
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:           Options.merge_operator: 
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:        Options.compaction_filter: None
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5585371a4500)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5585371c9350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:        Options.write_buffer_size: 33554432
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:  Options.max_write_buffer_number: 2
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:          Options.compression: NoCompression
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:             Options.num_levels: 7
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:                           Options.bloom_locality: 0
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:                               Options.ttl: 2592000
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:                       Options.enable_blob_files: false
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:                           Options.min_blob_size: 0
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 07840aea-639a-4cd3-a598-1774a042b57b
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770025063866152, "job": 1, "event": "recovery_started", "wal_files": [4]}
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770025063869961, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770025063, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "07840aea-639a-4cd3-a598-1774a042b57b", "db_session_id": "GS499JMPP587BFRYVV30", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770025063870090, "job": 1, "event": "recovery_finished"}
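RocksDB's EVENT_LOG_v1 lines embed a JSON document after the marker, so the WAL-recovery pair above (recovery_started, table_file_creation, recovery_finished) can be extracted mechanically from a journal like this one. A minimal parser (the input path is a placeholder):

```python
# Pull the JSON payload out of RocksDB EVENT_LOG_v1 lines, e.g. to track
# recovery_started/recovery_finished events in this journal excerpt.
import json
import re

MARKER = re.compile(r"EVENT_LOG_v1 (\{.*\})")

def rocksdb_events(lines):
    for line in lines:
        m = MARKER.search(line)
        if m:
            yield json.loads(m.group(1))

with open("/var/log/messages") as fh:      # placeholder path
    for ev in rocksdb_events(fh):
        print(ev["time_micros"], ev["event"])
```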
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Feb  2 04:37:43 np0005604790 podman[74136]: 2026-02-02 09:37:43.880150498 +0000 UTC m=+0.047379479 container create 3acc87d8f22264148844cb73cd2055c8258ac8aa2d760dda458d55b62cdb6a98 (image=quay.io/ceph/ceph:v19, name=gracious_ardinghelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5585371cae00
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb: DB pointer 0x5585372d4000
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 0.0 total, 0.0 interval
Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.004       0      0       0.0       0.0
 Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.004       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.004       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.004       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.0 total, 0.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.10 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.10 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5585371c9350#2 capacity: 512.00 MB usage: 1.17 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 3.3e-05 secs_since: 0
Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(2,0.95 KB,0.000181794%)

** File Read Latency Histogram By Level [default] **
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid d241d473-9fcb-5f74-b163-f1ca4454e7f1
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: mon.compute-0@-1(???) e0 preinit fsid d241d473-9fcb-5f74-b163-f1ca4454e7f1
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: mon.compute-0@0(probing) e0 win_standalone_election
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: mon.compute-0@0(probing) e1 win_standalone_election
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: paxos.0).electionLogic(2) init, last seen epoch 2
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: log_channel(cluster) log [DBG] : monmap epoch 1
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: log_channel(cluster) log [DBG] : fsid d241d473-9fcb-5f74-b163-f1ca4454e7f1
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: log_channel(cluster) log [DBG] : last_changed 2026-02-02T09:37:41.899871+0000
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: log_channel(cluster) log [DBG] : created 2026-02-02T09:37:41.899871+0000
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: log_channel(cluster) log [DBG] : election_strategy: 1
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
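Each monitor entry in the monmap dump carries an address vector: the msgr2 endpoint (port 3300) first and the legacy v1 endpoint (port 6789) second. A small regex sketch that splits an addrvec exactly as it is printed above:

```python
# Parse a Ceph address vector like the one logged for mon.compute-0.
import re

addrvec = "[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]"
pattern = re.compile(r"(v[12]):([\d.]+):(\d+)/(\d+)")

for proto, ip, port, nonce in pattern.findall(addrvec):
    print(proto, ip, port, nonce)
# v2 192.168.122.100 3300 0
# v1 192.168.122.100 6789 0
```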
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=squid,ceph_version=ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable),ceph_version_short=19.2.3,compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v19,cpu=AMD EPYC-Rome Processor,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Thu Jan 22 12:30:22 UTC 2026,kernel_version=5.14.0-665.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864292,os=Linux}
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout,16=squid ondisk layout}
Feb  2 04:37:43 np0005604790 systemd[1]: Started libpod-conmon-3acc87d8f22264148844cb73cd2055c8258ac8aa2d760dda458d55b62cdb6a98.scope.
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: mon.compute-0@0(leader).mds e1 new map
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: mon.compute-0@0(leader).mds e1 print_map
e1
btime 2026-02-02T09:37:43:907997+0000
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
legacy client fscid: -1

No filesystems configured
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: log_channel(cluster) log [DBG] : fsmap 
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: mkfs d241d473-9fcb-5f74-b163-f1ca4454e7f1
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Feb  2 04:37:43 np0005604790 ceph-mon[74135]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
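With a single monitor, win_standalone_election means quorum is simply rank 0 (compute-0), as the leader messages above confirm. Once the admin keyring is in place, the same quorum view can be fetched as JSON; a hedged sketch, assuming the ceph CLI and admin credentials are available on this host (cephadm normally installs both):

```python
# Query quorum state as JSON; assumes a reachable cluster plus the ceph
# CLI and admin keyring on this host.
import json
import subprocess

out = subprocess.check_output(["ceph", "quorum_status", "--format", "json"])
status = json.loads(out)
print(status["quorum_names"])   # expected here: ["compute-0"]
```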
Feb  2 04:37:43 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:37:43 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9ec6a843ffccaabb5ce5bcd76dbb1346bee820b2102d330f85287871746ec0b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 04:37:43 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9ec6a843ffccaabb5ce5bcd76dbb1346bee820b2102d330f85287871746ec0b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:37:43 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9ec6a843ffccaabb5ce5bcd76dbb1346bee820b2102d330f85287871746ec0b/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Feb  2 04:37:43 np0005604790 podman[74136]: 2026-02-02 09:37:43.95108351 +0000 UTC m=+0.118312511 container init 3acc87d8f22264148844cb73cd2055c8258ac8aa2d760dda458d55b62cdb6a98 (image=quay.io/ceph/ceph:v19, name=gracious_ardinghelli, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb  2 04:37:43 np0005604790 podman[74136]: 2026-02-02 09:37:43.859981183 +0000 UTC m=+0.027210194 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:37:43 np0005604790 podman[74136]: 2026-02-02 09:37:43.957224133 +0000 UTC m=+0.124453124 container start 3acc87d8f22264148844cb73cd2055c8258ac8aa2d760dda458d55b62cdb6a98 (image=quay.io/ceph/ceph:v19, name=gracious_ardinghelli, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Feb  2 04:37:43 np0005604790 podman[74136]: 2026-02-02 09:37:43.960543431 +0000 UTC m=+0.127772462 container attach 3acc87d8f22264148844cb73cd2055c8258ac8aa2d760dda458d55b62cdb6a98 (image=quay.io/ceph/ceph:v19, name=gracious_ardinghelli, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
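
Each of these short-lived containers walks podman's full lifecycle in a single run: create, init, start, attach, and (further down) died and remove once the wrapped ceph command exits. A sketch of driving the same kind of one-shot CLI container, assuming podman is installed and using the image tag from this log; the real cephadm invocation adds keyring and config bind mounts not shown here:

    import subprocess

    # One-shot container, auto-removed on exit; mirrors the create/init/
    # start/attach ... died/remove sequence podman records above.
    subprocess.run(
        ["podman", "run", "--rm", "quay.io/ceph/ceph:v19", "ceph", "--version"],
        check=True,
    )
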
Feb  2 04:37:44 np0005604790 ceph-mon[74135]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Feb  2 04:37:44 np0005604790 ceph-mon[74135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/681230590' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Feb  2 04:37:44 np0005604790 gracious_ardinghelli[74191]:  cluster:
Feb  2 04:37:44 np0005604790 gracious_ardinghelli[74191]:    id:     d241d473-9fcb-5f74-b163-f1ca4454e7f1
Feb  2 04:37:44 np0005604790 gracious_ardinghelli[74191]:    health: HEALTH_OK
Feb  2 04:37:44 np0005604790 gracious_ardinghelli[74191]: 
Feb  2 04:37:44 np0005604790 gracious_ardinghelli[74191]:  services:
Feb  2 04:37:44 np0005604790 gracious_ardinghelli[74191]:    mon: 1 daemons, quorum compute-0 (age 0.224883s)
Feb  2 04:37:44 np0005604790 gracious_ardinghelli[74191]:    mgr: no daemons active
Feb  2 04:37:44 np0005604790 gracious_ardinghelli[74191]:    osd: 0 osds: 0 up, 0 in
Feb  2 04:37:44 np0005604790 gracious_ardinghelli[74191]: 
Feb  2 04:37:44 np0005604790 gracious_ardinghelli[74191]:  data:
Feb  2 04:37:44 np0005604790 gracious_ardinghelli[74191]:    pools:   0 pools, 0 pgs
Feb  2 04:37:44 np0005604790 gracious_ardinghelli[74191]:    objects: 0 objects, 0 B
Feb  2 04:37:44 np0005604790 gracious_ardinghelli[74191]:    usage:   0 B used, 0 B / 0 B avail
Feb  2 04:37:44 np0005604790 gracious_ardinghelli[74191]:    pgs:     
Feb  2 04:37:44 np0005604790 gracious_ardinghelli[74191]: 
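
The status block above is the stdout of a one-shot `ceph status` run (dispatched at 04:37:44 as mon_command {"prefix": "status"}). A minimal sketch of collecting the same data programmatically instead of scraping text, assuming a reachable cluster, an admin keyring, and the ceph CLI on PATH; the JSON field names are the ones current Ceph releases emit, so treat them as an assumption:

    import json
    import subprocess

    # Ask for status as JSON rather than the human-readable block above.
    raw = subprocess.run(
        ["ceph", "status", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    status = json.loads(raw)

    print(status["fsid"])              # d241d473-9fcb-5f74-b163-f1ca4454e7f1
    print(status["health"]["status"])  # HEALTH_OK
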
Feb  2 04:37:44 np0005604790 systemd[1]: libpod-3acc87d8f22264148844cb73cd2055c8258ac8aa2d760dda458d55b62cdb6a98.scope: Deactivated successfully.
Feb  2 04:37:44 np0005604790 podman[74136]: 2026-02-02 09:37:44.148695615 +0000 UTC m=+0.315924656 container died 3acc87d8f22264148844cb73cd2055c8258ac8aa2d760dda458d55b62cdb6a98 (image=quay.io/ceph/ceph:v19, name=gracious_ardinghelli, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:37:44 np0005604790 podman[74136]: 2026-02-02 09:37:44.19294379 +0000 UTC m=+0.360172761 container remove 3acc87d8f22264148844cb73cd2055c8258ac8aa2d760dda458d55b62cdb6a98 (image=quay.io/ceph/ceph:v19, name=gracious_ardinghelli, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 04:37:44 np0005604790 systemd[1]: libpod-conmon-3acc87d8f22264148844cb73cd2055c8258ac8aa2d760dda458d55b62cdb6a98.scope: Deactivated successfully.
Feb  2 04:37:44 np0005604790 podman[74231]: 2026-02-02 09:37:44.272776338 +0000 UTC m=+0.054858817 container create 6fcbd19f0e905a42a922920e837150bdb37c267cff51cd12cbbff6543aa69358 (image=quay.io/ceph/ceph:v19, name=modest_tesla, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Feb  2 04:37:44 np0005604790 systemd[1]: Started libpod-conmon-6fcbd19f0e905a42a922920e837150bdb37c267cff51cd12cbbff6543aa69358.scope.
Feb  2 04:37:44 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:37:44 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d557d8c50b2377b246816b92d9e3b609c4c0fc67632e870714a08f97d6da6b55/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:37:44 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d557d8c50b2377b246816b92d9e3b609c4c0fc67632e870714a08f97d6da6b55/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 04:37:44 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d557d8c50b2377b246816b92d9e3b609c4c0fc67632e870714a08f97d6da6b55/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:37:44 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d557d8c50b2377b246816b92d9e3b609c4c0fc67632e870714a08f97d6da6b55/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Feb  2 04:37:44 np0005604790 podman[74231]: 2026-02-02 09:37:44.34027821 +0000 UTC m=+0.122360729 container init 6fcbd19f0e905a42a922920e837150bdb37c267cff51cd12cbbff6543aa69358 (image=quay.io/ceph/ceph:v19, name=modest_tesla, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Feb  2 04:37:44 np0005604790 podman[74231]: 2026-02-02 09:37:44.249838519 +0000 UTC m=+0.031921068 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:37:44 np0005604790 podman[74231]: 2026-02-02 09:37:44.35273327 +0000 UTC m=+0.134815749 container start 6fcbd19f0e905a42a922920e837150bdb37c267cff51cd12cbbff6543aa69358 (image=quay.io/ceph/ceph:v19, name=modest_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 04:37:44 np0005604790 podman[74231]: 2026-02-02 09:37:44.358575225 +0000 UTC m=+0.140657794 container attach 6fcbd19f0e905a42a922920e837150bdb37c267cff51cd12cbbff6543aa69358 (image=quay.io/ceph/ceph:v19, name=modest_tesla, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb  2 04:37:44 np0005604790 ceph-mon[74135]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Feb  2 04:37:44 np0005604790 ceph-mon[74135]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3726157337' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Feb  2 04:37:44 np0005604790 ceph-mon[74135]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3726157337' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Feb  2 04:37:44 np0005604790 modest_tesla[74247]: 
Feb  2 04:37:44 np0005604790 modest_tesla[74247]: [global]
Feb  2 04:37:44 np0005604790 modest_tesla[74247]:     fsid = d241d473-9fcb-5f74-b163-f1ca4454e7f1
Feb  2 04:37:44 np0005604790 modest_tesla[74247]:     mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
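
The assimilate-conf pass leaves behind a minimal client configuration: a [global] section carrying only the fsid and mon_host, as printed by the modest_tesla container above. A small sketch reading those keys back once the file is in place, assuming the conventional /etc/ceph/ceph.conf path that the containers above bind-mount:

    import configparser

    cp = configparser.ConfigParser()
    cp.read("/etc/ceph/ceph.conf")  # path mounted into the CLI containers above

    fsid = cp["global"]["fsid"]          # d241d473-9fcb-5f74-b163-f1ca4454e7f1
    mon_host = cp["global"]["mon_host"]  # [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
    print(fsid, mon_host)
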
Feb  2 04:37:44 np0005604790 systemd[1]: libpod-6fcbd19f0e905a42a922920e837150bdb37c267cff51cd12cbbff6543aa69358.scope: Deactivated successfully.
Feb  2 04:37:44 np0005604790 podman[74231]: 2026-02-02 09:37:44.545650991 +0000 UTC m=+0.327733460 container died 6fcbd19f0e905a42a922920e837150bdb37c267cff51cd12cbbff6543aa69358 (image=quay.io/ceph/ceph:v19, name=modest_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Feb  2 04:37:44 np0005604790 systemd[1]: var-lib-containers-storage-overlay-d557d8c50b2377b246816b92d9e3b609c4c0fc67632e870714a08f97d6da6b55-merged.mount: Deactivated successfully.
Feb  2 04:37:44 np0005604790 podman[74231]: 2026-02-02 09:37:44.696642758 +0000 UTC m=+0.478725217 container remove 6fcbd19f0e905a42a922920e837150bdb37c267cff51cd12cbbff6543aa69358 (image=quay.io/ceph/ceph:v19, name=modest_tesla, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Feb  2 04:37:44 np0005604790 systemd[1]: libpod-conmon-6fcbd19f0e905a42a922920e837150bdb37c267cff51cd12cbbff6543aa69358.scope: Deactivated successfully.
Feb  2 04:37:44 np0005604790 podman[74285]: 2026-02-02 09:37:44.770107188 +0000 UTC m=+0.059664965 container create c0ac9b7315d50aba8f3579c89cb3675c4a101031b7e9adee5d3f99dfa50ac6a1 (image=quay.io/ceph/ceph:v19, name=serene_lederberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb  2 04:37:44 np0005604790 systemd[1]: Started libpod-conmon-c0ac9b7315d50aba8f3579c89cb3675c4a101031b7e9adee5d3f99dfa50ac6a1.scope.
Feb  2 04:37:44 np0005604790 podman[74285]: 2026-02-02 09:37:44.729310295 +0000 UTC m=+0.018868072 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:37:44 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:37:44 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/654f7e38ad45bc8c4ebd3854f7596d7481ad2af028b1c7cfcd4291813f7efd6d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:37:44 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/654f7e38ad45bc8c4ebd3854f7596d7481ad2af028b1c7cfcd4291813f7efd6d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 04:37:44 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/654f7e38ad45bc8c4ebd3854f7596d7481ad2af028b1c7cfcd4291813f7efd6d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:37:44 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/654f7e38ad45bc8c4ebd3854f7596d7481ad2af028b1c7cfcd4291813f7efd6d/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Feb  2 04:37:44 np0005604790 podman[74285]: 2026-02-02 09:37:44.853229824 +0000 UTC m=+0.142787611 container init c0ac9b7315d50aba8f3579c89cb3675c4a101031b7e9adee5d3f99dfa50ac6a1 (image=quay.io/ceph/ceph:v19, name=serene_lederberg, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:37:44 np0005604790 podman[74285]: 2026-02-02 09:37:44.859326096 +0000 UTC m=+0.148883863 container start c0ac9b7315d50aba8f3579c89cb3675c4a101031b7e9adee5d3f99dfa50ac6a1 (image=quay.io/ceph/ceph:v19, name=serene_lederberg, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Feb  2 04:37:44 np0005604790 podman[74285]: 2026-02-02 09:37:44.871583211 +0000 UTC m=+0.161140988 container attach c0ac9b7315d50aba8f3579c89cb3675c4a101031b7e9adee5d3f99dfa50ac6a1 (image=quay.io/ceph/ceph:v19, name=serene_lederberg, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 04:37:44 np0005604790 ceph-mon[74135]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Feb  2 04:37:44 np0005604790 ceph-mon[74135]: from='client.? 192.168.122.100:0/3726157337' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Feb  2 04:37:44 np0005604790 ceph-mon[74135]: from='client.? 192.168.122.100:0/3726157337' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Feb  2 04:37:45 np0005604790 ceph-mon[74135]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 04:37:45 np0005604790 ceph-mon[74135]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/655218697' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 04:37:45 np0005604790 systemd[1]: libpod-c0ac9b7315d50aba8f3579c89cb3675c4a101031b7e9adee5d3f99dfa50ac6a1.scope: Deactivated successfully.
Feb  2 04:37:45 np0005604790 podman[74285]: 2026-02-02 09:37:45.02716867 +0000 UTC m=+0.316726447 container died c0ac9b7315d50aba8f3579c89cb3675c4a101031b7e9adee5d3f99dfa50ac6a1 (image=quay.io/ceph/ceph:v19, name=serene_lederberg, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:37:45 np0005604790 systemd[1]: var-lib-containers-storage-overlay-654f7e38ad45bc8c4ebd3854f7596d7481ad2af028b1c7cfcd4291813f7efd6d-merged.mount: Deactivated successfully.
Feb  2 04:37:45 np0005604790 podman[74285]: 2026-02-02 09:37:45.071589119 +0000 UTC m=+0.361146906 container remove c0ac9b7315d50aba8f3579c89cb3675c4a101031b7e9adee5d3f99dfa50ac6a1 (image=quay.io/ceph/ceph:v19, name=serene_lederberg, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:37:45 np0005604790 systemd[1]: libpod-conmon-c0ac9b7315d50aba8f3579c89cb3675c4a101031b7e9adee5d3f99dfa50ac6a1.scope: Deactivated successfully.
Feb  2 04:37:45 np0005604790 systemd[1]: Stopping Ceph mon.compute-0 for d241d473-9fcb-5f74-b163-f1ca4454e7f1...
Feb  2 04:37:45 np0005604790 ceph-mon[74135]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Feb  2 04:37:45 np0005604790 ceph-mon[74135]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Feb  2 04:37:45 np0005604790 ceph-mon[74135]: mon.compute-0@0(leader) e1 shutdown
Feb  2 04:37:45 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mon-compute-0[74131]: 2026-02-02T09:37:45.226+0000 7f398bba5640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Feb  2 04:37:45 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mon-compute-0[74131]: 2026-02-02T09:37:45.226+0000 7f398bba5640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Feb  2 04:37:45 np0005604790 ceph-mon[74135]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Feb  2 04:37:45 np0005604790 ceph-mon[74135]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Feb  2 04:37:45 np0005604790 podman[74368]: 2026-02-02 09:37:45.267662952 +0000 UTC m=+0.066821983 container died 1b86f61310f80c0b5be8d7d437d4ac04dbb6165d783e50047df7370d2392a423 (image=quay.io/ceph/ceph:v19, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mon-compute-0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid)
Feb  2 04:37:45 np0005604790 systemd[1]: var-lib-containers-storage-overlay-86d5e237b7f0afa85965959d3e82e156e58ec9cb73d237179bcae516c838cb93-merged.mount: Deactivated successfully.
Feb  2 04:37:45 np0005604790 podman[74368]: 2026-02-02 09:37:45.320929796 +0000 UTC m=+0.120088857 container remove 1b86f61310f80c0b5be8d7d437d4ac04dbb6165d783e50047df7370d2392a423 (image=quay.io/ceph/ceph:v19, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mon-compute-0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:37:45 np0005604790 bash[74368]: ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mon-compute-0
Feb  2 04:37:45 np0005604790 systemd[1]: ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1@mon.compute-0.service: Deactivated successfully.
Feb  2 04:37:45 np0005604790 systemd[1]: Stopped Ceph mon.compute-0 for d241d473-9fcb-5f74-b163-f1ca4454e7f1.
Feb  2 04:37:45 np0005604790 systemd[1]: Starting Ceph mon.compute-0 for d241d473-9fcb-5f74-b163-f1ca4454e7f1...
Feb  2 04:37:45 np0005604790 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb  2 04:37:45 np0005604790 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb  2 04:37:45 np0005604790 podman[74470]: 2026-02-02 09:37:45.641811822 +0000 UTC m=+0.046914046 container create 79ef7165b184aa21ab9e464efe33891b6304e1ba848414549f299d1b301d6783 (image=quay.io/ceph/ceph:v19, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mon-compute-0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb  2 04:37:45 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf557a00cfcc553ab122155163a30ef37fd88f27c87764e83297c93374e6f2f2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:37:45 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf557a00cfcc553ab122155163a30ef37fd88f27c87764e83297c93374e6f2f2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:37:45 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf557a00cfcc553ab122155163a30ef37fd88f27c87764e83297c93374e6f2f2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 04:37:45 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf557a00cfcc553ab122155163a30ef37fd88f27c87764e83297c93374e6f2f2/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Feb  2 04:37:45 np0005604790 podman[74470]: 2026-02-02 09:37:45.712665283 +0000 UTC m=+0.117767517 container init 79ef7165b184aa21ab9e464efe33891b6304e1ba848414549f299d1b301d6783 (image=quay.io/ceph/ceph:v19, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Feb  2 04:37:45 np0005604790 podman[74470]: 2026-02-02 09:37:45.622362896 +0000 UTC m=+0.027465090 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:37:45 np0005604790 podman[74470]: 2026-02-02 09:37:45.721840756 +0000 UTC m=+0.126942950 container start 79ef7165b184aa21ab9e464efe33891b6304e1ba848414549f299d1b301d6783 (image=quay.io/ceph/ceph:v19, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mon-compute-0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Feb  2 04:37:45 np0005604790 bash[74470]: 79ef7165b184aa21ab9e464efe33891b6304e1ba848414549f299d1b301d6783
Feb  2 04:37:45 np0005604790 systemd[1]: Started Ceph mon.compute-0 for d241d473-9fcb-5f74-b163-f1ca4454e7f1.
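
The stop/start cycle above runs under a cephadm-style templated systemd unit whose instance name encodes the cluster fsid and the daemon name. A one-line sketch of that naming convention, built only from values already present in this log:

    fsid = "d241d473-9fcb-5f74-b163-f1ca4454e7f1"
    daemon = "mon.compute-0"
    unit = f"ceph-{fsid}@{daemon}.service"
    print(unit)  # the unit systemd reports stopping and starting above
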
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: set uid:gid to 167:167 (ceph:ceph)
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mon, pid 2
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: pidfile_write: ignore empty --pid-file
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: load: jerasure load: lrc 
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb: RocksDB version: 7.9.2
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb: Git sha 0
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb: Compile date 2025-07-17 03:12:14
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb: DB SUMMARY
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb: DB Session ID:  W2PO4QU95YGVZQBG6TZ2
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb: CURRENT file:  CURRENT
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb: IDENTITY file:  IDENTITY
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 58739 ; 
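
The DB SUMMARY records what the restarted monitor found on disk before recovery: CURRENT and IDENTITY, a 179-byte MANIFEST-000010, one SST file (000008.sst), and a 58739-byte write-ahead log (000009.log) that the recovery job below replays and then deletes. A small sketch cross-checking that inventory, assuming the store path from the log is visible on the host:

    import os

    store = "/var/lib/ceph/mon/ceph-compute-0/store.db"  # path from the log
    for entry in sorted(os.scandir(store), key=lambda e: e.name):
        print(f"{entry.name:20} {entry.stat().st_size:>10} bytes")
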
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:                         Options.error_if_exists: 0
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:                       Options.create_if_missing: 0
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:                         Options.paranoid_checks: 1
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:             Options.flush_verify_memtable_count: 1
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:                                     Options.env: 0x5630b8214c20
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:                                      Options.fs: PosixFileSystem
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:                                Options.info_log: 0x5630b94c1ac0
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:                Options.max_file_opening_threads: 16
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:                              Options.statistics: (nil)
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:                               Options.use_fsync: 0
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:                       Options.max_log_file_size: 0
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:                   Options.log_file_time_to_roll: 0
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:                       Options.keep_log_file_num: 1000
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:                    Options.recycle_log_file_num: 0
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:                         Options.allow_fallocate: 1
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:                        Options.allow_mmap_reads: 0
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:                       Options.allow_mmap_writes: 0
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:                        Options.use_direct_reads: 0
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:          Options.create_missing_column_families: 0
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:                              Options.db_log_dir: 
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:                                 Options.wal_dir: 
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:                Options.table_cache_numshardbits: 6
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:                         Options.WAL_ttl_seconds: 0
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:                       Options.WAL_size_limit_MB: 0
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:             Options.manifest_preallocation_size: 4194304
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:                     Options.is_fd_close_on_exec: 1
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:                   Options.advise_random_on_open: 1
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:                    Options.db_write_buffer_size: 0
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:                    Options.write_buffer_manager: 0x5630b94c5900
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:         Options.access_hint_on_compaction_start: 1
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:                      Options.use_adaptive_mutex: 0
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:                            Options.rate_limiter: (nil)
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:                       Options.wal_recovery_mode: 2
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:                  Options.enable_thread_tracking: 0
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:                  Options.enable_pipelined_write: 0
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:                  Options.unordered_write: 0
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:             Options.write_thread_max_yield_usec: 100
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:                               Options.row_cache: None
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:                              Options.wal_filter: None
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:             Options.avoid_flush_during_recovery: 0
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:             Options.allow_ingest_behind: 0
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:             Options.two_write_queues: 0
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:             Options.manual_wal_flush: 0
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:             Options.wal_compression: 0
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:             Options.atomic_flush: 0
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:                 Options.persist_stats_to_disk: 0
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:                 Options.write_dbid_to_manifest: 0
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:                 Options.log_readahead_size: 0
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:                 Options.best_efforts_recovery: 0
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:             Options.allow_data_in_errors: 0
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:             Options.db_host_id: __hostname__
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:             Options.enforce_single_del_contracts: true
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:             Options.max_background_jobs: 2
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:             Options.max_background_compactions: -1
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:             Options.max_subcompactions: 1
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:             Options.delayed_write_rate : 16777216
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:             Options.max_total_wal_size: 0
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:                   Options.stats_dump_period_sec: 600
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:                 Options.stats_persist_period_sec: 600
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:                          Options.max_open_files: -1
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:                          Options.bytes_per_sync: 0
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:                      Options.wal_bytes_per_sync: 0
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:                   Options.strict_bytes_per_sync: 0
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:       Options.compaction_readahead_size: 0
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:                  Options.max_background_flushes: -1
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb: Compression algorithms supported:
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:     kZSTD supported: 0
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:     kXpressCompression supported: 0
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:     kBZip2Compression supported: 0
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:     kZSTDNotFinalCompression supported: 0
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:     kLZ4Compression supported: 1
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:     kZlibCompression supported: 1
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:     kLZ4HCCompression supported: 1
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:     kSnappyCompression supported: 1
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb: Fast CRC32 supported: Supported on x86
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb: DMutex implementation: pthread_mutex_t
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:           Options.merge_operator: 
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:        Options.compaction_filter: None
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5630b94c0aa0)
      cache_index_and_filter_blocks: 1
      cache_index_and_filter_blocks_with_high_priority: 0
      pin_l0_filter_and_index_blocks_in_cache: 0
      pin_top_level_index_and_filter: 1
      index_type: 0
      data_block_index_type: 0
      index_shortening: 1
      data_block_hash_table_util_ratio: 0.750000
      checksum: 4
      no_block_cache: 0
      block_cache: 0x5630b94e5350
      block_cache_name: BinnedLRUCache
      block_cache_options:
        capacity : 536870912
        num_shard_bits : 4
        strict_capacity_limit : 0
        high_pri_pool_ratio: 0.000
      block_cache_compressed: (nil)
      persistent_cache: (nil)
      block_size: 4096
      block_size_deviation: 10
      block_restart_interval: 16
      index_block_restart_interval: 1
      metadata_block_size: 4096
      partition_filters: 0
      use_delta_encoding: 1
      filter_policy: bloomfilter
      whole_key_filtering: 1
      verify_compression: 0
      read_amp_bytes_per_bit: 0
      format_version: 5
      enable_index_compression: 1
      block_align: 0
      max_auto_readahead_size: 262144
      prepopulate_block_cache: 0
      initial_auto_readahead_size: 8192
      num_file_reads_for_auto_readahead: 2
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:        Options.write_buffer_size: 33554432
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:  Options.max_write_buffer_number: 2
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:          Options.compression: NoCompression
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:             Options.num_levels: 7
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:                           Options.bloom_locality: 0
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:                               Options.ttl: 2592000
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:                       Options.enable_blob_files: false
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:                           Options.min_blob_size: 0
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 07840aea-639a-4cd3-a598-1774a042b57b
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770025065760863, "job": 1, "event": "recovery_started", "wal_files": [9]}
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770025065766849, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 58490, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 137, "table_properties": {"data_size": 56964, "index_size": 168, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 325, "raw_key_size": 3182, "raw_average_key_size": 30, "raw_value_size": 54481, "raw_average_value_size": 523, "num_data_blocks": 9, "num_entries": 104, "num_filter_entries": 104, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770025065, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "07840aea-639a-4cd3-a598-1774a042b57b", "db_session_id": "W2PO4QU95YGVZQBG6TZ2", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770025065766920, "job": 1, "event": "recovery_finished"}
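
RocksDB interleaves machine-readable EVENT_LOG_v1 records (recovery_started, table_file_creation, recovery_finished) with its plain-text logging, each carrying a JSON payload. A minimal sketch recovering those payloads from captured log lines; the prefix handling is specific to this journal format, not a RocksDB interface:

    import json

    def parse_event(line: str):
        """Return the JSON payload of a 'rocksdb: EVENT_LOG_v1 {...}' line, or None."""
        marker = "EVENT_LOG_v1 "
        idx = line.find(marker)
        if idx == -1:
            return None
        return json.loads(line[idx + len(marker):])

    ev = parse_event('rocksdb: EVENT_LOG_v1 {"time_micros": 1770025065766920, '
                     '"job": 1, "event": "recovery_finished"}')
    print(ev["event"], ev["job"])  # recovery_finished 1
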
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5630b94e6e00
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb: DB pointer 0x5630b95f0000
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: rocksdb: [db/db_impl/db_impl.cc:1111]
    ** DB Stats **
    Uptime(secs): 0.0 total, 0.0 interval
    Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
    Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
    Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
    Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
    Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
    Interval stall: 00:00:0.000 H:M:S, 0.0 percent

    ** Compaction Stats [default] **
    Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
    ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
      L0      2/0   59.02 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      9.7      0.01              0.00         1    0.006       0      0       0.0       0.0
     Sum      2/0   59.02 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      9.7      0.01              0.00         1    0.006       0      0       0.0       0.0
     Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      9.7      0.01              0.00         1    0.006       0      0       0.0       0.0

    ** Compaction Stats [default] **
    Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
    ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
    User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      9.7      0.01              0.00         1    0.006       0      0       0.0       0.0

    Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

    Uptime(secs): 0.0 total, 0.0 interval
    Flush(GB): cumulative 0.000, interval 0.000
    AddFile(GB): cumulative 0.000, interval 0.000
    AddFile(Total Files): cumulative 0, interval 0
    AddFile(L0 Files): cumulative 0, interval 0
    AddFile(Keys): cumulative 0, interval 0
    Cumulative compaction: 0.00 GB write, 3.41 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
    Interval compaction: 0.00 GB write, 3.41 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
    Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
    Block cache BinnedLRUCache@0x5630b94e5350#2 capacity: 512.00 MB usage: 0.84 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 2e-05 secs_since: 0
    Block cache entry stats(count,size,portion): FilterBlock(2,0.48 KB,9.23872e-05%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(1,0.00 KB,0%)

    ** File Read Latency Histogram By Level [default] **
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid d241d473-9fcb-5f74-b163-f1ca4454e7f1
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: mon.compute-0@-1(???) e1 preinit fsid d241d473-9fcb-5f74-b163-f1ca4454e7f1
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: mon.compute-0@-1(???).mds e1 new map
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: mon.compute-0@-1(???).mds e1 print_map
    e1
    btime 2026-02-02T09:37:43.907997+0000
    enable_multiple, ever_enabled_multiple: 1,1
    default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
    legacy client fscid: -1

    No filesystems configured
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: mon.compute-0@0(probing) e1 win_standalone_election
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : monmap epoch 1
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : fsid d241d473-9fcb-5f74-b163-f1ca4454e7f1
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : last_changed 2026-02-02T09:37:41.899871+0000
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : created 2026-02-02T09:37:41.899871+0000
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : election_strategy: 1
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : fsmap 
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Feb  2 04:37:45 np0005604790 podman[74490]: 2026-02-02 09:37:45.809988406 +0000 UTC m=+0.054397035 container create e6b751cae9c0f8e29b4d89dea4218f9db81f44870d0758c38276dee0e4d8bcaa (image=quay.io/ceph/ceph:v19, name=objective_chatterjee, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Feb  2 04:37:45 np0005604790 systemd[1]: Started libpod-conmon-e6b751cae9c0f8e29b4d89dea4218f9db81f44870d0758c38276dee0e4d8bcaa.scope.
Feb  2 04:37:45 np0005604790 ceph-mon[74489]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Feb  2 04:37:45 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:37:45 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7a605e38cede6dc2c8b91017fadf27d86d1ff25d9b3c35160708876ad8ff708/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:37:45 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7a605e38cede6dc2c8b91017fadf27d86d1ff25d9b3c35160708876ad8ff708/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 04:37:45 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7a605e38cede6dc2c8b91017fadf27d86d1ff25d9b3c35160708876ad8ff708/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:37:45 np0005604790 podman[74490]: 2026-02-02 09:37:45.791525856 +0000 UTC m=+0.035934545 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:37:45 np0005604790 podman[74490]: 2026-02-02 09:37:45.894049927 +0000 UTC m=+0.138458646 container init e6b751cae9c0f8e29b4d89dea4218f9db81f44870d0758c38276dee0e4d8bcaa (image=quay.io/ceph/ceph:v19, name=objective_chatterjee, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 04:37:45 np0005604790 podman[74490]: 2026-02-02 09:37:45.900866368 +0000 UTC m=+0.145274997 container start e6b751cae9c0f8e29b4d89dea4218f9db81f44870d0758c38276dee0e4d8bcaa (image=quay.io/ceph/ceph:v19, name=objective_chatterjee, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Feb  2 04:37:45 np0005604790 podman[74490]: 2026-02-02 09:37:45.912136787 +0000 UTC m=+0.156545506 container attach e6b751cae9c0f8e29b4d89dea4218f9db81f44870d0758c38276dee0e4d8bcaa (image=quay.io/ceph/ceph:v19, name=objective_chatterjee, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb  2 04:37:46 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0)
Feb  2 04:37:46 np0005604790 systemd[1]: libpod-e6b751cae9c0f8e29b4d89dea4218f9db81f44870d0758c38276dee0e4d8bcaa.scope: Deactivated successfully.
Feb  2 04:37:46 np0005604790 podman[74490]: 2026-02-02 09:37:46.083851094 +0000 UTC m=+0.328259763 container died e6b751cae9c0f8e29b4d89dea4218f9db81f44870d0758c38276dee0e4d8bcaa (image=quay.io/ceph/ceph:v19, name=objective_chatterjee, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 04:37:46 np0005604790 podman[74490]: 2026-02-02 09:37:46.138910005 +0000 UTC m=+0.383318674 container remove e6b751cae9c0f8e29b4d89dea4218f9db81f44870d0758c38276dee0e4d8bcaa (image=quay.io/ceph/ceph:v19, name=objective_chatterjee, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Feb  2 04:37:46 np0005604790 systemd[1]: libpod-conmon-e6b751cae9c0f8e29b4d89dea4218f9db81f44870d0758c38276dee0e4d8bcaa.scope: Deactivated successfully.
Feb  2 04:37:46 np0005604790 podman[74583]: 2026-02-02 09:37:46.193987657 +0000 UTC m=+0.038608306 container create 245b61697302e9ddc66b854758e407900b805bd9cf51d80f9da83edb9ad65d32 (image=quay.io/ceph/ceph:v19, name=loving_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb  2 04:37:46 np0005604790 systemd[1]: Started libpod-conmon-245b61697302e9ddc66b854758e407900b805bd9cf51d80f9da83edb9ad65d32.scope.
Feb  2 04:37:46 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:37:46 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5df6295ffd46a58eb52b6afdee04611152f6db97f7936196a7fee9aca82c4799/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 04:37:46 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5df6295ffd46a58eb52b6afdee04611152f6db97f7936196a7fee9aca82c4799/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:37:46 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5df6295ffd46a58eb52b6afdee04611152f6db97f7936196a7fee9aca82c4799/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:37:46 np0005604790 podman[74583]: 2026-02-02 09:37:46.26457282 +0000 UTC m=+0.109193569 container init 245b61697302e9ddc66b854758e407900b805bd9cf51d80f9da83edb9ad65d32 (image=quay.io/ceph/ceph:v19, name=loving_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:37:46 np0005604790 podman[74583]: 2026-02-02 09:37:46.178129536 +0000 UTC m=+0.022750215 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:37:46 np0005604790 podman[74583]: 2026-02-02 09:37:46.313591659 +0000 UTC m=+0.158212318 container start 245b61697302e9ddc66b854758e407900b805bd9cf51d80f9da83edb9ad65d32 (image=quay.io/ceph/ceph:v19, name=loving_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 04:37:46 np0005604790 podman[74583]: 2026-02-02 09:37:46.320340408 +0000 UTC m=+0.164961107 container attach 245b61697302e9ddc66b854758e407900b805bd9cf51d80f9da83edb9ad65d32 (image=quay.io/ceph/ceph:v19, name=loving_panini, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True)
Feb  2 04:37:46 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0)
Feb  2 04:37:46 np0005604790 systemd[1]: libpod-245b61697302e9ddc66b854758e407900b805bd9cf51d80f9da83edb9ad65d32.scope: Deactivated successfully.
Feb  2 04:37:46 np0005604790 podman[74583]: 2026-02-02 09:37:46.519063855 +0000 UTC m=+0.363684544 container died 245b61697302e9ddc66b854758e407900b805bd9cf51d80f9da83edb9ad65d32 (image=quay.io/ceph/ceph:v19, name=loving_panini, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Feb  2 04:37:46 np0005604790 podman[74583]: 2026-02-02 09:37:46.574244119 +0000 UTC m=+0.418864778 container remove 245b61697302e9ddc66b854758e407900b805bd9cf51d80f9da83edb9ad65d32 (image=quay.io/ceph/ceph:v19, name=loving_panini, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Feb  2 04:37:46 np0005604790 systemd[1]: libpod-conmon-245b61697302e9ddc66b854758e407900b805bd9cf51d80f9da83edb9ad65d32.scope: Deactivated successfully.
Feb  2 04:37:46 np0005604790 systemd[1]: Reloading.
Feb  2 04:37:46 np0005604790 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 04:37:46 np0005604790 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 04:37:46 np0005604790 systemd[1]: Reloading.
Feb  2 04:37:46 np0005604790 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 04:37:46 np0005604790 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 04:37:47 np0005604790 systemd[1]: Starting Ceph mgr.compute-0.djvyfo for d241d473-9fcb-5f74-b163-f1ca4454e7f1...
Feb  2 04:37:47 np0005604790 podman[74765]: 2026-02-02 09:37:47.398692011 +0000 UTC m=+0.059330126 container create 3dfd19b9ab30bf136f4a18ad3b4a13ee303004a583ad880116709be18eec72dc (image=quay.io/ceph/ceph:v19, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 04:37:47 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2f37ceb4f74980d688bddcb1c28aa48f1c53d269dd02fed6509615520c371f9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:37:47 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2f37ceb4f74980d688bddcb1c28aa48f1c53d269dd02fed6509615520c371f9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:37:47 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2f37ceb4f74980d688bddcb1c28aa48f1c53d269dd02fed6509615520c371f9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 04:37:47 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2f37ceb4f74980d688bddcb1c28aa48f1c53d269dd02fed6509615520c371f9/merged/var/lib/ceph/mgr/ceph-compute-0.djvyfo supports timestamps until 2038 (0x7fffffff)
Feb  2 04:37:47 np0005604790 podman[74765]: 2026-02-02 09:37:47.369475215 +0000 UTC m=+0.030113390 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:37:47 np0005604790 podman[74765]: 2026-02-02 09:37:47.476311891 +0000 UTC m=+0.136950056 container init 3dfd19b9ab30bf136f4a18ad3b4a13ee303004a583ad880116709be18eec72dc (image=quay.io/ceph/ceph:v19, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:37:47 np0005604790 podman[74765]: 2026-02-02 09:37:47.48495703 +0000 UTC m=+0.145595135 container start 3dfd19b9ab30bf136f4a18ad3b4a13ee303004a583ad880116709be18eec72dc (image=quay.io/ceph/ceph:v19, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:37:47 np0005604790 bash[74765]: 3dfd19b9ab30bf136f4a18ad3b4a13ee303004a583ad880116709be18eec72dc
Feb  2 04:37:47 np0005604790 systemd[1]: Started Ceph mgr.compute-0.djvyfo for d241d473-9fcb-5f74-b163-f1ca4454e7f1.
Feb  2 04:37:47 np0005604790 ceph-mgr[74785]: set uid:gid to 167:167 (ceph:ceph)
Feb  2 04:37:47 np0005604790 ceph-mgr[74785]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Feb  2 04:37:47 np0005604790 ceph-mgr[74785]: pidfile_write: ignore empty --pid-file
Feb  2 04:37:47 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'alerts'
Feb  2 04:37:47 np0005604790 podman[74806]: 2026-02-02 09:37:47.638403703 +0000 UTC m=+0.086792395 container create 6267dfa15004cc9163411bfd44926da93c0572a496efe800823cef2d928613b6 (image=quay.io/ceph/ceph:v19, name=confident_noyce, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:37:47 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:37:47.643+0000 7fba6860c140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Feb  2 04:37:47 np0005604790 ceph-mgr[74785]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Feb  2 04:37:47 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'balancer'
Feb  2 04:37:47 np0005604790 podman[74806]: 2026-02-02 09:37:47.591807356 +0000 UTC m=+0.040196098 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:37:47 np0005604790 systemd[1]: Started libpod-conmon-6267dfa15004cc9163411bfd44926da93c0572a496efe800823cef2d928613b6.scope.
Feb  2 04:37:47 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:37:47 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d0ada678cedf4eba2bef8498b62a1e9afd4395cbd1ecded6c2350b240bd124a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:37:47 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d0ada678cedf4eba2bef8498b62a1e9afd4395cbd1ecded6c2350b240bd124a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 04:37:47 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d0ada678cedf4eba2bef8498b62a1e9afd4395cbd1ecded6c2350b240bd124a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:37:47 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:37:47.733+0000 7fba6860c140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Feb  2 04:37:47 np0005604790 ceph-mgr[74785]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Feb  2 04:37:47 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'cephadm'
Feb  2 04:37:47 np0005604790 podman[74806]: 2026-02-02 09:37:47.741297163 +0000 UTC m=+0.189685905 container init 6267dfa15004cc9163411bfd44926da93c0572a496efe800823cef2d928613b6 (image=quay.io/ceph/ceph:v19, name=confident_noyce, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:37:47 np0005604790 podman[74806]: 2026-02-02 09:37:47.751919325 +0000 UTC m=+0.200307987 container start 6267dfa15004cc9163411bfd44926da93c0572a496efe800823cef2d928613b6 (image=quay.io/ceph/ceph:v19, name=confident_noyce, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True)
Feb  2 04:37:47 np0005604790 podman[74806]: 2026-02-02 09:37:47.765533157 +0000 UTC m=+0.213921899 container attach 6267dfa15004cc9163411bfd44926da93c0572a496efe800823cef2d928613b6 (image=quay.io/ceph/ceph:v19, name=confident_noyce, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 04:37:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Feb  2 04:37:47 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4229875907' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Feb  2 04:37:47 np0005604790 confident_noyce[74823]: 
Feb  2 04:37:47 np0005604790 confident_noyce[74823]: {
Feb  2 04:37:47 np0005604790 confident_noyce[74823]:    "fsid": "d241d473-9fcb-5f74-b163-f1ca4454e7f1",
Feb  2 04:37:47 np0005604790 confident_noyce[74823]:    "health": {
Feb  2 04:37:47 np0005604790 confident_noyce[74823]:        "status": "HEALTH_OK",
Feb  2 04:37:47 np0005604790 confident_noyce[74823]:        "checks": {},
Feb  2 04:37:47 np0005604790 confident_noyce[74823]:        "mutes": []
Feb  2 04:37:47 np0005604790 confident_noyce[74823]:    },
Feb  2 04:37:47 np0005604790 confident_noyce[74823]:    "election_epoch": 5,
Feb  2 04:37:47 np0005604790 confident_noyce[74823]:    "quorum": [
Feb  2 04:37:47 np0005604790 confident_noyce[74823]:        0
Feb  2 04:37:47 np0005604790 confident_noyce[74823]:    ],
Feb  2 04:37:47 np0005604790 confident_noyce[74823]:    "quorum_names": [
Feb  2 04:37:47 np0005604790 confident_noyce[74823]:        "compute-0"
Feb  2 04:37:47 np0005604790 confident_noyce[74823]:    ],
Feb  2 04:37:47 np0005604790 confident_noyce[74823]:    "quorum_age": 2,
Feb  2 04:37:47 np0005604790 confident_noyce[74823]:    "monmap": {
Feb  2 04:37:47 np0005604790 confident_noyce[74823]:        "epoch": 1,
Feb  2 04:37:47 np0005604790 confident_noyce[74823]:        "min_mon_release_name": "squid",
Feb  2 04:37:47 np0005604790 confident_noyce[74823]:        "num_mons": 1
Feb  2 04:37:47 np0005604790 confident_noyce[74823]:    },
Feb  2 04:37:47 np0005604790 confident_noyce[74823]:    "osdmap": {
Feb  2 04:37:47 np0005604790 confident_noyce[74823]:        "epoch": 1,
Feb  2 04:37:47 np0005604790 confident_noyce[74823]:        "num_osds": 0,
Feb  2 04:37:47 np0005604790 confident_noyce[74823]:        "num_up_osds": 0,
Feb  2 04:37:47 np0005604790 confident_noyce[74823]:        "osd_up_since": 0,
Feb  2 04:37:47 np0005604790 confident_noyce[74823]:        "num_in_osds": 0,
Feb  2 04:37:47 np0005604790 confident_noyce[74823]:        "osd_in_since": 0,
Feb  2 04:37:47 np0005604790 confident_noyce[74823]:        "num_remapped_pgs": 0
Feb  2 04:37:47 np0005604790 confident_noyce[74823]:    },
Feb  2 04:37:47 np0005604790 confident_noyce[74823]:    "pgmap": {
Feb  2 04:37:47 np0005604790 confident_noyce[74823]:        "pgs_by_state": [],
Feb  2 04:37:47 np0005604790 confident_noyce[74823]:        "num_pgs": 0,
Feb  2 04:37:47 np0005604790 confident_noyce[74823]:        "num_pools": 0,
Feb  2 04:37:47 np0005604790 confident_noyce[74823]:        "num_objects": 0,
Feb  2 04:37:47 np0005604790 confident_noyce[74823]:        "data_bytes": 0,
Feb  2 04:37:47 np0005604790 confident_noyce[74823]:        "bytes_used": 0,
Feb  2 04:37:47 np0005604790 confident_noyce[74823]:        "bytes_avail": 0,
Feb  2 04:37:47 np0005604790 confident_noyce[74823]:        "bytes_total": 0
Feb  2 04:37:47 np0005604790 confident_noyce[74823]:    },
Feb  2 04:37:47 np0005604790 confident_noyce[74823]:    "fsmap": {
Feb  2 04:37:47 np0005604790 confident_noyce[74823]:        "epoch": 1,
Feb  2 04:37:47 np0005604790 confident_noyce[74823]:        "btime": "2026-02-02T09:37:43.907997+0000",
Feb  2 04:37:47 np0005604790 confident_noyce[74823]:        "by_rank": [],
Feb  2 04:37:47 np0005604790 confident_noyce[74823]:        "up:standby": 0
Feb  2 04:37:47 np0005604790 confident_noyce[74823]:    },
Feb  2 04:37:47 np0005604790 confident_noyce[74823]:    "mgrmap": {
Feb  2 04:37:47 np0005604790 confident_noyce[74823]:        "available": false,
Feb  2 04:37:47 np0005604790 confident_noyce[74823]:        "num_standbys": 0,
Feb  2 04:37:47 np0005604790 confident_noyce[74823]:        "modules": [
Feb  2 04:37:47 np0005604790 confident_noyce[74823]:            "iostat",
Feb  2 04:37:47 np0005604790 confident_noyce[74823]:            "nfs",
Feb  2 04:37:47 np0005604790 confident_noyce[74823]:            "restful"
Feb  2 04:37:47 np0005604790 confident_noyce[74823]:        ],
Feb  2 04:37:47 np0005604790 confident_noyce[74823]:        "services": {}
Feb  2 04:37:47 np0005604790 confident_noyce[74823]:    },
Feb  2 04:37:47 np0005604790 confident_noyce[74823]:    "servicemap": {
Feb  2 04:37:47 np0005604790 confident_noyce[74823]:        "epoch": 1,
Feb  2 04:37:47 np0005604790 confident_noyce[74823]:        "modified": "2026-02-02T09:37:43.909949+0000",
Feb  2 04:37:47 np0005604790 confident_noyce[74823]:        "services": {}
Feb  2 04:37:47 np0005604790 confident_noyce[74823]:    },
Feb  2 04:37:47 np0005604790 confident_noyce[74823]:    "progress_events": {}
Feb  2 04:37:47 np0005604790 confident_noyce[74823]: }
Feb  2 04:37:47 np0005604790 systemd[1]: libpod-6267dfa15004cc9163411bfd44926da93c0572a496efe800823cef2d928613b6.scope: Deactivated successfully.
Feb  2 04:37:47 np0005604790 podman[74806]: 2026-02-02 09:37:47.974918224 +0000 UTC m=+0.423306896 container died 6267dfa15004cc9163411bfd44926da93c0572a496efe800823cef2d928613b6 (image=quay.io/ceph/ceph:v19, name=confident_noyce, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb  2 04:37:48 np0005604790 systemd[1]: var-lib-containers-storage-overlay-3d0ada678cedf4eba2bef8498b62a1e9afd4395cbd1ecded6c2350b240bd124a-merged.mount: Deactivated successfully.
Feb  2 04:37:48 np0005604790 podman[74806]: 2026-02-02 09:37:48.030187951 +0000 UTC m=+0.478576623 container remove 6267dfa15004cc9163411bfd44926da93c0572a496efe800823cef2d928613b6 (image=quay.io/ceph/ceph:v19, name=confident_noyce, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb  2 04:37:48 np0005604790 systemd[1]: libpod-conmon-6267dfa15004cc9163411bfd44926da93c0572a496efe800823cef2d928613b6.scope: Deactivated successfully.
Feb  2 04:37:48 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'crash'
Feb  2 04:37:48 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:37:48.427+0000 7fba6860c140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Feb  2 04:37:48 np0005604790 ceph-mgr[74785]: mgr[py] Module crash has missing NOTIFY_TYPES member
Feb  2 04:37:48 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'dashboard'
Feb  2 04:37:48 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'devicehealth'
Feb  2 04:37:48 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:37:48.957+0000 7fba6860c140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Feb  2 04:37:48 np0005604790 ceph-mgr[74785]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Feb  2 04:37:48 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'diskprediction_local'
Feb  2 04:37:49 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Feb  2 04:37:49 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Feb  2 04:37:49 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]:  from numpy import show_config as show_numpy_config
Feb  2 04:37:49 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:37:49.093+0000 7fba6860c140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Feb  2 04:37:49 np0005604790 ceph-mgr[74785]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Feb  2 04:37:49 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'influx'
Feb  2 04:37:49 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:37:49.152+0000 7fba6860c140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Feb  2 04:37:49 np0005604790 ceph-mgr[74785]: mgr[py] Module influx has missing NOTIFY_TYPES member
Feb  2 04:37:49 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'insights'
Feb  2 04:37:49 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'iostat'
Feb  2 04:37:49 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:37:49.271+0000 7fba6860c140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Feb  2 04:37:49 np0005604790 ceph-mgr[74785]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Feb  2 04:37:49 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'k8sevents'
Feb  2 04:37:49 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'localpool'
Feb  2 04:37:49 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'mds_autoscaler'
Feb  2 04:37:49 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'mirroring'
Feb  2 04:37:49 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'nfs'
Feb  2 04:37:50 np0005604790 podman[74874]: 2026-02-02 09:37:50.09608908 +0000 UTC m=+0.048580011 container create 2842f8ca436b178b665717432c4626367570c1c938ef005cf6aa578c3eb11250 (image=quay.io/ceph/ceph:v19, name=pedantic_sinoussi, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Feb  2 04:37:50 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:37:50.102+0000 7fba6860c140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Feb  2 04:37:50 np0005604790 ceph-mgr[74785]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Feb  2 04:37:50 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'orchestrator'
Feb  2 04:37:50 np0005604790 systemd[1]: Started libpod-conmon-2842f8ca436b178b665717432c4626367570c1c938ef005cf6aa578c3eb11250.scope.
Feb  2 04:37:50 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:37:50 np0005604790 podman[74874]: 2026-02-02 09:37:50.064222944 +0000 UTC m=+0.016713905 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:37:50 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fc53a67c20a8ea138e8b9c72a330c495ae9f8433fd40943337bdf4897c64813/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:37:50 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fc53a67c20a8ea138e8b9c72a330c495ae9f8433fd40943337bdf4897c64813/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 04:37:50 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fc53a67c20a8ea138e8b9c72a330c495ae9f8433fd40943337bdf4897c64813/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:37:50 np0005604790 podman[74874]: 2026-02-02 09:37:50.188379409 +0000 UTC m=+0.140870360 container init 2842f8ca436b178b665717432c4626367570c1c938ef005cf6aa578c3eb11250 (image=quay.io/ceph/ceph:v19, name=pedantic_sinoussi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb  2 04:37:50 np0005604790 podman[74874]: 2026-02-02 09:37:50.194839641 +0000 UTC m=+0.147330552 container start 2842f8ca436b178b665717432c4626367570c1c938ef005cf6aa578c3eb11250 (image=quay.io/ceph/ceph:v19, name=pedantic_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb  2 04:37:50 np0005604790 podman[74874]: 2026-02-02 09:37:50.203438249 +0000 UTC m=+0.155929200 container attach 2842f8ca436b178b665717432c4626367570c1c938ef005cf6aa578c3eb11250 (image=quay.io/ceph/ceph:v19, name=pedantic_sinoussi, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:37:50 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:37:50.291+0000 7fba6860c140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Feb  2 04:37:50 np0005604790 ceph-mgr[74785]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Feb  2 04:37:50 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'osd_perf_query'
Feb  2 04:37:50 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:37:50.361+0000 7fba6860c140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Feb  2 04:37:50 np0005604790 ceph-mgr[74785]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Feb  2 04:37:50 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'osd_support'
Feb  2 04:37:50 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Feb  2 04:37:50 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1691876241' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Feb  2 04:37:50 np0005604790 pedantic_sinoussi[74890]: 
Feb  2 04:37:50 np0005604790 pedantic_sinoussi[74890]: {
Feb  2 04:37:50 np0005604790 pedantic_sinoussi[74890]:    "fsid": "d241d473-9fcb-5f74-b163-f1ca4454e7f1",
Feb  2 04:37:50 np0005604790 pedantic_sinoussi[74890]:    "health": {
Feb  2 04:37:50 np0005604790 pedantic_sinoussi[74890]:        "status": "HEALTH_OK",
Feb  2 04:37:50 np0005604790 pedantic_sinoussi[74890]:        "checks": {},
Feb  2 04:37:50 np0005604790 pedantic_sinoussi[74890]:        "mutes": []
Feb  2 04:37:50 np0005604790 pedantic_sinoussi[74890]:    },
Feb  2 04:37:50 np0005604790 pedantic_sinoussi[74890]:    "election_epoch": 5,
Feb  2 04:37:50 np0005604790 pedantic_sinoussi[74890]:    "quorum": [
Feb  2 04:37:50 np0005604790 pedantic_sinoussi[74890]:        0
Feb  2 04:37:50 np0005604790 pedantic_sinoussi[74890]:    ],
Feb  2 04:37:50 np0005604790 pedantic_sinoussi[74890]:    "quorum_names": [
Feb  2 04:37:50 np0005604790 pedantic_sinoussi[74890]:        "compute-0"
Feb  2 04:37:50 np0005604790 pedantic_sinoussi[74890]:    ],
Feb  2 04:37:50 np0005604790 pedantic_sinoussi[74890]:    "quorum_age": 4,
Feb  2 04:37:50 np0005604790 pedantic_sinoussi[74890]:    "monmap": {
Feb  2 04:37:50 np0005604790 pedantic_sinoussi[74890]:        "epoch": 1,
Feb  2 04:37:50 np0005604790 pedantic_sinoussi[74890]:        "min_mon_release_name": "squid",
Feb  2 04:37:50 np0005604790 pedantic_sinoussi[74890]:        "num_mons": 1
Feb  2 04:37:50 np0005604790 pedantic_sinoussi[74890]:    },
Feb  2 04:37:50 np0005604790 pedantic_sinoussi[74890]:    "osdmap": {
Feb  2 04:37:50 np0005604790 pedantic_sinoussi[74890]:        "epoch": 1,
Feb  2 04:37:50 np0005604790 pedantic_sinoussi[74890]:        "num_osds": 0,
Feb  2 04:37:50 np0005604790 pedantic_sinoussi[74890]:        "num_up_osds": 0,
Feb  2 04:37:50 np0005604790 pedantic_sinoussi[74890]:        "osd_up_since": 0,
Feb  2 04:37:50 np0005604790 pedantic_sinoussi[74890]:        "num_in_osds": 0,
Feb  2 04:37:50 np0005604790 pedantic_sinoussi[74890]:        "osd_in_since": 0,
Feb  2 04:37:50 np0005604790 pedantic_sinoussi[74890]:        "num_remapped_pgs": 0
Feb  2 04:37:50 np0005604790 pedantic_sinoussi[74890]:    },
Feb  2 04:37:50 np0005604790 pedantic_sinoussi[74890]:    "pgmap": {
Feb  2 04:37:50 np0005604790 pedantic_sinoussi[74890]:        "pgs_by_state": [],
Feb  2 04:37:50 np0005604790 pedantic_sinoussi[74890]:        "num_pgs": 0,
Feb  2 04:37:50 np0005604790 pedantic_sinoussi[74890]:        "num_pools": 0,
Feb  2 04:37:50 np0005604790 pedantic_sinoussi[74890]:        "num_objects": 0,
Feb  2 04:37:50 np0005604790 pedantic_sinoussi[74890]:        "data_bytes": 0,
Feb  2 04:37:50 np0005604790 pedantic_sinoussi[74890]:        "bytes_used": 0,
Feb  2 04:37:50 np0005604790 pedantic_sinoussi[74890]:        "bytes_avail": 0,
Feb  2 04:37:50 np0005604790 pedantic_sinoussi[74890]:        "bytes_total": 0
Feb  2 04:37:50 np0005604790 pedantic_sinoussi[74890]:    },
Feb  2 04:37:50 np0005604790 pedantic_sinoussi[74890]:    "fsmap": {
Feb  2 04:37:50 np0005604790 pedantic_sinoussi[74890]:        "epoch": 1,
Feb  2 04:37:50 np0005604790 pedantic_sinoussi[74890]:        "btime": "2026-02-02T09:37:43.907997+0000",
Feb  2 04:37:50 np0005604790 pedantic_sinoussi[74890]:        "by_rank": [],
Feb  2 04:37:50 np0005604790 pedantic_sinoussi[74890]:        "up:standby": 0
Feb  2 04:37:50 np0005604790 pedantic_sinoussi[74890]:    },
Feb  2 04:37:50 np0005604790 pedantic_sinoussi[74890]:    "mgrmap": {
Feb  2 04:37:50 np0005604790 pedantic_sinoussi[74890]:        "available": false,
Feb  2 04:37:50 np0005604790 pedantic_sinoussi[74890]:        "num_standbys": 0,
Feb  2 04:37:50 np0005604790 pedantic_sinoussi[74890]:        "modules": [
Feb  2 04:37:50 np0005604790 pedantic_sinoussi[74890]:            "iostat",
Feb  2 04:37:50 np0005604790 pedantic_sinoussi[74890]:            "nfs",
Feb  2 04:37:50 np0005604790 pedantic_sinoussi[74890]:            "restful"
Feb  2 04:37:50 np0005604790 pedantic_sinoussi[74890]:        ],
Feb  2 04:37:50 np0005604790 pedantic_sinoussi[74890]:        "services": {}
Feb  2 04:37:50 np0005604790 pedantic_sinoussi[74890]:    },
Feb  2 04:37:50 np0005604790 pedantic_sinoussi[74890]:    "servicemap": {
Feb  2 04:37:50 np0005604790 pedantic_sinoussi[74890]:        "epoch": 1,
Feb  2 04:37:50 np0005604790 pedantic_sinoussi[74890]:        "modified": "2026-02-02T09:37:43.909949+0000",
Feb  2 04:37:50 np0005604790 pedantic_sinoussi[74890]:        "services": {}
Feb  2 04:37:50 np0005604790 pedantic_sinoussi[74890]:    },
Feb  2 04:37:50 np0005604790 pedantic_sinoussi[74890]:    "progress_events": {}
Feb  2 04:37:50 np0005604790 pedantic_sinoussi[74890]: }
Feb  2 04:37:50 np0005604790 systemd[1]: libpod-2842f8ca436b178b665717432c4626367570c1c938ef005cf6aa578c3eb11250.scope: Deactivated successfully.
Feb  2 04:37:50 np0005604790 podman[74874]: 2026-02-02 09:37:50.405119502 +0000 UTC m=+0.357610413 container died 2842f8ca436b178b665717432c4626367570c1c938ef005cf6aa578c3eb11250 (image=quay.io/ceph/ceph:v19, name=pedantic_sinoussi, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 04:37:50 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:37:50.422+0000 7fba6860c140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Feb  2 04:37:50 np0005604790 ceph-mgr[74785]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Feb  2 04:37:50 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'pg_autoscaler'
Feb  2 04:37:50 np0005604790 systemd[1]: var-lib-containers-storage-overlay-0fc53a67c20a8ea138e8b9c72a330c495ae9f8433fd40943337bdf4897c64813-merged.mount: Deactivated successfully.
Feb  2 04:37:50 np0005604790 podman[74874]: 2026-02-02 09:37:50.444939028 +0000 UTC m=+0.397429939 container remove 2842f8ca436b178b665717432c4626367570c1c938ef005cf6aa578c3eb11250 (image=quay.io/ceph/ceph:v19, name=pedantic_sinoussi, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:37:50 np0005604790 systemd[1]: libpod-conmon-2842f8ca436b178b665717432c4626367570c1c938ef005cf6aa578c3eb11250.scope: Deactivated successfully.
Feb  2 04:37:50 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:37:50.492+0000 7fba6860c140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Feb  2 04:37:50 np0005604790 ceph-mgr[74785]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Feb  2 04:37:50 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'progress'
Feb  2 04:37:50 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:37:50.562+0000 7fba6860c140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Feb  2 04:37:50 np0005604790 ceph-mgr[74785]: mgr[py] Module progress has missing NOTIFY_TYPES member
Feb  2 04:37:50 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'prometheus'
Feb  2 04:37:50 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:37:50.863+0000 7fba6860c140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Feb  2 04:37:50 np0005604790 ceph-mgr[74785]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Feb  2 04:37:50 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'rbd_support'
Feb  2 04:37:50 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:37:50.962+0000 7fba6860c140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Feb  2 04:37:50 np0005604790 ceph-mgr[74785]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Feb  2 04:37:50 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'restful'
Feb  2 04:37:51 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'rgw'
Feb  2 04:37:51 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:37:51.378+0000 7fba6860c140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Feb  2 04:37:51 np0005604790 ceph-mgr[74785]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Feb  2 04:37:51 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'rook'
Feb  2 04:37:51 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:37:51.869+0000 7fba6860c140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Feb  2 04:37:51 np0005604790 ceph-mgr[74785]: mgr[py] Module rook has missing NOTIFY_TYPES member
Feb  2 04:37:51 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'selftest'
Feb  2 04:37:51 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:37:51.936+0000 7fba6860c140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Feb  2 04:37:51 np0005604790 ceph-mgr[74785]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Feb  2 04:37:51 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'snap_schedule'
Feb  2 04:37:52 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:37:52.015+0000 7fba6860c140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Feb  2 04:37:52 np0005604790 ceph-mgr[74785]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Feb  2 04:37:52 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'stats'
Feb  2 04:37:52 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'status'
Feb  2 04:37:52 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:37:52.156+0000 7fba6860c140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Feb  2 04:37:52 np0005604790 ceph-mgr[74785]: mgr[py] Module status has missing NOTIFY_TYPES member
Feb  2 04:37:52 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'telegraf'
Feb  2 04:37:52 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:37:52.232+0000 7fba6860c140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Feb  2 04:37:52 np0005604790 ceph-mgr[74785]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Feb  2 04:37:52 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'telemetry'
Feb  2 04:37:52 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:37:52.371+0000 7fba6860c140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Feb  2 04:37:52 np0005604790 ceph-mgr[74785]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Feb  2 04:37:52 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'test_orchestrator'
Feb  2 04:37:52 np0005604790 podman[74930]: 2026-02-02 09:37:52.520990837 +0000 UTC m=+0.050199254 container create 3828ab6f096e94a1a63c359b5f6b2a3a37c5e296751a7924e23013dd8c40297f (image=quay.io/ceph/ceph:v19, name=sad_curran, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0)
Feb  2 04:37:52 np0005604790 systemd[1]: Started libpod-conmon-3828ab6f096e94a1a63c359b5f6b2a3a37c5e296751a7924e23013dd8c40297f.scope.
Feb  2 04:37:52 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:37:52 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c48505e38a2327a523d6dc809e9f2cb6fc89d395c43a6c2030e014e7205af9d9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:37:52 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c48505e38a2327a523d6dc809e9f2cb6fc89d395c43a6c2030e014e7205af9d9/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 04:37:52 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c48505e38a2327a523d6dc809e9f2cb6fc89d395c43a6c2030e014e7205af9d9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:37:52 np0005604790 podman[74930]: 2026-02-02 09:37:52.592743271 +0000 UTC m=+0.121951728 container init 3828ab6f096e94a1a63c359b5f6b2a3a37c5e296751a7924e23013dd8c40297f (image=quay.io/ceph/ceph:v19, name=sad_curran, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:37:52 np0005604790 podman[74930]: 2026-02-02 09:37:52.598493654 +0000 UTC m=+0.127702041 container start 3828ab6f096e94a1a63c359b5f6b2a3a37c5e296751a7924e23013dd8c40297f (image=quay.io/ceph/ceph:v19, name=sad_curran, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 04:37:52 np0005604790 podman[74930]: 2026-02-02 09:37:52.503983335 +0000 UTC m=+0.033191732 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:37:52 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:37:52.599+0000 7fba6860c140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Feb  2 04:37:52 np0005604790 ceph-mgr[74785]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Feb  2 04:37:52 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'volumes'
Feb  2 04:37:52 np0005604790 podman[74930]: 2026-02-02 09:37:52.602642604 +0000 UTC m=+0.131851031 container attach 3828ab6f096e94a1a63c359b5f6b2a3a37c5e296751a7924e23013dd8c40297f (image=quay.io/ceph/ceph:v19, name=sad_curran, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Feb  2 04:37:52 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Feb  2 04:37:52 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/32600167' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Feb  2 04:37:52 np0005604790 sad_curran[74946]: 
Feb  2 04:37:52 np0005604790 sad_curran[74946]: {
Feb  2 04:37:52 np0005604790 sad_curran[74946]:    "fsid": "d241d473-9fcb-5f74-b163-f1ca4454e7f1",
Feb  2 04:37:52 np0005604790 sad_curran[74946]:    "health": {
Feb  2 04:37:52 np0005604790 sad_curran[74946]:        "status": "HEALTH_OK",
Feb  2 04:37:52 np0005604790 sad_curran[74946]:        "checks": {},
Feb  2 04:37:52 np0005604790 sad_curran[74946]:        "mutes": []
Feb  2 04:37:52 np0005604790 sad_curran[74946]:    },
Feb  2 04:37:52 np0005604790 sad_curran[74946]:    "election_epoch": 5,
Feb  2 04:37:52 np0005604790 sad_curran[74946]:    "quorum": [
Feb  2 04:37:52 np0005604790 sad_curran[74946]:        0
Feb  2 04:37:52 np0005604790 sad_curran[74946]:    ],
Feb  2 04:37:52 np0005604790 sad_curran[74946]:    "quorum_names": [
Feb  2 04:37:52 np0005604790 sad_curran[74946]:        "compute-0"
Feb  2 04:37:52 np0005604790 sad_curran[74946]:    ],
Feb  2 04:37:52 np0005604790 sad_curran[74946]:    "quorum_age": 7,
Feb  2 04:37:52 np0005604790 sad_curran[74946]:    "monmap": {
Feb  2 04:37:52 np0005604790 sad_curran[74946]:        "epoch": 1,
Feb  2 04:37:52 np0005604790 sad_curran[74946]:        "min_mon_release_name": "squid",
Feb  2 04:37:52 np0005604790 sad_curran[74946]:        "num_mons": 1
Feb  2 04:37:52 np0005604790 sad_curran[74946]:    },
Feb  2 04:37:52 np0005604790 sad_curran[74946]:    "osdmap": {
Feb  2 04:37:52 np0005604790 sad_curran[74946]:        "epoch": 1,
Feb  2 04:37:52 np0005604790 sad_curran[74946]:        "num_osds": 0,
Feb  2 04:37:52 np0005604790 sad_curran[74946]:        "num_up_osds": 0,
Feb  2 04:37:52 np0005604790 sad_curran[74946]:        "osd_up_since": 0,
Feb  2 04:37:52 np0005604790 sad_curran[74946]:        "num_in_osds": 0,
Feb  2 04:37:52 np0005604790 sad_curran[74946]:        "osd_in_since": 0,
Feb  2 04:37:52 np0005604790 sad_curran[74946]:        "num_remapped_pgs": 0
Feb  2 04:37:52 np0005604790 sad_curran[74946]:    },
Feb  2 04:37:52 np0005604790 sad_curran[74946]:    "pgmap": {
Feb  2 04:37:52 np0005604790 sad_curran[74946]:        "pgs_by_state": [],
Feb  2 04:37:52 np0005604790 sad_curran[74946]:        "num_pgs": 0,
Feb  2 04:37:52 np0005604790 sad_curran[74946]:        "num_pools": 0,
Feb  2 04:37:52 np0005604790 sad_curran[74946]:        "num_objects": 0,
Feb  2 04:37:52 np0005604790 sad_curran[74946]:        "data_bytes": 0,
Feb  2 04:37:52 np0005604790 sad_curran[74946]:        "bytes_used": 0,
Feb  2 04:37:52 np0005604790 sad_curran[74946]:        "bytes_avail": 0,
Feb  2 04:37:52 np0005604790 sad_curran[74946]:        "bytes_total": 0
Feb  2 04:37:52 np0005604790 sad_curran[74946]:    },
Feb  2 04:37:52 np0005604790 sad_curran[74946]:    "fsmap": {
Feb  2 04:37:52 np0005604790 sad_curran[74946]:        "epoch": 1,
Feb  2 04:37:52 np0005604790 sad_curran[74946]:        "btime": "2026-02-02T09:37:43:907997+0000",
Feb  2 04:37:52 np0005604790 sad_curran[74946]:        "by_rank": [],
Feb  2 04:37:52 np0005604790 sad_curran[74946]:        "up:standby": 0
Feb  2 04:37:52 np0005604790 sad_curran[74946]:    },
Feb  2 04:37:52 np0005604790 sad_curran[74946]:    "mgrmap": {
Feb  2 04:37:52 np0005604790 sad_curran[74946]:        "available": false,
Feb  2 04:37:52 np0005604790 sad_curran[74946]:        "num_standbys": 0,
Feb  2 04:37:52 np0005604790 sad_curran[74946]:        "modules": [
Feb  2 04:37:52 np0005604790 sad_curran[74946]:            "iostat",
Feb  2 04:37:52 np0005604790 sad_curran[74946]:            "nfs",
Feb  2 04:37:52 np0005604790 sad_curran[74946]:            "restful"
Feb  2 04:37:52 np0005604790 sad_curran[74946]:        ],
Feb  2 04:37:52 np0005604790 sad_curran[74946]:        "services": {}
Feb  2 04:37:52 np0005604790 sad_curran[74946]:    },
Feb  2 04:37:52 np0005604790 sad_curran[74946]:    "servicemap": {
Feb  2 04:37:52 np0005604790 sad_curran[74946]:        "epoch": 1,
Feb  2 04:37:52 np0005604790 sad_curran[74946]:        "modified": "2026-02-02T09:37:43.909949+0000",
Feb  2 04:37:52 np0005604790 sad_curran[74946]:        "services": {}
Feb  2 04:37:52 np0005604790 sad_curran[74946]:    },
Feb  2 04:37:52 np0005604790 sad_curran[74946]:    "progress_events": {}
Feb  2 04:37:52 np0005604790 sad_curran[74946]: }
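
The status dumps so far were produced by the "status" mon_command visible in the audit lines, and at this point the mgrmap still reports "available": false. The bootstrap simply polls until an active mgr shows up, roughly like this hypothetical loop (the real caller is cephadm; the interval is illustrative):

    import json
    import subprocess
    import time

    # Re-run "ceph status" until the mgrmap reports an active manager,
    # which in this log happens by the 04:37:55 dump below.
    while True:
        out = subprocess.run(
            ["ceph", "status", "--format", "json-pretty"],
            capture_output=True, check=True, text=True,
        ).stdout
        if json.loads(out)["mgrmap"]["available"]:
            break
        time.sleep(2)
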
Feb  2 04:37:52 np0005604790 systemd[1]: libpod-3828ab6f096e94a1a63c359b5f6b2a3a37c5e296751a7924e23013dd8c40297f.scope: Deactivated successfully.
Feb  2 04:37:52 np0005604790 podman[74930]: 2026-02-02 09:37:52.82819584 +0000 UTC m=+0.357404207 container died 3828ab6f096e94a1a63c359b5f6b2a3a37c5e296751a7924e23013dd8c40297f (image=quay.io/ceph/ceph:v19, name=sad_curran, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Feb  2 04:37:52 np0005604790 systemd[1]: var-lib-containers-storage-overlay-c48505e38a2327a523d6dc809e9f2cb6fc89d395c43a6c2030e014e7205af9d9-merged.mount: Deactivated successfully.
Feb  2 04:37:52 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:37:52.853+0000 7fba6860c140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Feb  2 04:37:52 np0005604790 ceph-mgr[74785]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Feb  2 04:37:52 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'zabbix'
Feb  2 04:37:52 np0005604790 podman[74930]: 2026-02-02 09:37:52.86171186 +0000 UTC m=+0.390920247 container remove 3828ab6f096e94a1a63c359b5f6b2a3a37c5e296751a7924e23013dd8c40297f (image=quay.io/ceph/ceph:v19, name=sad_curran, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:37:52 np0005604790 systemd[1]: libpod-conmon-3828ab6f096e94a1a63c359b5f6b2a3a37c5e296751a7924e23013dd8c40297f.scope: Deactivated successfully.
Feb  2 04:37:52 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:37:52.923+0000 7fba6860c140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Feb  2 04:37:52 np0005604790 ceph-mgr[74785]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
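
The long run of "Module ... has missing NOTIFY_TYPES member" lines ends here. ceph-mgr prints one per Python module at load time when the module class does not declare which cluster notifications it consumes; the modules still load, so the warnings are cosmetic. A minimal sketch of a module that declares the attribute, assuming the in-tree mgr_module API shipped in the ceph:v19 (squid) image; the class name is hypothetical:

    from mgr_module import MgrModule, NotifyType

    class Example(MgrModule):
        # Declaring NOTIFY_TYPES silences the loader warning and tells
        # ceph-mgr which notifications to route to notify().
        NOTIFY_TYPES = [NotifyType.mon_map, NotifyType.osd_map]

        def notify(self, notify_type, notify_id):
            self.log.info("notification: %s %s", notify_type, notify_id)
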
Feb  2 04:37:52 np0005604790 ceph-mgr[74785]: ms_deliver_dispatch: unhandled message 0x55a7d063c9c0 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Feb  2 04:37:52 np0005604790 ceph-mon[74489]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.djvyfo
Feb  2 04:37:52 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.djvyfo(active, starting, since 0.010373s)
Feb  2 04:37:52 np0005604790 ceph-mgr[74785]: mgr handle_mgr_map Activating!
Feb  2 04:37:52 np0005604790 ceph-mgr[74785]: mgr handle_mgr_map I am now activating
Feb  2 04:37:52 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Feb  2 04:37:52 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1185132920' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "mds metadata"}]: dispatch
Feb  2 04:37:52 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).mds e1 all = 1
Feb  2 04:37:52 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Feb  2 04:37:52 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1185132920' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd metadata"}]: dispatch
Feb  2 04:37:52 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Feb  2 04:37:52 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1185132920' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "mon metadata"}]: dispatch
Feb  2 04:37:52 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Feb  2 04:37:52 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1185132920' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Feb  2 04:37:52 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.djvyfo", "id": "compute-0.djvyfo"} v 0)
Feb  2 04:37:52 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1185132920' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "mgr metadata", "who": "compute-0.djvyfo", "id": "compute-0.djvyfo"}]: dispatch
Feb  2 04:37:52 np0005604790 ceph-mgr[74785]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 04:37:52 np0005604790 ceph-mgr[74785]: mgr load Constructed class from module: balancer
Feb  2 04:37:52 np0005604790 ceph-mgr[74785]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 04:37:52 np0005604790 ceph-mgr[74785]: mgr load Constructed class from module: crash
Feb  2 04:37:52 np0005604790 ceph-mgr[74785]: [balancer INFO root] Starting
Feb  2 04:37:52 np0005604790 ceph-mgr[74785]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 04:37:52 np0005604790 ceph-mgr[74785]: mgr load Constructed class from module: devicehealth
Feb  2 04:37:52 np0005604790 ceph-mgr[74785]: [balancer INFO root] Optimize plan auto_2026-02-02_09:37:52
Feb  2 04:37:52 np0005604790 ceph-mgr[74785]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 04:37:52 np0005604790 ceph-mgr[74785]: [balancer INFO root] do_upmap
Feb  2 04:37:52 np0005604790 ceph-mon[74489]: log_channel(cluster) log [INF] : Manager daemon compute-0.djvyfo is now available
Feb  2 04:37:52 np0005604790 ceph-mgr[74785]: [balancer INFO root] No pools available
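
The balancer's first pass is visible above: an automatic plan in upmap mode with a max-misplaced ratio of 0.05, abandoned because the cluster has no pools yet. Its mode and state can be inspected later with the module's own status command, e.g. (a hypothetical follow-up, not part of this log):

    import subprocess

    # Print the balancer state seen in the log: active flag, mode,
    # and any queued optimization plans.
    subprocess.run(["ceph", "balancer", "status"], check=True)
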
Feb  2 04:37:52 np0005604790 ceph-mgr[74785]: [devicehealth INFO root] Starting
Feb  2 04:37:52 np0005604790 ceph-mgr[74785]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 04:37:52 np0005604790 ceph-mgr[74785]: mgr load Constructed class from module: iostat
Feb  2 04:37:52 np0005604790 ceph-mgr[74785]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 04:37:52 np0005604790 ceph-mgr[74785]: mgr load Constructed class from module: nfs
Feb  2 04:37:52 np0005604790 ceph-mgr[74785]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 04:37:52 np0005604790 ceph-mgr[74785]: mgr load Constructed class from module: orchestrator
Feb  2 04:37:52 np0005604790 ceph-mgr[74785]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 04:37:52 np0005604790 ceph-mgr[74785]: mgr load Constructed class from module: pg_autoscaler
Feb  2 04:37:52 np0005604790 ceph-mgr[74785]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 04:37:52 np0005604790 ceph-mgr[74785]: mgr load Constructed class from module: progress
Feb  2 04:37:52 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 04:37:52 np0005604790 ceph-mgr[74785]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 04:37:52 np0005604790 ceph-mgr[74785]: [progress INFO root] Loading...
Feb  2 04:37:52 np0005604790 ceph-mgr[74785]: [progress INFO root] No stored events to load
Feb  2 04:37:52 np0005604790 ceph-mgr[74785]: [progress INFO root] Loaded [] historic events
Feb  2 04:37:52 np0005604790 ceph-mgr[74785]: [progress INFO root] Loaded OSDMap, ready.
Feb  2 04:37:52 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] recovery thread starting
Feb  2 04:37:52 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] starting setup
Feb  2 04:37:52 np0005604790 ceph-mgr[74785]: mgr load Constructed class from module: rbd_support
Feb  2 04:37:52 np0005604790 ceph-mgr[74785]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 04:37:52 np0005604790 ceph-mgr[74785]: mgr load Constructed class from module: restful
Feb  2 04:37:52 np0005604790 ceph-mgr[74785]: [restful INFO root] server_addr: :: server_port: 8003
Feb  2 04:37:52 np0005604790 ceph-mgr[74785]: [restful WARNING root] server not running: no certificate configured
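
The restful module constructed successfully but stays down: it refuses to bind its port (8003 above) until a TLS certificate is stored for it. Assuming the module's documented self-signed helper is available in this release, one way to supply a certificate is:

    import subprocess

    # Store a self-signed cert for the restful module; it begins
    # serving once the mgr reloads the module.
    subprocess.run(["ceph", "restful", "create-self-signed-cert"], check=True)
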
Feb  2 04:37:52 np0005604790 ceph-mgr[74785]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 04:37:52 np0005604790 ceph-mgr[74785]: mgr load Constructed class from module: status
Feb  2 04:37:52 np0005604790 ceph-mgr[74785]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 04:37:52 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.djvyfo/mirror_snapshot_schedule"} v 0)
Feb  2 04:37:52 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1185132920' entity='mgr.compute-0.djvyfo' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.djvyfo/mirror_snapshot_schedule"}]: dispatch
Feb  2 04:37:52 np0005604790 ceph-mgr[74785]: mgr load Constructed class from module: telemetry
Feb  2 04:37:52 np0005604790 ceph-mgr[74785]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 04:37:52 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0)
Feb  2 04:37:52 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 04:37:52 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Feb  2 04:37:52 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] PerfHandler: starting
Feb  2 04:37:52 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] TaskHandler: starting
Feb  2 04:37:52 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.djvyfo/trash_purge_schedule"} v 0)
Feb  2 04:37:52 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1185132920' entity='mgr.compute-0.djvyfo' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.djvyfo/trash_purge_schedule"}]: dispatch
Feb  2 04:37:52 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1185132920' entity='mgr.compute-0.djvyfo' 
Feb  2 04:37:52 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 04:37:52 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0)
Feb  2 04:37:52 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Feb  2 04:37:52 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] setup complete
Feb  2 04:37:52 np0005604790 ceph-mgr[74785]: mgr load Constructed class from module: volumes
Feb  2 04:37:52 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1185132920' entity='mgr.compute-0.djvyfo' 
Feb  2 04:37:52 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0)
Feb  2 04:37:52 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1185132920' entity='mgr.compute-0.djvyfo' 
Feb  2 04:37:53 np0005604790 ceph-mon[74489]: Activating manager daemon compute-0.djvyfo
Feb  2 04:37:53 np0005604790 ceph-mon[74489]: Manager daemon compute-0.djvyfo is now available
Feb  2 04:37:53 np0005604790 ceph-mon[74489]: from='mgr.14102 192.168.122.100:0/1185132920' entity='mgr.compute-0.djvyfo' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.djvyfo/mirror_snapshot_schedule"}]: dispatch
Feb  2 04:37:53 np0005604790 ceph-mon[74489]: from='mgr.14102 192.168.122.100:0/1185132920' entity='mgr.compute-0.djvyfo' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.djvyfo/trash_purge_schedule"}]: dispatch
Feb  2 04:37:53 np0005604790 ceph-mon[74489]: from='mgr.14102 192.168.122.100:0/1185132920' entity='mgr.compute-0.djvyfo' 
Feb  2 04:37:53 np0005604790 ceph-mon[74489]: from='mgr.14102 192.168.122.100:0/1185132920' entity='mgr.compute-0.djvyfo' 
Feb  2 04:37:53 np0005604790 ceph-mon[74489]: from='mgr.14102 192.168.122.100:0/1185132920' entity='mgr.compute-0.djvyfo' 
Feb  2 04:37:53 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.djvyfo(active, since 1.02518s)
Feb  2 04:37:54 np0005604790 ceph-mgr[74785]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Feb  2 04:37:54 np0005604790 podman[75064]: 2026-02-02 09:37:54.946669755 +0000 UTC m=+0.057602600 container create a1884819af1750be178c94b8074ade763bb68898bca83944842e893b4d40b7fe (image=quay.io/ceph/ceph:v19, name=serene_sanderson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb  2 04:37:54 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.djvyfo(active, since 2s)
Feb  2 04:37:54 np0005604790 systemd[1]: Started libpod-conmon-a1884819af1750be178c94b8074ade763bb68898bca83944842e893b4d40b7fe.scope.
Feb  2 04:37:55 np0005604790 podman[75064]: 2026-02-02 09:37:54.911264885 +0000 UTC m=+0.022197760 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:37:55 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:37:55 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2f9a4d21bbc0c6fb39424aee2d815fb29b0810bcd309b65365d644be1437499/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:37:55 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2f9a4d21bbc0c6fb39424aee2d815fb29b0810bcd309b65365d644be1437499/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 04:37:55 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2f9a4d21bbc0c6fb39424aee2d815fb29b0810bcd309b65365d644be1437499/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:37:55 np0005604790 podman[75064]: 2026-02-02 09:37:55.023239127 +0000 UTC m=+0.134171992 container init a1884819af1750be178c94b8074ade763bb68898bca83944842e893b4d40b7fe (image=quay.io/ceph/ceph:v19, name=serene_sanderson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb  2 04:37:55 np0005604790 podman[75064]: 2026-02-02 09:37:55.027990433 +0000 UTC m=+0.138923258 container start a1884819af1750be178c94b8074ade763bb68898bca83944842e893b4d40b7fe (image=quay.io/ceph/ceph:v19, name=serene_sanderson, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Feb  2 04:37:55 np0005604790 podman[75064]: 2026-02-02 09:37:55.038346238 +0000 UTC m=+0.149279093 container attach a1884819af1750be178c94b8074ade763bb68898bca83944842e893b4d40b7fe (image=quay.io/ceph/ceph:v19, name=serene_sanderson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:37:55 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Feb  2 04:37:55 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/770039458' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Feb  2 04:37:55 np0005604790 serene_sanderson[75081]: 
Feb  2 04:37:55 np0005604790 serene_sanderson[75081]: {
Feb  2 04:37:55 np0005604790 serene_sanderson[75081]:    "fsid": "d241d473-9fcb-5f74-b163-f1ca4454e7f1",
Feb  2 04:37:55 np0005604790 serene_sanderson[75081]:    "health": {
Feb  2 04:37:55 np0005604790 serene_sanderson[75081]:        "status": "HEALTH_OK",
Feb  2 04:37:55 np0005604790 serene_sanderson[75081]:        "checks": {},
Feb  2 04:37:55 np0005604790 serene_sanderson[75081]:        "mutes": []
Feb  2 04:37:55 np0005604790 serene_sanderson[75081]:    },
Feb  2 04:37:55 np0005604790 serene_sanderson[75081]:    "election_epoch": 5,
Feb  2 04:37:55 np0005604790 serene_sanderson[75081]:    "quorum": [
Feb  2 04:37:55 np0005604790 serene_sanderson[75081]:        0
Feb  2 04:37:55 np0005604790 serene_sanderson[75081]:    ],
Feb  2 04:37:55 np0005604790 serene_sanderson[75081]:    "quorum_names": [
Feb  2 04:37:55 np0005604790 serene_sanderson[75081]:        "compute-0"
Feb  2 04:37:55 np0005604790 serene_sanderson[75081]:    ],
Feb  2 04:37:55 np0005604790 serene_sanderson[75081]:    "quorum_age": 9,
Feb  2 04:37:55 np0005604790 serene_sanderson[75081]:    "monmap": {
Feb  2 04:37:55 np0005604790 serene_sanderson[75081]:        "epoch": 1,
Feb  2 04:37:55 np0005604790 serene_sanderson[75081]:        "min_mon_release_name": "squid",
Feb  2 04:37:55 np0005604790 serene_sanderson[75081]:        "num_mons": 1
Feb  2 04:37:55 np0005604790 serene_sanderson[75081]:    },
Feb  2 04:37:55 np0005604790 serene_sanderson[75081]:    "osdmap": {
Feb  2 04:37:55 np0005604790 serene_sanderson[75081]:        "epoch": 1,
Feb  2 04:37:55 np0005604790 serene_sanderson[75081]:        "num_osds": 0,
Feb  2 04:37:55 np0005604790 serene_sanderson[75081]:        "num_up_osds": 0,
Feb  2 04:37:55 np0005604790 serene_sanderson[75081]:        "osd_up_since": 0,
Feb  2 04:37:55 np0005604790 serene_sanderson[75081]:        "num_in_osds": 0,
Feb  2 04:37:55 np0005604790 serene_sanderson[75081]:        "osd_in_since": 0,
Feb  2 04:37:55 np0005604790 serene_sanderson[75081]:        "num_remapped_pgs": 0
Feb  2 04:37:55 np0005604790 serene_sanderson[75081]:    },
Feb  2 04:37:55 np0005604790 serene_sanderson[75081]:    "pgmap": {
Feb  2 04:37:55 np0005604790 serene_sanderson[75081]:        "pgs_by_state": [],
Feb  2 04:37:55 np0005604790 serene_sanderson[75081]:        "num_pgs": 0,
Feb  2 04:37:55 np0005604790 serene_sanderson[75081]:        "num_pools": 0,
Feb  2 04:37:55 np0005604790 serene_sanderson[75081]:        "num_objects": 0,
Feb  2 04:37:55 np0005604790 serene_sanderson[75081]:        "data_bytes": 0,
Feb  2 04:37:55 np0005604790 serene_sanderson[75081]:        "bytes_used": 0,
Feb  2 04:37:55 np0005604790 serene_sanderson[75081]:        "bytes_avail": 0,
Feb  2 04:37:55 np0005604790 serene_sanderson[75081]:        "bytes_total": 0
Feb  2 04:37:55 np0005604790 serene_sanderson[75081]:    },
Feb  2 04:37:55 np0005604790 serene_sanderson[75081]:    "fsmap": {
Feb  2 04:37:55 np0005604790 serene_sanderson[75081]:        "epoch": 1,
Feb  2 04:37:55 np0005604790 serene_sanderson[75081]:        "btime": "2026-02-02T09:37:43:907997+0000",
Feb  2 04:37:55 np0005604790 serene_sanderson[75081]:        "by_rank": [],
Feb  2 04:37:55 np0005604790 serene_sanderson[75081]:        "up:standby": 0
Feb  2 04:37:55 np0005604790 serene_sanderson[75081]:    },
Feb  2 04:37:55 np0005604790 serene_sanderson[75081]:    "mgrmap": {
Feb  2 04:37:55 np0005604790 serene_sanderson[75081]:        "available": true,
Feb  2 04:37:55 np0005604790 serene_sanderson[75081]:        "num_standbys": 0,
Feb  2 04:37:55 np0005604790 serene_sanderson[75081]:        "modules": [
Feb  2 04:37:55 np0005604790 serene_sanderson[75081]:            "iostat",
Feb  2 04:37:55 np0005604790 serene_sanderson[75081]:            "nfs",
Feb  2 04:37:55 np0005604790 serene_sanderson[75081]:            "restful"
Feb  2 04:37:55 np0005604790 serene_sanderson[75081]:        ],
Feb  2 04:37:55 np0005604790 serene_sanderson[75081]:        "services": {}
Feb  2 04:37:55 np0005604790 serene_sanderson[75081]:    },
Feb  2 04:37:55 np0005604790 serene_sanderson[75081]:    "servicemap": {
Feb  2 04:37:55 np0005604790 serene_sanderson[75081]:        "epoch": 1,
Feb  2 04:37:55 np0005604790 serene_sanderson[75081]:        "modified": "2026-02-02T09:37:43.909949+0000",
Feb  2 04:37:55 np0005604790 serene_sanderson[75081]:        "services": {}
Feb  2 04:37:55 np0005604790 serene_sanderson[75081]:    },
Feb  2 04:37:55 np0005604790 serene_sanderson[75081]:    "progress_events": {}
Feb  2 04:37:55 np0005604790 serene_sanderson[75081]: }
Feb  2 04:37:55 np0005604790 systemd[1]: libpod-a1884819af1750be178c94b8074ade763bb68898bca83944842e893b4d40b7fe.scope: Deactivated successfully.
Feb  2 04:37:55 np0005604790 podman[75064]: 2026-02-02 09:37:55.447142958 +0000 UTC m=+0.558075773 container died a1884819af1750be178c94b8074ade763bb68898bca83944842e893b4d40b7fe (image=quay.io/ceph/ceph:v19, name=serene_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Feb  2 04:37:55 np0005604790 systemd[1]: var-lib-containers-storage-overlay-e2f9a4d21bbc0c6fb39424aee2d815fb29b0810bcd309b65365d644be1437499-merged.mount: Deactivated successfully.
Feb  2 04:37:55 np0005604790 podman[75064]: 2026-02-02 09:37:55.497152175 +0000 UTC m=+0.608084990 container remove a1884819af1750be178c94b8074ade763bb68898bca83944842e893b4d40b7fe (image=quay.io/ceph/ceph:v19, name=serene_sanderson, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb  2 04:37:55 np0005604790 systemd[1]: libpod-conmon-a1884819af1750be178c94b8074ade763bb68898bca83944842e893b4d40b7fe.scope: Deactivated successfully.
Feb  2 04:37:55 np0005604790 podman[75116]: 2026-02-02 09:37:55.56101611 +0000 UTC m=+0.048844957 container create 16ecbf2f559da6b3841601eb8a31e754ed3c60bbfa62acd82a13065b19257525 (image=quay.io/ceph/ceph:v19, name=reverent_moser, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Feb  2 04:37:55 np0005604790 systemd[1]: Started libpod-conmon-16ecbf2f559da6b3841601eb8a31e754ed3c60bbfa62acd82a13065b19257525.scope.
Feb  2 04:37:55 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:37:55 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3654eecc1e79d15f035e6265edc9160cc2136d4edb3141ee7839cf418de354bb/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 04:37:55 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3654eecc1e79d15f035e6265edc9160cc2136d4edb3141ee7839cf418de354bb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:37:55 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3654eecc1e79d15f035e6265edc9160cc2136d4edb3141ee7839cf418de354bb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:37:55 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3654eecc1e79d15f035e6265edc9160cc2136d4edb3141ee7839cf418de354bb/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:37:55 np0005604790 podman[75116]: 2026-02-02 09:37:55.531551958 +0000 UTC m=+0.019380835 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:37:55 np0005604790 podman[75116]: 2026-02-02 09:37:55.648506712 +0000 UTC m=+0.136335609 container init 16ecbf2f559da6b3841601eb8a31e754ed3c60bbfa62acd82a13065b19257525 (image=quay.io/ceph/ceph:v19, name=reverent_moser, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb  2 04:37:55 np0005604790 podman[75116]: 2026-02-02 09:37:55.657829849 +0000 UTC m=+0.145658736 container start 16ecbf2f559da6b3841601eb8a31e754ed3c60bbfa62acd82a13065b19257525 (image=quay.io/ceph/ceph:v19, name=reverent_moser, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb  2 04:37:55 np0005604790 podman[75116]: 2026-02-02 09:37:55.668590375 +0000 UTC m=+0.156419232 container attach 16ecbf2f559da6b3841601eb8a31e754ed3c60bbfa62acd82a13065b19257525 (image=quay.io/ceph/ceph:v19, name=reverent_moser, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True)
Feb  2 04:37:56 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Feb  2 04:37:56 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2009427800' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Feb  2 04:37:56 np0005604790 reverent_moser[75132]: 
Feb  2 04:37:56 np0005604790 reverent_moser[75132]: [global]
Feb  2 04:37:56 np0005604790 reverent_moser[75132]: 	fsid = d241d473-9fcb-5f74-b163-f1ca4454e7f1
Feb  2 04:37:56 np0005604790 reverent_moser[75132]: 	mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
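
reverent_moser ran "config assimilate-conf", which ingests an INI-style ceph.conf and moves every option it can into the mon config database; the indented remainder echoed above (fsid, mon_host) is what must stay in a local file because it is needed before the mons are reachable. A sketch of the same step, assuming the documented -i <file> form of the command:

    import subprocess
    import tempfile

    # Assimilate a minimal conf; the printed remainder is what still
    # belongs in /etc/ceph/ceph.conf (fsid, mon_host).
    CONF = (
        "[global]\n"
        "fsid = d241d473-9fcb-5f74-b163-f1ca4454e7f1\n"
        "mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]\n"
    )
    with tempfile.NamedTemporaryFile("w", suffix=".conf") as f:
        f.write(CONF)
        f.flush()
        result = subprocess.run(
            ["ceph", "config", "assimilate-conf", "-i", f.name],
            capture_output=True, check=True, text=True,
        )
    print(result.stdout)
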
Feb  2 04:37:56 np0005604790 systemd[1]: libpod-16ecbf2f559da6b3841601eb8a31e754ed3c60bbfa62acd82a13065b19257525.scope: Deactivated successfully.
Feb  2 04:37:56 np0005604790 podman[75116]: 2026-02-02 09:37:56.022705083 +0000 UTC m=+0.510533920 container died 16ecbf2f559da6b3841601eb8a31e754ed3c60bbfa62acd82a13065b19257525 (image=quay.io/ceph/ceph:v19, name=reverent_moser, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Feb  2 04:37:56 np0005604790 systemd[1]: var-lib-containers-storage-overlay-3654eecc1e79d15f035e6265edc9160cc2136d4edb3141ee7839cf418de354bb-merged.mount: Deactivated successfully.
Feb  2 04:37:56 np0005604790 podman[75116]: 2026-02-02 09:37:56.057962048 +0000 UTC m=+0.545790895 container remove 16ecbf2f559da6b3841601eb8a31e754ed3c60bbfa62acd82a13065b19257525 (image=quay.io/ceph/ceph:v19, name=reverent_moser, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:37:56 np0005604790 systemd[1]: libpod-conmon-16ecbf2f559da6b3841601eb8a31e754ed3c60bbfa62acd82a13065b19257525.scope: Deactivated successfully.
Feb  2 04:37:56 np0005604790 podman[75170]: 2026-02-02 09:37:56.108797308 +0000 UTC m=+0.036780498 container create aa5e985099d9dc9f747a3574f63345e008fdf1c9085abfd8abb71026e23177d0 (image=quay.io/ceph/ceph:v19, name=heuristic_vaughan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 04:37:56 np0005604790 systemd[1]: Started libpod-conmon-aa5e985099d9dc9f747a3574f63345e008fdf1c9085abfd8abb71026e23177d0.scope.
Feb  2 04:37:56 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:37:56 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff12652c471acc184ee29bdbb3d583215ab973a05b183aaaccfcde29676f243d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:37:56 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff12652c471acc184ee29bdbb3d583215ab973a05b183aaaccfcde29676f243d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 04:37:56 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff12652c471acc184ee29bdbb3d583215ab973a05b183aaaccfcde29676f243d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:37:56 np0005604790 podman[75170]: 2026-02-02 09:37:56.183236243 +0000 UTC m=+0.111219513 container init aa5e985099d9dc9f747a3574f63345e008fdf1c9085abfd8abb71026e23177d0 (image=quay.io/ceph/ceph:v19, name=heuristic_vaughan, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb  2 04:37:56 np0005604790 podman[75170]: 2026-02-02 09:37:56.092974338 +0000 UTC m=+0.020957548 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:37:56 np0005604790 podman[75170]: 2026-02-02 09:37:56.191003839 +0000 UTC m=+0.118987069 container start aa5e985099d9dc9f747a3574f63345e008fdf1c9085abfd8abb71026e23177d0 (image=quay.io/ceph/ceph:v19, name=heuristic_vaughan, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb  2 04:37:56 np0005604790 podman[75170]: 2026-02-02 09:37:56.194959374 +0000 UTC m=+0.122942594 container attach aa5e985099d9dc9f747a3574f63345e008fdf1c9085abfd8abb71026e23177d0 (image=quay.io/ceph/ceph:v19, name=heuristic_vaughan, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:37:56 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0)
Feb  2 04:37:56 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/696662589' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Feb  2 04:37:56 np0005604790 ceph-mgr[74785]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Feb  2 04:37:56 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/696662589' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Feb  2 04:37:57 np0005604790 ceph-mgr[74785]: mgr handle_mgr_map respawning because set of enabled modules changed!
Feb  2 04:37:57 np0005604790 ceph-mgr[74785]: mgr respawn  e: '/usr/bin/ceph-mgr'
Feb  2 04:37:57 np0005604790 ceph-mgr[74785]: mgr respawn  0: '/usr/bin/ceph-mgr'
Feb  2 04:37:57 np0005604790 ceph-mgr[74785]: mgr respawn  1: '-n'
Feb  2 04:37:57 np0005604790 ceph-mgr[74785]: mgr respawn  2: 'mgr.compute-0.djvyfo'
Feb  2 04:37:57 np0005604790 ceph-mgr[74785]: mgr respawn  3: '-f'
Feb  2 04:37:57 np0005604790 ceph-mgr[74785]: mgr respawn  4: '--setuser'
Feb  2 04:37:57 np0005604790 ceph-mgr[74785]: mgr respawn  5: 'ceph'
Feb  2 04:37:57 np0005604790 ceph-mgr[74785]: mgr respawn  6: '--setgroup'
Feb  2 04:37:57 np0005604790 ceph-mgr[74785]: mgr respawn  7: 'ceph'
Feb  2 04:37:57 np0005604790 ceph-mgr[74785]: mgr respawn  8: '--default-log-to-file=false'
Feb  2 04:37:57 np0005604790 ceph-mgr[74785]: mgr respawn  9: '--default-log-to-journald=true'
Feb  2 04:37:57 np0005604790 ceph-mgr[74785]: mgr respawn  10: '--default-log-to-stderr=false'
Feb  2 04:37:57 np0005604790 ceph-mgr[74785]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Feb  2 04:37:57 np0005604790 ceph-mgr[74785]: mgr respawn  exe_path /proc/self/exe
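
Enabling cephadm changed the enabled-module set in the mgrmap, so the active mgr re-execs itself in place via /proc/self/exe with the argv echoed above; the ceph-mgr lines a moment later (version banner, pid 2, modules loading again) are the re-exec'd process starting over. Whether the module actually landed can be verified afterwards, e.g.:

    import json
    import subprocess

    # After the respawn, cephadm should be among the enabled modules.
    mods = json.loads(subprocess.run(
        ["ceph", "mgr", "module", "ls", "--format", "json"],
        capture_output=True, check=True, text=True,
    ).stdout)
    print("cephadm" in mods["enabled_modules"])
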
Feb  2 04:37:57 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.djvyfo(active, since 4s)
Feb  2 04:37:57 np0005604790 ceph-mon[74489]: from='client.? 192.168.122.100:0/2009427800' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Feb  2 04:37:57 np0005604790 ceph-mon[74489]: from='client.? 192.168.122.100:0/696662589' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Feb  2 04:37:57 np0005604790 systemd[1]: libpod-aa5e985099d9dc9f747a3574f63345e008fdf1c9085abfd8abb71026e23177d0.scope: Deactivated successfully.
Feb  2 04:37:57 np0005604790 conmon[75186]: conmon aa5e985099d9dc9f747a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-aa5e985099d9dc9f747a3574f63345e008fdf1c9085abfd8abb71026e23177d0.scope/container/memory.events
Feb  2 04:37:57 np0005604790 podman[75170]: 2026-02-02 09:37:57.016714984 +0000 UTC m=+0.944698174 container died aa5e985099d9dc9f747a3574f63345e008fdf1c9085abfd8abb71026e23177d0 (image=quay.io/ceph/ceph:v19, name=heuristic_vaughan, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:37:57 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ignoring --setuser ceph since I am not root
Feb  2 04:37:57 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ignoring --setgroup ceph since I am not root
Feb  2 04:37:57 np0005604790 ceph-mgr[74785]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Feb  2 04:37:57 np0005604790 ceph-mgr[74785]: pidfile_write: ignore empty --pid-file
Feb  2 04:37:57 np0005604790 systemd[1]: var-lib-containers-storage-overlay-ff12652c471acc184ee29bdbb3d583215ab973a05b183aaaccfcde29676f243d-merged.mount: Deactivated successfully.
Feb  2 04:37:57 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'alerts'
Feb  2 04:37:57 np0005604790 podman[75170]: 2026-02-02 09:37:57.081215266 +0000 UTC m=+1.009198456 container remove aa5e985099d9dc9f747a3574f63345e008fdf1c9085abfd8abb71026e23177d0 (image=quay.io/ceph/ceph:v19, name=heuristic_vaughan, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb  2 04:37:57 np0005604790 systemd[1]: libpod-conmon-aa5e985099d9dc9f747a3574f63345e008fdf1c9085abfd8abb71026e23177d0.scope: Deactivated successfully.
Feb  2 04:37:57 np0005604790 podman[75243]: 2026-02-02 09:37:57.143961071 +0000 UTC m=+0.051949650 container create dfc30da7d48ec67e382451dc6e87911c21143712bf5a28080290b08f44a851d1 (image=quay.io/ceph/ceph:v19, name=youthful_hamilton, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb  2 04:37:57 np0005604790 ceph-mgr[74785]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Feb  2 04:37:57 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'balancer'
Feb  2 04:37:57 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:37:57.168+0000 7f0990053140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Feb  2 04:37:57 np0005604790 systemd[1]: Started libpod-conmon-dfc30da7d48ec67e382451dc6e87911c21143712bf5a28080290b08f44a851d1.scope.
Feb  2 04:37:57 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:37:57 np0005604790 podman[75243]: 2026-02-02 09:37:57.111867329 +0000 UTC m=+0.019855888 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:37:57 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5f6241bcc9df5b34b37eb467c4e1a84d1632d4fa7d5cea40da12b3999f233db/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:37:57 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5f6241bcc9df5b34b37eb467c4e1a84d1632d4fa7d5cea40da12b3999f233db/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 04:37:57 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5f6241bcc9df5b34b37eb467c4e1a84d1632d4fa7d5cea40da12b3999f233db/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:37:57 np0005604790 podman[75243]: 2026-02-02 09:37:57.229900292 +0000 UTC m=+0.137888901 container init dfc30da7d48ec67e382451dc6e87911c21143712bf5a28080290b08f44a851d1 (image=quay.io/ceph/ceph:v19, name=youthful_hamilton, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:37:57 np0005604790 podman[75243]: 2026-02-02 09:37:57.23395297 +0000 UTC m=+0.141941549 container start dfc30da7d48ec67e382451dc6e87911c21143712bf5a28080290b08f44a851d1 (image=quay.io/ceph/ceph:v19, name=youthful_hamilton, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Feb  2 04:37:57 np0005604790 podman[75243]: 2026-02-02 09:37:57.237650528 +0000 UTC m=+0.145639177 container attach dfc30da7d48ec67e382451dc6e87911c21143712bf5a28080290b08f44a851d1 (image=quay.io/ceph/ceph:v19, name=youthful_hamilton, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Feb  2 04:37:57 np0005604790 ceph-mgr[74785]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Feb  2 04:37:57 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'cephadm'
Feb  2 04:37:57 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:37:57.243+0000 7f0990053140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Feb  2 04:37:57 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Feb  2 04:37:57 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3605892912' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Feb  2 04:37:57 np0005604790 youthful_hamilton[75259]: {
Feb  2 04:37:57 np0005604790 youthful_hamilton[75259]:    "epoch": 5,
Feb  2 04:37:57 np0005604790 youthful_hamilton[75259]:    "available": true,
Feb  2 04:37:57 np0005604790 youthful_hamilton[75259]:    "active_name": "compute-0.djvyfo",
Feb  2 04:37:57 np0005604790 youthful_hamilton[75259]:    "num_standby": 0
Feb  2 04:37:57 np0005604790 youthful_hamilton[75259]: }
Feb  2 04:37:57 np0005604790 systemd[1]: libpod-dfc30da7d48ec67e382451dc6e87911c21143712bf5a28080290b08f44a851d1.scope: Deactivated successfully.
Feb  2 04:37:57 np0005604790 podman[75243]: 2026-02-02 09:37:57.636099683 +0000 UTC m=+0.544088242 container died dfc30da7d48ec67e382451dc6e87911c21143712bf5a28080290b08f44a851d1 (image=quay.io/ceph/ceph:v19, name=youthful_hamilton, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Feb  2 04:37:57 np0005604790 systemd[1]: var-lib-containers-storage-overlay-b5f6241bcc9df5b34b37eb467c4e1a84d1632d4fa7d5cea40da12b3999f233db-merged.mount: Deactivated successfully.
Feb  2 04:37:57 np0005604790 podman[75243]: 2026-02-02 09:37:57.674748169 +0000 UTC m=+0.582736718 container remove dfc30da7d48ec67e382451dc6e87911c21143712bf5a28080290b08f44a851d1 (image=quay.io/ceph/ceph:v19, name=youthful_hamilton, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:37:57 np0005604790 systemd[1]: libpod-conmon-dfc30da7d48ec67e382451dc6e87911c21143712bf5a28080290b08f44a851d1.scope: Deactivated successfully.
Feb  2 04:37:57 np0005604790 podman[75309]: 2026-02-02 09:37:57.722524917 +0000 UTC m=+0.035985516 container create a51d1aff54988de0eee374932ab748385d847d304d1a4cac4c7a2cabde01ace7 (image=quay.io/ceph/ceph:v19, name=nervous_leakey, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 04:37:57 np0005604790 systemd[1]: Started libpod-conmon-a51d1aff54988de0eee374932ab748385d847d304d1a4cac4c7a2cabde01ace7.scope.
Feb  2 04:37:57 np0005604790 podman[75309]: 2026-02-02 09:37:57.705516145 +0000 UTC m=+0.018976744 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:37:57 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:37:57 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74d97130ed986527124a5cab889f164411d6ca51b78faae4d738569a4688a6e3/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 04:37:57 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74d97130ed986527124a5cab889f164411d6ca51b78faae4d738569a4688a6e3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:37:57 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74d97130ed986527124a5cab889f164411d6ca51b78faae4d738569a4688a6e3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:37:57 np0005604790 podman[75309]: 2026-02-02 09:37:57.826004043 +0000 UTC m=+0.139464632 container init a51d1aff54988de0eee374932ab748385d847d304d1a4cac4c7a2cabde01ace7 (image=quay.io/ceph/ceph:v19, name=nervous_leakey, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 04:37:57 np0005604790 podman[75309]: 2026-02-02 09:37:57.832949697 +0000 UTC m=+0.146410316 container start a51d1aff54988de0eee374932ab748385d847d304d1a4cac4c7a2cabde01ace7 (image=quay.io/ceph/ceph:v19, name=nervous_leakey, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 04:37:57 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'crash'
Feb  2 04:37:57 np0005604790 podman[75309]: 2026-02-02 09:37:57.865307706 +0000 UTC m=+0.178768305 container attach a51d1aff54988de0eee374932ab748385d847d304d1a4cac4c7a2cabde01ace7 (image=quay.io/ceph/ceph:v19, name=nervous_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 04:37:57 np0005604790 ceph-mgr[74785]: mgr[py] Module crash has missing NOTIFY_TYPES member
Feb  2 04:37:57 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'dashboard'
Feb  2 04:37:57 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:37:57.918+0000 7f0990053140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Feb  2 04:37:58 np0005604790 ceph-mon[74489]: from='client.? 192.168.122.100:0/696662589' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Feb  2 04:37:58 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'devicehealth'
Feb  2 04:37:58 np0005604790 ceph-mgr[74785]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Feb  2 04:37:58 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'diskprediction_local'
Feb  2 04:37:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:37:58.440+0000 7f0990053140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Feb  2 04:37:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Feb  2 04:37:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Feb  2 04:37:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]:  from numpy import show_config as show_numpy_config
Feb  2 04:37:58 np0005604790 ceph-mgr[74785]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Feb  2 04:37:58 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'influx'
Feb  2 04:37:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:37:58.576+0000 7f0990053140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Feb  2 04:37:58 np0005604790 ceph-mgr[74785]: mgr[py] Module influx has missing NOTIFY_TYPES member
Feb  2 04:37:58 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'insights'
Feb  2 04:37:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:37:58.635+0000 7f0990053140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Feb  2 04:37:58 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'iostat'
Feb  2 04:37:58 np0005604790 ceph-mgr[74785]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Feb  2 04:37:58 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'k8sevents'
Feb  2 04:37:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:37:58.752+0000 7f0990053140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Feb  2 04:37:59 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'localpool'
Feb  2 04:37:59 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'mds_autoscaler'
Feb  2 04:37:59 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'mirroring'
Feb  2 04:37:59 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'nfs'
Feb  2 04:37:59 np0005604790 ceph-mgr[74785]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Feb  2 04:37:59 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'orchestrator'
Feb  2 04:37:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:37:59.596+0000 7f0990053140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Feb  2 04:37:59 np0005604790 ceph-mgr[74785]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Feb  2 04:37:59 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'osd_perf_query'
Feb  2 04:37:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:37:59.792+0000 7f0990053140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Feb  2 04:37:59 np0005604790 ceph-mgr[74785]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Feb  2 04:37:59 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'osd_support'
Feb  2 04:37:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:37:59.857+0000 7f0990053140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Feb  2 04:37:59 np0005604790 ceph-mgr[74785]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Feb  2 04:37:59 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'pg_autoscaler'
Feb  2 04:37:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:37:59.922+0000 7f0990053140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Feb  2 04:38:00 np0005604790 ceph-mgr[74785]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Feb  2 04:38:00 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'progress'
Feb  2 04:38:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:38:00.015+0000 7f0990053140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Feb  2 04:38:00 np0005604790 ceph-mgr[74785]: mgr[py] Module progress has missing NOTIFY_TYPES member
Feb  2 04:38:00 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'prometheus'
Feb  2 04:38:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:38:00.090+0000 7f0990053140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Feb  2 04:38:00 np0005604790 ceph-mgr[74785]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Feb  2 04:38:00 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'rbd_support'
Feb  2 04:38:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:38:00.401+0000 7f0990053140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Feb  2 04:38:00 np0005604790 ceph-mgr[74785]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Feb  2 04:38:00 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'restful'
Feb  2 04:38:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:38:00.485+0000 7f0990053140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Feb  2 04:38:00 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'rgw'
Feb  2 04:38:00 np0005604790 ceph-mgr[74785]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Feb  2 04:38:00 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'rook'
Feb  2 04:38:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:38:00.904+0000 7f0990053140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Feb  2 04:38:01 np0005604790 ceph-mgr[74785]: mgr[py] Module rook has missing NOTIFY_TYPES member
Feb  2 04:38:01 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'selftest'
Feb  2 04:38:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:38:01.403+0000 7f0990053140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Feb  2 04:38:01 np0005604790 ceph-mgr[74785]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Feb  2 04:38:01 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'snap_schedule'
Feb  2 04:38:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:38:01.479+0000 7f0990053140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Feb  2 04:38:01 np0005604790 ceph-mgr[74785]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Feb  2 04:38:01 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'stats'
Feb  2 04:38:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:38:01.549+0000 7f0990053140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Feb  2 04:38:01 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'status'
Feb  2 04:38:01 np0005604790 ceph-mgr[74785]: mgr[py] Module status has missing NOTIFY_TYPES member
Feb  2 04:38:01 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'telegraf'
Feb  2 04:38:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:38:01.682+0000 7f0990053140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Feb  2 04:38:01 np0005604790 ceph-mgr[74785]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Feb  2 04:38:01 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'telemetry'
Feb  2 04:38:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:38:01.743+0000 7f0990053140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Feb  2 04:38:01 np0005604790 ceph-mgr[74785]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Feb  2 04:38:01 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'test_orchestrator'
Feb  2 04:38:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:38:01.885+0000 7f0990053140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Feb  2 04:38:02 np0005604790 ceph-mgr[74785]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Feb  2 04:38:02 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'volumes'
Feb  2 04:38:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:38:02.118+0000 7f0990053140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Feb  2 04:38:02 np0005604790 ceph-mgr[74785]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Feb  2 04:38:02 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'zabbix'
Feb  2 04:38:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:38:02.354+0000 7f0990053140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Feb  2 04:38:02 np0005604790 ceph-mgr[74785]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Feb  2 04:38:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:38:02.419+0000 7f0990053140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Feb  2 04:38:02 np0005604790 ceph-mon[74489]: log_channel(cluster) log [INF] : Active manager daemon compute-0.djvyfo restarted
Feb  2 04:38:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Feb  2 04:38:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Feb  2 04:38:02 np0005604790 ceph-mon[74489]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.djvyfo
Feb  2 04:38:02 np0005604790 ceph-mgr[74785]: ms_deliver_dispatch: unhandled message 0x56024f538d00 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Feb  2 04:38:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Feb  2 04:38:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Feb  2 04:38:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Feb  2 04:38:02 np0005604790 ceph-mgr[74785]: mgr handle_mgr_map Activating!
Feb  2 04:38:02 np0005604790 ceph-mgr[74785]: mgr handle_mgr_map I am now activating
Feb  2 04:38:02 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Feb  2 04:38:02 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.djvyfo(active, starting, since 0.0275283s)
Feb  2 04:38:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Feb  2 04:38:02 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Feb  2 04:38:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.djvyfo", "id": "compute-0.djvyfo"} v 0)
Feb  2 04:38:02 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "mgr metadata", "who": "compute-0.djvyfo", "id": "compute-0.djvyfo"}]: dispatch
Feb  2 04:38:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Feb  2 04:38:02 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "mds metadata"}]: dispatch
Feb  2 04:38:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).mds e1 all = 1
Feb  2 04:38:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Feb  2 04:38:02 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd metadata"}]: dispatch
Feb  2 04:38:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Feb  2 04:38:02 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "mon metadata"}]: dispatch
Feb  2 04:38:02 np0005604790 ceph-mgr[74785]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 04:38:02 np0005604790 ceph-mgr[74785]: mgr load Constructed class from module: balancer
Feb  2 04:38:02 np0005604790 ceph-mgr[74785]: [balancer INFO root] Starting
Feb  2 04:38:02 np0005604790 ceph-mon[74489]: log_channel(cluster) log [INF] : Manager daemon compute-0.djvyfo is now available
Feb  2 04:38:02 np0005604790 ceph-mgr[74785]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 04:38:02 np0005604790 ceph-mgr[74785]: [balancer INFO root] Optimize plan auto_2026-02-02_09:38:02
Feb  2 04:38:02 np0005604790 ceph-mgr[74785]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 04:38:02 np0005604790 ceph-mgr[74785]: [balancer INFO root] do_upmap
Feb  2 04:38:02 np0005604790 ceph-mgr[74785]: [balancer INFO root] No pools available
Feb  2 04:38:02 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Feb  2 04:38:02 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Feb  2 04:38:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0)
Feb  2 04:38:02 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0)
Feb  2 04:38:02 np0005604790 ceph-mon[74489]: Active manager daemon compute-0.djvyfo restarted
Feb  2 04:38:02 np0005604790 ceph-mon[74489]: Activating manager daemon compute-0.djvyfo
Feb  2 04:38:02 np0005604790 ceph-mon[74489]: Manager daemon compute-0.djvyfo is now available
Feb  2 04:38:02 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:02 np0005604790 ceph-mgr[74785]: mgr load Constructed class from module: cephadm
Feb  2 04:38:02 np0005604790 ceph-mgr[74785]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 04:38:02 np0005604790 ceph-mgr[74785]: mgr load Constructed class from module: crash
Feb  2 04:38:02 np0005604790 ceph-mgr[74785]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 04:38:02 np0005604790 ceph-mgr[74785]: mgr load Constructed class from module: devicehealth
Feb  2 04:38:02 np0005604790 ceph-mgr[74785]: [devicehealth INFO root] Starting
Feb  2 04:38:02 np0005604790 ceph-mgr[74785]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 04:38:02 np0005604790 ceph-mgr[74785]: mgr load Constructed class from module: iostat
Feb  2 04:38:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Feb  2 04:38:02 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Feb  2 04:38:02 np0005604790 ceph-mgr[74785]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 04:38:02 np0005604790 ceph-mgr[74785]: mgr load Constructed class from module: nfs
Feb  2 04:38:02 np0005604790 ceph-mgr[74785]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 04:38:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Feb  2 04:38:02 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Feb  2 04:38:02 np0005604790 ceph-mgr[74785]: mgr load Constructed class from module: orchestrator
Feb  2 04:38:02 np0005604790 ceph-mgr[74785]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 04:38:02 np0005604790 ceph-mgr[74785]: mgr load Constructed class from module: pg_autoscaler
Feb  2 04:38:02 np0005604790 ceph-mgr[74785]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 04:38:02 np0005604790 ceph-mgr[74785]: mgr load Constructed class from module: progress
Feb  2 04:38:02 np0005604790 ceph-mgr[74785]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 04:38:02 np0005604790 ceph-mgr[74785]: [progress INFO root] Loading...
Feb  2 04:38:02 np0005604790 ceph-mgr[74785]: [progress INFO root] No stored events to load
Feb  2 04:38:02 np0005604790 ceph-mgr[74785]: [progress INFO root] Loaded [] historic events
Feb  2 04:38:02 np0005604790 ceph-mgr[74785]: [progress INFO root] Loaded OSDMap, ready.
Feb  2 04:38:02 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 04:38:02 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] recovery thread starting
Feb  2 04:38:02 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] starting setup
Feb  2 04:38:02 np0005604790 ceph-mgr[74785]: mgr load Constructed class from module: rbd_support
Feb  2 04:38:02 np0005604790 ceph-mgr[74785]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 04:38:02 np0005604790 ceph-mgr[74785]: mgr load Constructed class from module: restful
Feb  2 04:38:02 np0005604790 ceph-mgr[74785]: [restful INFO root] server_addr: :: server_port: 8003
Feb  2 04:38:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.djvyfo/mirror_snapshot_schedule"} v 0)
Feb  2 04:38:02 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.djvyfo/mirror_snapshot_schedule"}]: dispatch
Feb  2 04:38:02 np0005604790 ceph-mgr[74785]: [restful WARNING root] server not running: no certificate configured
Feb  2 04:38:02 np0005604790 ceph-mgr[74785]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 04:38:02 np0005604790 ceph-mgr[74785]: mgr load Constructed class from module: status
Feb  2 04:38:02 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 04:38:02 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Feb  2 04:38:02 np0005604790 ceph-mgr[74785]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 04:38:02 np0005604790 ceph-mgr[74785]: mgr load Constructed class from module: telemetry
Feb  2 04:38:02 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] PerfHandler: starting
Feb  2 04:38:02 np0005604790 ceph-mgr[74785]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 04:38:02 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] TaskHandler: starting
Feb  2 04:38:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.djvyfo/trash_purge_schedule"} v 0)
Feb  2 04:38:02 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.djvyfo/trash_purge_schedule"}]: dispatch
Feb  2 04:38:02 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 04:38:02 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Feb  2 04:38:02 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] setup complete
Feb  2 04:38:02 np0005604790 ceph-mgr[74785]: mgr load Constructed class from module: volumes
Feb  2 04:38:03 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.cert.agent_endpoint_root_cert}] v 0)
Feb  2 04:38:03 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:03 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.key.agent_endpoint_key}] v 0)
Feb  2 04:38:03 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:03 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Feb  2 04:38:03 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.djvyfo(active, since 1.04657s)
Feb  2 04:38:03 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Feb  2 04:38:03 np0005604790 nervous_leakey[75325]: {
Feb  2 04:38:03 np0005604790 nervous_leakey[75325]:    "mgrmap_epoch": 7,
Feb  2 04:38:03 np0005604790 nervous_leakey[75325]:    "initialized": true
Feb  2 04:38:03 np0005604790 nervous_leakey[75325]: }
Feb  2 04:38:03 np0005604790 systemd[1]: libpod-a51d1aff54988de0eee374932ab748385d847d304d1a4cac4c7a2cabde01ace7.scope: Deactivated successfully.
Feb  2 04:38:03 np0005604790 podman[75309]: 2026-02-02 09:38:03.510458889 +0000 UTC m=+5.823919528 container died a51d1aff54988de0eee374932ab748385d847d304d1a4cac4c7a2cabde01ace7 (image=quay.io/ceph/ceph:v19, name=nervous_leakey, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb  2 04:38:03 np0005604790 ceph-mon[74489]: Found migration_current of "None". Setting to last migration.
Feb  2 04:38:03 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:03 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:03 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.djvyfo/mirror_snapshot_schedule"}]: dispatch
Feb  2 04:38:03 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.djvyfo/trash_purge_schedule"}]: dispatch
Feb  2 04:38:03 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:03 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:03 np0005604790 systemd[1]: var-lib-containers-storage-overlay-74d97130ed986527124a5cab889f164411d6ca51b78faae4d738569a4688a6e3-merged.mount: Deactivated successfully.
Feb  2 04:38:03 np0005604790 podman[75309]: 2026-02-02 09:38:03.576755609 +0000 UTC m=+5.890216238 container remove a51d1aff54988de0eee374932ab748385d847d304d1a4cac4c7a2cabde01ace7 (image=quay.io/ceph/ceph:v19, name=nervous_leakey, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb  2 04:38:03 np0005604790 systemd[1]: libpod-conmon-a51d1aff54988de0eee374932ab748385d847d304d1a4cac4c7a2cabde01ace7.scope: Deactivated successfully.
Feb  2 04:38:03 np0005604790 podman[75473]: 2026-02-02 09:38:03.650311841 +0000 UTC m=+0.051569750 container create 2038fce464b8b54d462469673139282ed281ba4bbcc4cca0925bf2b74134bb05 (image=quay.io/ceph/ceph:v19, name=infallible_fermat, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True)
Feb  2 04:38:03 np0005604790 systemd[1]: Started libpod-conmon-2038fce464b8b54d462469673139282ed281ba4bbcc4cca0925bf2b74134bb05.scope.
Feb  2 04:38:03 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:38:03 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abfcb8d79817a3bc86b1801da492260a9cb4e40925e259c8a2afbe003f43e236/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:38:03 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abfcb8d79817a3bc86b1801da492260a9cb4e40925e259c8a2afbe003f43e236/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 04:38:03 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abfcb8d79817a3bc86b1801da492260a9cb4e40925e259c8a2afbe003f43e236/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:38:03 np0005604790 podman[75473]: 2026-02-02 09:38:03.63407373 +0000 UTC m=+0.035331649 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:38:03 np0005604790 podman[75473]: 2026-02-02 09:38:03.747008607 +0000 UTC m=+0.148266566 container init 2038fce464b8b54d462469673139282ed281ba4bbcc4cca0925bf2b74134bb05 (image=quay.io/ceph/ceph:v19, name=infallible_fermat, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 04:38:03 np0005604790 podman[75473]: 2026-02-02 09:38:03.752128963 +0000 UTC m=+0.153386872 container start 2038fce464b8b54d462469673139282ed281ba4bbcc4cca0925bf2b74134bb05 (image=quay.io/ceph/ceph:v19, name=infallible_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Feb  2 04:38:03 np0005604790 podman[75473]: 2026-02-02 09:38:03.75577623 +0000 UTC m=+0.157034179 container attach 2038fce464b8b54d462469673139282ed281ba4bbcc4cca0925bf2b74134bb05 (image=quay.io/ceph/ceph:v19, name=infallible_fermat, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb  2 04:38:04 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.14134 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 04:38:04 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0)
Feb  2 04:38:04 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:04 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Feb  2 04:38:04 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Feb  2 04:38:04 np0005604790 systemd[1]: libpod-2038fce464b8b54d462469673139282ed281ba4bbcc4cca0925bf2b74134bb05.scope: Deactivated successfully.
Feb  2 04:38:04 np0005604790 podman[75473]: 2026-02-02 09:38:04.184391525 +0000 UTC m=+0.585649454 container died 2038fce464b8b54d462469673139282ed281ba4bbcc4cca0925bf2b74134bb05 (image=quay.io/ceph/ceph:v19, name=infallible_fermat, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Feb  2 04:38:04 np0005604790 ceph-mgr[74785]: [cephadm INFO cherrypy.error] [02/Feb/2026:09:38:04] ENGINE Bus STARTING
Feb  2 04:38:04 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : [02/Feb/2026:09:38:04] ENGINE Bus STARTING
Feb  2 04:38:04 np0005604790 systemd[1]: var-lib-containers-storage-overlay-abfcb8d79817a3bc86b1801da492260a9cb4e40925e259c8a2afbe003f43e236-merged.mount: Deactivated successfully.
Feb  2 04:38:04 np0005604790 podman[75473]: 2026-02-02 09:38:04.234047783 +0000 UTC m=+0.635305682 container remove 2038fce464b8b54d462469673139282ed281ba4bbcc4cca0925bf2b74134bb05 (image=quay.io/ceph/ceph:v19, name=infallible_fermat, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Feb  2 04:38:04 np0005604790 systemd[1]: libpod-conmon-2038fce464b8b54d462469673139282ed281ba4bbcc4cca0925bf2b74134bb05.scope: Deactivated successfully.
Feb  2 04:38:04 np0005604790 podman[75539]: 2026-02-02 09:38:04.308215712 +0000 UTC m=+0.055429192 container create a06a8693f90c056c2c7c82ff6e2e97f746d7e8df2d8841eb0135b9fb0fabee78 (image=quay.io/ceph/ceph:v19, name=nice_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 04:38:04 np0005604790 ceph-mgr[74785]: [cephadm INFO cherrypy.error] [02/Feb/2026:09:38:04] ENGINE Serving on https://192.168.122.100:7150
Feb  2 04:38:04 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : [02/Feb/2026:09:38:04] ENGINE Serving on https://192.168.122.100:7150
Feb  2 04:38:04 np0005604790 ceph-mgr[74785]: [cephadm INFO cherrypy.error] [02/Feb/2026:09:38:04] ENGINE Client ('192.168.122.100', 58502) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Feb  2 04:38:04 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : [02/Feb/2026:09:38:04] ENGINE Client ('192.168.122.100', 58502) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Feb  2 04:38:04 np0005604790 systemd[1]: Started libpod-conmon-a06a8693f90c056c2c7c82ff6e2e97f746d7e8df2d8841eb0135b9fb0fabee78.scope.
Feb  2 04:38:04 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:38:04 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccb59af48312e54f8cea682aaff71135061e694613003090ddf88816231e2529/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:38:04 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccb59af48312e54f8cea682aaff71135061e694613003090ddf88816231e2529/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:38:04 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccb59af48312e54f8cea682aaff71135061e694613003090ddf88816231e2529/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 04:38:04 np0005604790 podman[75539]: 2026-02-02 09:38:04.283459635 +0000 UTC m=+0.030673175 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:38:04 np0005604790 podman[75539]: 2026-02-02 09:38:04.403771388 +0000 UTC m=+0.150984908 container init a06a8693f90c056c2c7c82ff6e2e97f746d7e8df2d8841eb0135b9fb0fabee78 (image=quay.io/ceph/ceph:v19, name=nice_maxwell, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:38:04 np0005604790 podman[75539]: 2026-02-02 09:38:04.410599469 +0000 UTC m=+0.157812949 container start a06a8693f90c056c2c7c82ff6e2e97f746d7e8df2d8841eb0135b9fb0fabee78 (image=quay.io/ceph/ceph:v19, name=nice_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Feb  2 04:38:04 np0005604790 podman[75539]: 2026-02-02 09:38:04.414555074 +0000 UTC m=+0.161768564 container attach a06a8693f90c056c2c7c82ff6e2e97f746d7e8df2d8841eb0135b9fb0fabee78 (image=quay.io/ceph/ceph:v19, name=nice_maxwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:38:04 np0005604790 ceph-mgr[74785]: [cephadm INFO cherrypy.error] [02/Feb/2026:09:38:04] ENGINE Serving on http://192.168.122.100:8765
Feb  2 04:38:04 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : [02/Feb/2026:09:38:04] ENGINE Serving on http://192.168.122.100:8765
Feb  2 04:38:04 np0005604790 ceph-mgr[74785]: [cephadm INFO cherrypy.error] [02/Feb/2026:09:38:04] ENGINE Bus STARTED
Feb  2 04:38:04 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : [02/Feb/2026:09:38:04] ENGINE Bus STARTED
Feb  2 04:38:04 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Feb  2 04:38:04 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Feb  2 04:38:04 np0005604790 ceph-mgr[74785]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Feb  2 04:38:04 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:04 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 04:38:04 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0)
Feb  2 04:38:04 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:04 np0005604790 ceph-mgr[74785]: [cephadm INFO root] Set ssh ssh_user
Feb  2 04:38:04 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Feb  2 04:38:04 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0)
Feb  2 04:38:04 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:04 np0005604790 ceph-mgr[74785]: [cephadm INFO root] Set ssh ssh_config
Feb  2 04:38:04 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Feb  2 04:38:04 np0005604790 ceph-mgr[74785]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Feb  2 04:38:04 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Feb  2 04:38:04 np0005604790 nice_maxwell[75567]: ssh user set to ceph-admin. sudo will be used
Feb  2 04:38:04 np0005604790 systemd[1]: libpod-a06a8693f90c056c2c7c82ff6e2e97f746d7e8df2d8841eb0135b9fb0fabee78.scope: Deactivated successfully.
Feb  2 04:38:04 np0005604790 podman[75539]: 2026-02-02 09:38:04.791712384 +0000 UTC m=+0.538925924 container died a06a8693f90c056c2c7c82ff6e2e97f746d7e8df2d8841eb0135b9fb0fabee78 (image=quay.io/ceph/ceph:v19, name=nice_maxwell, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb  2 04:38:04 np0005604790 systemd[1]: var-lib-containers-storage-overlay-ccb59af48312e54f8cea682aaff71135061e694613003090ddf88816231e2529-merged.mount: Deactivated successfully.
Feb  2 04:38:04 np0005604790 podman[75539]: 2026-02-02 09:38:04.832949999 +0000 UTC m=+0.580163479 container remove a06a8693f90c056c2c7c82ff6e2e97f746d7e8df2d8841eb0135b9fb0fabee78 (image=quay.io/ceph/ceph:v19, name=nice_maxwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid)
Feb  2 04:38:04 np0005604790 systemd[1]: libpod-conmon-a06a8693f90c056c2c7c82ff6e2e97f746d7e8df2d8841eb0135b9fb0fabee78.scope: Deactivated successfully.
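This died/remove/Deactivated sequence is the normal teardown of a one-shot helper container: each CLI call in this bootstrap runs inside a short-lived quay.io/ceph/ceph:v19 container with a podman-generated name (nice_maxwell here, upbeat_varahamihira next). To observe the churn on this host, something like:

    # -a includes the already-exited one-shot helpers; names are podman-generated
    podman ps -a --filter ancestor=quay.io/ceph/ceph:v19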
Feb  2 04:38:04 np0005604790 podman[75605]: 2026-02-02 09:38:04.886095269 +0000 UTC m=+0.040012243 container create 7ccd0f5e363f9dc2071623ed824fb8716eadc0bb9a06f88bfdbafdf2c4fc12db (image=quay.io/ceph/ceph:v19, name=upbeat_varahamihira, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Feb  2 04:38:04 np0005604790 systemd[1]: Started libpod-conmon-7ccd0f5e363f9dc2071623ed824fb8716eadc0bb9a06f88bfdbafdf2c4fc12db.scope.
Feb  2 04:38:04 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:38:04 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b452abaad28f1b5c8667e97bb825c9e7087f6699135e2296a07bc6ac276c36ef/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Feb  2 04:38:04 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b452abaad28f1b5c8667e97bb825c9e7087f6699135e2296a07bc6ac276c36ef/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Feb  2 04:38:04 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b452abaad28f1b5c8667e97bb825c9e7087f6699135e2296a07bc6ac276c36ef/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:38:04 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b452abaad28f1b5c8667e97bb825c9e7087f6699135e2296a07bc6ac276c36ef/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 04:38:04 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b452abaad28f1b5c8667e97bb825c9e7087f6699135e2296a07bc6ac276c36ef/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:38:04 np0005604790 podman[75605]: 2026-02-02 09:38:04.868237825 +0000 UTC m=+0.022154789 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:38:04 np0005604790 podman[75605]: 2026-02-02 09:38:04.988409465 +0000 UTC m=+0.142326479 container init 7ccd0f5e363f9dc2071623ed824fb8716eadc0bb9a06f88bfdbafdf2c4fc12db (image=quay.io/ceph/ceph:v19, name=upbeat_varahamihira, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:38:04 np0005604790 podman[75605]: 2026-02-02 09:38:04.995477722 +0000 UTC m=+0.149394696 container start 7ccd0f5e363f9dc2071623ed824fb8716eadc0bb9a06f88bfdbafdf2c4fc12db (image=quay.io/ceph/ceph:v19, name=upbeat_varahamihira, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb  2 04:38:05 np0005604790 podman[75605]: 2026-02-02 09:38:04.999991472 +0000 UTC m=+0.153908506 container attach 7ccd0f5e363f9dc2071623ed824fb8716eadc0bb9a06f88bfdbafdf2c4fc12db (image=quay.io/ceph/ceph:v19, name=upbeat_varahamihira, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Feb  2 04:38:05 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.djvyfo(active, since 2s)
Feb  2 04:38:05 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 04:38:05 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0)
Feb  2 04:38:05 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:05 np0005604790 ceph-mgr[74785]: [cephadm INFO root] Set ssh ssh_identity_key
Feb  2 04:38:05 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Feb  2 04:38:05 np0005604790 ceph-mgr[74785]: [cephadm INFO root] Set ssh private key
Feb  2 04:38:05 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Set ssh private key
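set-priv-key stores the SSH identity in the mon config-key store under mgr/cephadm/ssh_identity_key, as the mon_command above shows. The key is passed in via a file; /tmp/cephadm-ssh-key below matches the bind mount visible in the earlier xfs remount messages, though the actual path on the calling host is not shown in this log:

    # Load the orchestrator's SSH private key from a file
    ceph cephadm set-priv-key -i /tmp/cephadm-ssh-key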
Feb  2 04:38:05 np0005604790 systemd[1]: libpod-7ccd0f5e363f9dc2071623ed824fb8716eadc0bb9a06f88bfdbafdf2c4fc12db.scope: Deactivated successfully.
Feb  2 04:38:05 np0005604790 podman[75605]: 2026-02-02 09:38:05.402443403 +0000 UTC m=+0.556360337 container died 7ccd0f5e363f9dc2071623ed824fb8716eadc0bb9a06f88bfdbafdf2c4fc12db (image=quay.io/ceph/ceph:v19, name=upbeat_varahamihira, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Feb  2 04:38:05 np0005604790 systemd[1]: var-lib-containers-storage-overlay-b452abaad28f1b5c8667e97bb825c9e7087f6699135e2296a07bc6ac276c36ef-merged.mount: Deactivated successfully.
Feb  2 04:38:05 np0005604790 podman[75605]: 2026-02-02 09:38:05.431690149 +0000 UTC m=+0.585607083 container remove 7ccd0f5e363f9dc2071623ed824fb8716eadc0bb9a06f88bfdbafdf2c4fc12db (image=quay.io/ceph/ceph:v19, name=upbeat_varahamihira, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Feb  2 04:38:05 np0005604790 systemd[1]: libpod-conmon-7ccd0f5e363f9dc2071623ed824fb8716eadc0bb9a06f88bfdbafdf2c4fc12db.scope: Deactivated successfully.
Feb  2 04:38:05 np0005604790 podman[75659]: 2026-02-02 09:38:05.487160692 +0000 UTC m=+0.042556171 container create 663df8b59f51394d8cc6864e87a418a2796451b292af04d9504a3041c6b28c3f (image=quay.io/ceph/ceph:v19, name=fervent_albattani, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:38:05 np0005604790 systemd[1]: Started libpod-conmon-663df8b59f51394d8cc6864e87a418a2796451b292af04d9504a3041c6b28c3f.scope.
Feb  2 04:38:05 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:38:05 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6efd144a81f0cdfe0735d344b75057c18a41eee475490985baed6936547e602/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Feb  2 04:38:05 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6efd144a81f0cdfe0735d344b75057c18a41eee475490985baed6936547e602/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Feb  2 04:38:05 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6efd144a81f0cdfe0735d344b75057c18a41eee475490985baed6936547e602/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:38:05 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6efd144a81f0cdfe0735d344b75057c18a41eee475490985baed6936547e602/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 04:38:05 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6efd144a81f0cdfe0735d344b75057c18a41eee475490985baed6936547e602/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:38:05 np0005604790 podman[75659]: 2026-02-02 09:38:05.465201049 +0000 UTC m=+0.020596608 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:38:05 np0005604790 podman[75659]: 2026-02-02 09:38:05.573153294 +0000 UTC m=+0.128548793 container init 663df8b59f51394d8cc6864e87a418a2796451b292af04d9504a3041c6b28c3f (image=quay.io/ceph/ceph:v19, name=fervent_albattani, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 04:38:05 np0005604790 podman[75659]: 2026-02-02 09:38:05.581249919 +0000 UTC m=+0.136645428 container start 663df8b59f51394d8cc6864e87a418a2796451b292af04d9504a3041c6b28c3f (image=quay.io/ceph/ceph:v19, name=fervent_albattani, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:38:05 np0005604790 podman[75659]: 2026-02-02 09:38:05.585277316 +0000 UTC m=+0.140672845 container attach 663df8b59f51394d8cc6864e87a418a2796451b292af04d9504a3041c6b28c3f (image=quay.io/ceph/ceph:v19, name=fervent_albattani, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 04:38:05 np0005604790 ceph-mon[74489]: [02/Feb/2026:09:38:04] ENGINE Bus STARTING
Feb  2 04:38:05 np0005604790 ceph-mon[74489]: [02/Feb/2026:09:38:04] ENGINE Serving on https://192.168.122.100:7150
Feb  2 04:38:05 np0005604790 ceph-mon[74489]: [02/Feb/2026:09:38:04] ENGINE Client ('192.168.122.100', 58502) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Feb  2 04:38:05 np0005604790 ceph-mon[74489]: [02/Feb/2026:09:38:04] ENGINE Serving on http://192.168.122.100:8765
Feb  2 04:38:05 np0005604790 ceph-mon[74489]: [02/Feb/2026:09:38:04] ENGINE Bus STARTED
Feb  2 04:38:05 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:05 np0005604790 ceph-mon[74489]: Set ssh ssh_user
Feb  2 04:38:05 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:05 np0005604790 ceph-mon[74489]: Set ssh ssh_config
Feb  2 04:38:05 np0005604790 ceph-mon[74489]: ssh user set to ceph-admin. sudo will be used
Feb  2 04:38:05 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:05 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019926426 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 04:38:05 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 04:38:05 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0)
Feb  2 04:38:05 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:05 np0005604790 ceph-mgr[74785]: [cephadm INFO root] Set ssh ssh_identity_pub
Feb  2 04:38:05 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
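The matching public half goes to mgr/cephadm/ssh_identity_pub the same way; again the file path is an assumption based on the container mounts logged above:

    # Load the corresponding SSH public key
    ceph cephadm set-pub-key -i /tmp/cephadm-ssh-key.pub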
Feb  2 04:38:05 np0005604790 systemd[1]: libpod-663df8b59f51394d8cc6864e87a418a2796451b292af04d9504a3041c6b28c3f.scope: Deactivated successfully.
Feb  2 04:38:05 np0005604790 podman[75659]: 2026-02-02 09:38:05.982548719 +0000 UTC m=+0.537944198 container died 663df8b59f51394d8cc6864e87a418a2796451b292af04d9504a3041c6b28c3f (image=quay.io/ceph/ceph:v19, name=fervent_albattani, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 04:38:06 np0005604790 systemd[1]: var-lib-containers-storage-overlay-c6efd144a81f0cdfe0735d344b75057c18a41eee475490985baed6936547e602-merged.mount: Deactivated successfully.
Feb  2 04:38:06 np0005604790 podman[75659]: 2026-02-02 09:38:06.021667428 +0000 UTC m=+0.577062907 container remove 663df8b59f51394d8cc6864e87a418a2796451b292af04d9504a3041c6b28c3f (image=quay.io/ceph/ceph:v19, name=fervent_albattani, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default)
Feb  2 04:38:06 np0005604790 systemd[1]: libpod-conmon-663df8b59f51394d8cc6864e87a418a2796451b292af04d9504a3041c6b28c3f.scope: Deactivated successfully.
Feb  2 04:38:06 np0005604790 podman[75714]: 2026-02-02 09:38:06.095748534 +0000 UTC m=+0.050631185 container create c9bec63ef04086375c1f308ccaf7334019e9fe0d93d8b0ee5b353c9745c68582 (image=quay.io/ceph/ceph:v19, name=reverent_tesla, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb  2 04:38:06 np0005604790 systemd[1]: Started libpod-conmon-c9bec63ef04086375c1f308ccaf7334019e9fe0d93d8b0ee5b353c9745c68582.scope.
Feb  2 04:38:06 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:38:06 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63f1bd6418c2fe285616473d7b2c9594cf50d91e63c79c040f97ee3227dc43af/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 04:38:06 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63f1bd6418c2fe285616473d7b2c9594cf50d91e63c79c040f97ee3227dc43af/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:38:06 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63f1bd6418c2fe285616473d7b2c9594cf50d91e63c79c040f97ee3227dc43af/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:38:06 np0005604790 podman[75714]: 2026-02-02 09:38:06.079338798 +0000 UTC m=+0.034221439 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:38:06 np0005604790 podman[75714]: 2026-02-02 09:38:06.185334441 +0000 UTC m=+0.140217152 container init c9bec63ef04086375c1f308ccaf7334019e9fe0d93d8b0ee5b353c9745c68582 (image=quay.io/ceph/ceph:v19, name=reverent_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Feb  2 04:38:06 np0005604790 podman[75714]: 2026-02-02 09:38:06.191954727 +0000 UTC m=+0.146837378 container start c9bec63ef04086375c1f308ccaf7334019e9fe0d93d8b0ee5b353c9745c68582 (image=quay.io/ceph/ceph:v19, name=reverent_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Feb  2 04:38:06 np0005604790 podman[75714]: 2026-02-02 09:38:06.195901192 +0000 UTC m=+0.150783843 container attach c9bec63ef04086375c1f308ccaf7334019e9fe0d93d8b0ee5b353c9745c68582 (image=quay.io/ceph/ceph:v19, name=reverent_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:38:06 np0005604790 ceph-mgr[74785]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Feb  2 04:38:06 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.14142 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 04:38:06 np0005604790 reverent_tesla[75730]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCrFWy+Hf+oEb0UThsA6A45ogL45Y43yGWD6pXxArlXUscKIe9gG17MbWHHWhZc688WPExEBNNlCS9kCyFjyRGYpkv8vM6WEhGo+QY3lbPRXyS4j5c9x8eErlhnpWNsPK2mvYBsVbP7+QiJCN94wp5VZr9Y+jTgL1nKvPzb6Q5n4IMLOf7njNLz8oIGbI7VSNRsPARJHdQ26gyRlkpMzacNL2V8R/NumCOFAbzpF79PkUuTftmrZuApzLW3xKzrOJh4Rg2viXd3XRcF/PpwialSCCqZNnlJfN55XYZ38rIjVZZG2z9WnJ+btLgg0qQMyfHu4YSkuf/4b5I+Hg/cKR2pVo7j8VWE2iiNOBiFCpS9jt/+/SbElNfVBUugebZMa8CFx33aztcGdb/Nk02jF3NnxoTveUmR4HHHPrvJZSbNJblOboQ9Z8gO0o5cENw2WVHyvKlOUJKx3WhSiCHVgeeWQFXEJgAzqugrQ0BnHZgDzlcvEyzTuMcK8jpePP0DkmE= zuul@controller
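The reverent_tesla container's stdout is the public key returned by "cephadm get-pub-key". The usual follow-up is to install that key for the SSH user on every host the orchestrator should manage; a sketch, with compute-0 standing in for each target host:

    # Fetch the orchestrator's public key and authorize it for ceph-admin
    ceph cephadm get-pub-key > ceph.pub
    ssh-copy-id -f -i ceph.pub ceph-admin@compute-0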
Feb  2 04:38:06 np0005604790 systemd[1]: libpod-c9bec63ef04086375c1f308ccaf7334019e9fe0d93d8b0ee5b353c9745c68582.scope: Deactivated successfully.
Feb  2 04:38:06 np0005604790 podman[75714]: 2026-02-02 09:38:06.594710726 +0000 UTC m=+0.549593357 container died c9bec63ef04086375c1f308ccaf7334019e9fe0d93d8b0ee5b353c9745c68582 (image=quay.io/ceph/ceph:v19, name=reverent_tesla, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325)
Feb  2 04:38:06 np0005604790 systemd[1]: var-lib-containers-storage-overlay-63f1bd6418c2fe285616473d7b2c9594cf50d91e63c79c040f97ee3227dc43af-merged.mount: Deactivated successfully.
Feb  2 04:38:06 np0005604790 podman[75714]: 2026-02-02 09:38:06.631200235 +0000 UTC m=+0.586082886 container remove c9bec63ef04086375c1f308ccaf7334019e9fe0d93d8b0ee5b353c9745c68582 (image=quay.io/ceph/ceph:v19, name=reverent_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb  2 04:38:06 np0005604790 systemd[1]: libpod-conmon-c9bec63ef04086375c1f308ccaf7334019e9fe0d93d8b0ee5b353c9745c68582.scope: Deactivated successfully.
Feb  2 04:38:06 np0005604790 podman[75769]: 2026-02-02 09:38:06.682279351 +0000 UTC m=+0.033630914 container create 57a840d4089d66420013653f4320a975ac635dda02c11b1674632932f4ff9459 (image=quay.io/ceph/ceph:v19, name=ecstatic_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid)
Feb  2 04:38:06 np0005604790 systemd[1]: Started libpod-conmon-57a840d4089d66420013653f4320a975ac635dda02c11b1674632932f4ff9459.scope.
Feb  2 04:38:06 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:38:06 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99054214ad8020d826b2883975e42c2cdb0f7d0a346faf912fe2077b9bc188e0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:38:06 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99054214ad8020d826b2883975e42c2cdb0f7d0a346faf912fe2077b9bc188e0/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 04:38:06 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99054214ad8020d826b2883975e42c2cdb0f7d0a346faf912fe2077b9bc188e0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:38:06 np0005604790 podman[75769]: 2026-02-02 09:38:06.739378355 +0000 UTC m=+0.090729948 container init 57a840d4089d66420013653f4320a975ac635dda02c11b1674632932f4ff9459 (image=quay.io/ceph/ceph:v19, name=ecstatic_bell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:38:06 np0005604790 podman[75769]: 2026-02-02 09:38:06.746236407 +0000 UTC m=+0.097587970 container start 57a840d4089d66420013653f4320a975ac635dda02c11b1674632932f4ff9459 (image=quay.io/ceph/ceph:v19, name=ecstatic_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 04:38:06 np0005604790 podman[75769]: 2026-02-02 09:38:06.749613537 +0000 UTC m=+0.100965130 container attach 57a840d4089d66420013653f4320a975ac635dda02c11b1674632932f4ff9459 (image=quay.io/ceph/ceph:v19, name=ecstatic_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True)
Feb  2 04:38:06 np0005604790 podman[75769]: 2026-02-02 09:38:06.665621788 +0000 UTC m=+0.016973371 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:38:06 np0005604790 ceph-mon[74489]: Set ssh ssh_identity_key
Feb  2 04:38:06 np0005604790 ceph-mon[74489]: Set ssh private key
Feb  2 04:38:06 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:06 np0005604790 ceph-mon[74489]: Set ssh ssh_identity_pub
Feb  2 04:38:07 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 04:38:07 np0005604790 systemd[1]: Created slice User Slice of UID 42477.
Feb  2 04:38:07 np0005604790 systemd[1]: Starting User Runtime Directory /run/user/42477...
Feb  2 04:38:07 np0005604790 systemd-logind[793]: New session 21 of user ceph-admin.
Feb  2 04:38:07 np0005604790 systemd[1]: Finished User Runtime Directory /run/user/42477.
Feb  2 04:38:07 np0005604790 systemd[1]: Starting User Manager for UID 42477...
Feb  2 04:38:07 np0005604790 systemd[75816]: Queued start job for default target Main User Target.
Feb  2 04:38:07 np0005604790 systemd[75816]: Created slice User Application Slice.
Feb  2 04:38:07 np0005604790 systemd[75816]: Started Mark boot as successful after the user session has run 2 minutes.
Feb  2 04:38:07 np0005604790 systemd[75816]: Started Daily Cleanup of User's Temporary Directories.
Feb  2 04:38:07 np0005604790 systemd[75816]: Reached target Paths.
Feb  2 04:38:07 np0005604790 systemd[75816]: Reached target Timers.
Feb  2 04:38:07 np0005604790 systemd[75816]: Starting D-Bus User Message Bus Socket...
Feb  2 04:38:07 np0005604790 systemd[75816]: Starting Create User's Volatile Files and Directories...
Feb  2 04:38:07 np0005604790 systemd[75816]: Listening on D-Bus User Message Bus Socket.
Feb  2 04:38:07 np0005604790 systemd[75816]: Reached target Sockets.
Feb  2 04:38:07 np0005604790 systemd[75816]: Finished Create User's Volatile Files and Directories.
Feb  2 04:38:07 np0005604790 systemd[75816]: Reached target Basic System.
Feb  2 04:38:07 np0005604790 systemd[75816]: Reached target Main User Target.
Feb  2 04:38:07 np0005604790 systemd[75816]: Startup finished in 129ms.
Feb  2 04:38:07 np0005604790 systemd[1]: Started User Manager for UID 42477.
Feb  2 04:38:07 np0005604790 systemd[1]: Started Session 21 of User ceph-admin.
Feb  2 04:38:07 np0005604790 systemd-logind[793]: New session 23 of user ceph-admin.
Feb  2 04:38:07 np0005604790 systemd[1]: Started Session 23 of User ceph-admin.
Feb  2 04:38:07 np0005604790 systemd-logind[793]: New session 24 of user ceph-admin.
Feb  2 04:38:07 np0005604790 systemd[1]: Started Session 24 of User ceph-admin.
Feb  2 04:38:08 np0005604790 systemd-logind[793]: New session 25 of user ceph-admin.
Feb  2 04:38:08 np0005604790 systemd[1]: Started Session 25 of User ceph-admin.
Feb  2 04:38:08 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Feb  2 04:38:08 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
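cephadm.serve copies the cephadm binary to each managed host over SSH before running remote operations; it is typically stored under /var/lib/ceph/<fsid>/ as cephadm.<checksum>. The fsid does not appear in this excerpt, so a wildcard check is used:

    # Verify the deployed binary on the target host (fsid elided in this log)
    ssh ceph-admin@compute-0 sudo ls /var/lib/ceph/*/cephadm.*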
Feb  2 04:38:08 np0005604790 ceph-mgr[74785]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Feb  2 04:38:08 np0005604790 systemd-logind[793]: New session 26 of user ceph-admin.
Feb  2 04:38:08 np0005604790 systemd[1]: Started Session 26 of User ceph-admin.
Feb  2 04:38:09 np0005604790 systemd-logind[793]: New session 27 of user ceph-admin.
Feb  2 04:38:09 np0005604790 systemd[1]: Started Session 27 of User ceph-admin.
Feb  2 04:38:09 np0005604790 systemd-logind[793]: New session 28 of user ceph-admin.
Feb  2 04:38:09 np0005604790 systemd[1]: Started Session 28 of User ceph-admin.
Feb  2 04:38:09 np0005604790 systemd-logind[793]: New session 29 of user ceph-admin.
Feb  2 04:38:09 np0005604790 systemd[1]: Started Session 29 of User ceph-admin.
Feb  2 04:38:10 np0005604790 ceph-mon[74489]: Deploying cephadm binary to compute-0
Feb  2 04:38:10 np0005604790 systemd-logind[793]: New session 30 of user ceph-admin.
Feb  2 04:38:10 np0005604790 systemd[1]: Started Session 30 of User ceph-admin.
Feb  2 04:38:10 np0005604790 systemd-logind[793]: New session 31 of user ceph-admin.
Feb  2 04:38:10 np0005604790 ceph-mgr[74785]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Feb  2 04:38:10 np0005604790 systemd[1]: Started Session 31 of User ceph-admin.
Feb  2 04:38:10 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020053109 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 04:38:11 np0005604790 systemd-logind[793]: New session 32 of user ceph-admin.
Feb  2 04:38:11 np0005604790 systemd[1]: Started Session 32 of User ceph-admin.
Feb  2 04:38:11 np0005604790 systemd-logind[793]: New session 33 of user ceph-admin.
Feb  2 04:38:11 np0005604790 systemd[1]: Started Session 33 of User ceph-admin.
Feb  2 04:38:12 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Feb  2 04:38:12 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:12 np0005604790 ceph-mgr[74785]: [cephadm INFO root] Added host compute-0
Feb  2 04:38:12 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Added host compute-0
Feb  2 04:38:12 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Feb  2 04:38:12 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Feb  2 04:38:12 np0005604790 ecstatic_bell[75786]: Added host 'compute-0' with addr '192.168.122.100'
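The "orch host add" dispatched at 04:38:07 completes here, with the helper container echoing the result. The equivalent CLI call, using exactly the hostname and address from the log:

    # Register compute-0 with the orchestrator
    ceph orch host add compute-0 192.168.122.100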
Feb  2 04:38:12 np0005604790 ceph-mgr[74785]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Feb  2 04:38:12 np0005604790 systemd[1]: libpod-57a840d4089d66420013653f4320a975ac635dda02c11b1674632932f4ff9459.scope: Deactivated successfully.
Feb  2 04:38:12 np0005604790 podman[75769]: 2026-02-02 09:38:12.473637184 +0000 UTC m=+5.824988847 container died 57a840d4089d66420013653f4320a975ac635dda02c11b1674632932f4ff9459 (image=quay.io/ceph/ceph:v19, name=ecstatic_bell, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:38:13 np0005604790 systemd[1]: var-lib-containers-storage-overlay-99054214ad8020d826b2883975e42c2cdb0f7d0a346faf912fe2077b9bc188e0-merged.mount: Deactivated successfully.
Feb  2 04:38:13 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:13 np0005604790 ceph-mon[74489]: Added host compute-0
Feb  2 04:38:13 np0005604790 podman[75769]: 2026-02-02 09:38:13.548737868 +0000 UTC m=+6.900089451 container remove 57a840d4089d66420013653f4320a975ac635dda02c11b1674632932f4ff9459 (image=quay.io/ceph/ceph:v19, name=ecstatic_bell, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 04:38:13 np0005604790 systemd[1]: libpod-conmon-57a840d4089d66420013653f4320a975ac635dda02c11b1674632932f4ff9459.scope: Deactivated successfully.
Feb  2 04:38:13 np0005604790 podman[76245]: 2026-02-02 09:38:13.637476453 +0000 UTC m=+0.066733572 container create e68beaa9107e9b16dfc68e240ec53315ea9b7f534fbf68a45ec6be392c446607 (image=quay.io/ceph/ceph:v19, name=romantic_keldysh, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 04:38:13 np0005604790 systemd[1]: Started libpod-conmon-e68beaa9107e9b16dfc68e240ec53315ea9b7f534fbf68a45ec6be392c446607.scope.
Feb  2 04:38:13 np0005604790 podman[76245]: 2026-02-02 09:38:13.605971387 +0000 UTC m=+0.035228556 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:38:13 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:38:13 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e033a3ac818f15e58e546ea1cf209725d84984ea2d5ff54c1c8ca63955dd2b33/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:38:13 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e033a3ac818f15e58e546ea1cf209725d84984ea2d5ff54c1c8ca63955dd2b33/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 04:38:13 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e033a3ac818f15e58e546ea1cf209725d84984ea2d5ff54c1c8ca63955dd2b33/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:38:13 np0005604790 podman[76245]: 2026-02-02 09:38:13.727431861 +0000 UTC m=+0.156689000 container init e68beaa9107e9b16dfc68e240ec53315ea9b7f534fbf68a45ec6be392c446607 (image=quay.io/ceph/ceph:v19, name=romantic_keldysh, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default)
Feb  2 04:38:13 np0005604790 podman[76245]: 2026-02-02 09:38:13.733528973 +0000 UTC m=+0.162786102 container start e68beaa9107e9b16dfc68e240ec53315ea9b7f534fbf68a45ec6be392c446607 (image=quay.io/ceph/ceph:v19, name=romantic_keldysh, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:38:13 np0005604790 podman[76245]: 2026-02-02 09:38:13.736959224 +0000 UTC m=+0.166216343 container attach e68beaa9107e9b16dfc68e240ec53315ea9b7f534fbf68a45ec6be392c446607 (image=quay.io/ceph/ceph:v19, name=romantic_keldysh, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Feb  2 04:38:14 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 04:38:14 np0005604790 ceph-mgr[74785]: [cephadm INFO root] Saving service mon spec with placement count:5
Feb  2 04:38:14 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Feb  2 04:38:14 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Feb  2 04:38:14 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:14 np0005604790 romantic_keldysh[76275]: Scheduled mon update...
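"Saving service mon spec with placement count:5" corresponds to a count-only placement in the mon service spec. A count-only apply would look roughly like:

    # Ask the orchestrator to run 5 monitors, letting it pick the hosts
    ceph orch apply mon 5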
Feb  2 04:38:14 np0005604790 systemd[1]: libpod-e68beaa9107e9b16dfc68e240ec53315ea9b7f534fbf68a45ec6be392c446607.scope: Deactivated successfully.
Feb  2 04:38:14 np0005604790 podman[76245]: 2026-02-02 09:38:14.090577598 +0000 UTC m=+0.519834707 container died e68beaa9107e9b16dfc68e240ec53315ea9b7f534fbf68a45ec6be392c446607 (image=quay.io/ceph/ceph:v19, name=romantic_keldysh, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 04:38:14 np0005604790 systemd[1]: var-lib-containers-storage-overlay-e033a3ac818f15e58e546ea1cf209725d84984ea2d5ff54c1c8ca63955dd2b33-merged.mount: Deactivated successfully.
Feb  2 04:38:14 np0005604790 podman[76245]: 2026-02-02 09:38:14.146600395 +0000 UTC m=+0.575857524 container remove e68beaa9107e9b16dfc68e240ec53315ea9b7f534fbf68a45ec6be392c446607 (image=quay.io/ceph/ceph:v19, name=romantic_keldysh, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 04:38:14 np0005604790 podman[76256]: 2026-02-02 09:38:14.151345141 +0000 UTC m=+0.550932142 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:38:14 np0005604790 systemd[1]: libpod-conmon-e68beaa9107e9b16dfc68e240ec53315ea9b7f534fbf68a45ec6be392c446607.scope: Deactivated successfully.
Feb  2 04:38:14 np0005604790 podman[76313]: 2026-02-02 09:38:14.217750883 +0000 UTC m=+0.050781209 container create 372869255bf944b40835ea0089eec814c9358f2af427c5349760d438f4b41daa (image=quay.io/ceph/ceph:v19, name=elastic_lederberg, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 04:38:14 np0005604790 podman[76338]: 2026-02-02 09:38:14.25304037 +0000 UTC m=+0.046600978 container create 57b37292529057cc25e43ee51aae8977a7da276afa317b2a38375241ac95a74f (image=quay.io/ceph/ceph:v19, name=affectionate_ellis, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Feb  2 04:38:14 np0005604790 systemd[1]: Started libpod-conmon-372869255bf944b40835ea0089eec814c9358f2af427c5349760d438f4b41daa.scope.
Feb  2 04:38:14 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:38:14 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66a8c54c4b625530595bc6f0fa2bd10650278b99bf2aae1cafe7cc5c610bd68f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:38:14 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66a8c54c4b625530595bc6f0fa2bd10650278b99bf2aae1cafe7cc5c610bd68f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 04:38:14 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66a8c54c4b625530595bc6f0fa2bd10650278b99bf2aae1cafe7cc5c610bd68f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:38:14 np0005604790 systemd[1]: Started libpod-conmon-57b37292529057cc25e43ee51aae8977a7da276afa317b2a38375241ac95a74f.scope.
Feb  2 04:38:14 np0005604790 podman[76313]: 2026-02-02 09:38:14.290515764 +0000 UTC m=+0.123546090 container init 372869255bf944b40835ea0089eec814c9358f2af427c5349760d438f4b41daa (image=quay.io/ceph/ceph:v19, name=elastic_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:38:14 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:38:14 np0005604790 podman[76313]: 2026-02-02 09:38:14.197871985 +0000 UTC m=+0.030902351 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:38:14 np0005604790 podman[76313]: 2026-02-02 09:38:14.295516207 +0000 UTC m=+0.128546533 container start 372869255bf944b40835ea0089eec814c9358f2af427c5349760d438f4b41daa (image=quay.io/ceph/ceph:v19, name=elastic_lederberg, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:38:14 np0005604790 podman[76313]: 2026-02-02 09:38:14.300298864 +0000 UTC m=+0.133329180 container attach 372869255bf944b40835ea0089eec814c9358f2af427c5349760d438f4b41daa (image=quay.io/ceph/ceph:v19, name=elastic_lederberg, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb  2 04:38:14 np0005604790 podman[76338]: 2026-02-02 09:38:14.305565674 +0000 UTC m=+0.099126282 container init 57b37292529057cc25e43ee51aae8977a7da276afa317b2a38375241ac95a74f (image=quay.io/ceph/ceph:v19, name=affectionate_ellis, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb  2 04:38:14 np0005604790 podman[76338]: 2026-02-02 09:38:14.310018192 +0000 UTC m=+0.103578780 container start 57b37292529057cc25e43ee51aae8977a7da276afa317b2a38375241ac95a74f (image=quay.io/ceph/ceph:v19, name=affectionate_ellis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid)
Feb  2 04:38:14 np0005604790 podman[76338]: 2026-02-02 09:38:14.314867501 +0000 UTC m=+0.108428079 container attach 57b37292529057cc25e43ee51aae8977a7da276afa317b2a38375241ac95a74f (image=quay.io/ceph/ceph:v19, name=affectionate_ellis, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 04:38:14 np0005604790 podman[76338]: 2026-02-02 09:38:14.22704706 +0000 UTC m=+0.020607668 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:38:14 np0005604790 affectionate_ellis[76362]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)
Feb  2 04:38:14 np0005604790 systemd[1]: libpod-57b37292529057cc25e43ee51aae8977a7da276afa317b2a38375241ac95a74f.scope: Deactivated successfully.
Feb  2 04:38:14 np0005604790 podman[76338]: 2026-02-02 09:38:14.416139158 +0000 UTC m=+0.209699806 container died 57b37292529057cc25e43ee51aae8977a7da276afa317b2a38375241ac95a74f (image=quay.io/ceph/ceph:v19, name=affectionate_ellis, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:38:14 np0005604790 systemd[1]: var-lib-containers-storage-overlay-6b89b051c3f93a225a551efee979fe111e01dbd94f9bfa56712bfcd395de6206-merged.mount: Deactivated successfully.
Feb  2 04:38:14 np0005604790 ceph-mgr[74785]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Feb  2 04:38:14 np0005604790 podman[76338]: 2026-02-02 09:38:14.465633432 +0000 UTC m=+0.259194040 container remove 57b37292529057cc25e43ee51aae8977a7da276afa317b2a38375241ac95a74f (image=quay.io/ceph/ceph:v19, name=affectionate_ellis, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:38:14 np0005604790 systemd[1]: libpod-conmon-57b37292529057cc25e43ee51aae8977a7da276afa317b2a38375241ac95a74f.scope: Deactivated successfully.
Feb  2 04:38:14 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0)
Feb  2 04:38:14 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:14 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 04:38:14 np0005604790 ceph-mgr[74785]: [cephadm INFO root] Saving service mgr spec with placement count:2
Feb  2 04:38:14 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Feb  2 04:38:14 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Feb  2 04:38:14 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:14 np0005604790 elastic_lederberg[76357]: Scheduled mgr update...
Feb  2 04:38:14 np0005604790 systemd[1]: libpod-372869255bf944b40835ea0089eec814c9358f2af427c5349760d438f4b41daa.scope: Deactivated successfully.
Feb  2 04:38:14 np0005604790 podman[76313]: 2026-02-02 09:38:14.689389371 +0000 UTC m=+0.522419727 container died 372869255bf944b40835ea0089eec814c9358f2af427c5349760d438f4b41daa (image=quay.io/ceph/ceph:v19, name=elastic_lederberg, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Feb  2 04:38:14 np0005604790 systemd[1]: var-lib-containers-storage-overlay-66a8c54c4b625530595bc6f0fa2bd10650278b99bf2aae1cafe7cc5c610bd68f-merged.mount: Deactivated successfully.
Feb  2 04:38:14 np0005604790 podman[76313]: 2026-02-02 09:38:14.759421939 +0000 UTC m=+0.592452255 container remove 372869255bf944b40835ea0089eec814c9358f2af427c5349760d438f4b41daa (image=quay.io/ceph/ceph:v19, name=elastic_lederberg, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:38:14 np0005604790 systemd[1]: libpod-conmon-372869255bf944b40835ea0089eec814c9358f2af427c5349760d438f4b41daa.scope: Deactivated successfully.
Feb  2 04:38:14 np0005604790 podman[76463]: 2026-02-02 09:38:14.823910111 +0000 UTC m=+0.047904293 container create b1213752b1329a5ba45176d282d0ef64c3f470939a12e5821a7a5e8fe8067306 (image=quay.io/ceph/ceph:v19, name=interesting_kepler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True)
Feb  2 04:38:14 np0005604790 systemd[1]: Started libpod-conmon-b1213752b1329a5ba45176d282d0ef64c3f470939a12e5821a7a5e8fe8067306.scope.
Feb  2 04:38:14 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:38:14 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/368532eb1e3e7fbe6d15a357d9c8b2ecf580971d822f1d89916ed48f5548acf8/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 04:38:14 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/368532eb1e3e7fbe6d15a357d9c8b2ecf580971d822f1d89916ed48f5548acf8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:38:14 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/368532eb1e3e7fbe6d15a357d9c8b2ecf580971d822f1d89916ed48f5548acf8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:38:14 np0005604790 podman[76463]: 2026-02-02 09:38:14.801093385 +0000 UTC m=+0.025087647 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:38:14 np0005604790 podman[76463]: 2026-02-02 09:38:14.900459912 +0000 UTC m=+0.124454094 container init b1213752b1329a5ba45176d282d0ef64c3f470939a12e5821a7a5e8fe8067306 (image=quay.io/ceph/ceph:v19, name=interesting_kepler, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:38:14 np0005604790 podman[76463]: 2026-02-02 09:38:14.904015337 +0000 UTC m=+0.128009509 container start b1213752b1329a5ba45176d282d0ef64c3f470939a12e5821a7a5e8fe8067306 (image=quay.io/ceph/ceph:v19, name=interesting_kepler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Feb  2 04:38:14 np0005604790 podman[76463]: 2026-02-02 09:38:14.907420127 +0000 UTC m=+0.131414399 container attach b1213752b1329a5ba45176d282d0ef64c3f470939a12e5821a7a5e8fe8067306 (image=quay.io/ceph/ceph:v19, name=interesting_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 04:38:14 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 04:38:14 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:15 np0005604790 ceph-mon[74489]: Saving service mon spec with placement count:5
Feb  2 04:38:15 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:15 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:15 np0005604790 ceph-mon[74489]: Saving service mgr spec with placement count:2
Feb  2 04:38:15 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:15 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:15 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 04:38:15 np0005604790 ceph-mgr[74785]: [cephadm INFO root] Saving service crash spec with placement *
Feb  2 04:38:15 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Feb  2 04:38:15 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Feb  2 04:38:15 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:15 np0005604790 interesting_kepler[76486]: Scheduled crash update...
Feb  2 04:38:15 np0005604790 systemd[1]: libpod-b1213752b1329a5ba45176d282d0ef64c3f470939a12e5821a7a5e8fe8067306.scope: Deactivated successfully.
Feb  2 04:38:15 np0005604790 podman[76463]: 2026-02-02 09:38:15.293827813 +0000 UTC m=+0.517822005 container died b1213752b1329a5ba45176d282d0ef64c3f470939a12e5821a7a5e8fe8067306 (image=quay.io/ceph/ceph:v19, name=interesting_kepler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 04:38:15 np0005604790 systemd[1]: var-lib-containers-storage-overlay-368532eb1e3e7fbe6d15a357d9c8b2ecf580971d822f1d89916ed48f5548acf8-merged.mount: Deactivated successfully.
Feb  2 04:38:15 np0005604790 podman[76463]: 2026-02-02 09:38:15.344395295 +0000 UTC m=+0.568389487 container remove b1213752b1329a5ba45176d282d0ef64c3f470939a12e5821a7a5e8fe8067306 (image=quay.io/ceph/ceph:v19, name=interesting_kepler, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Feb  2 04:38:15 np0005604790 systemd[1]: libpod-conmon-b1213752b1329a5ba45176d282d0ef64c3f470939a12e5821a7a5e8fe8067306.scope: Deactivated successfully.
Feb  2 04:38:15 np0005604790 podman[76627]: 2026-02-02 09:38:15.397750841 +0000 UTC m=+0.037785774 container create a3b1190bd00c1609ad81e7506711de12e1271d1c850d976cac09cf7a4ace936e (image=quay.io/ceph/ceph:v19, name=serene_robinson, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 04:38:15 np0005604790 systemd[1]: Started libpod-conmon-a3b1190bd00c1609ad81e7506711de12e1271d1c850d976cac09cf7a4ace936e.scope.
Feb  2 04:38:15 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:38:15 np0005604790 podman[76627]: 2026-02-02 09:38:15.379345482 +0000 UTC m=+0.019380395 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:38:15 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f17b34bd8d92698a2ef904660f0c14b84a3d015ad991ae78362938976ec312c7/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 04:38:15 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f17b34bd8d92698a2ef904660f0c14b84a3d015ad991ae78362938976ec312c7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:38:15 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f17b34bd8d92698a2ef904660f0c14b84a3d015ad991ae78362938976ec312c7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:38:15 np0005604790 podman[76627]: 2026-02-02 09:38:15.501985417 +0000 UTC m=+0.142020330 container init a3b1190bd00c1609ad81e7506711de12e1271d1c850d976cac09cf7a4ace936e (image=quay.io/ceph/ceph:v19, name=serene_robinson, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Feb  2 04:38:15 np0005604790 podman[76627]: 2026-02-02 09:38:15.507831752 +0000 UTC m=+0.147866655 container start a3b1190bd00c1609ad81e7506711de12e1271d1c850d976cac09cf7a4ace936e (image=quay.io/ceph/ceph:v19, name=serene_robinson, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:38:15 np0005604790 podman[76627]: 2026-02-02 09:38:15.512912467 +0000 UTC m=+0.152947360 container attach a3b1190bd00c1609ad81e7506711de12e1271d1c850d976cac09cf7a4ace936e (image=quay.io/ceph/ceph:v19, name=serene_robinson, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:38:15 np0005604790 podman[76670]: 2026-02-02 09:38:15.587962059 +0000 UTC m=+0.074605501 container exec 79ef7165b184aa21ab9e464efe33891b6304e1ba848414549f299d1b301d6783 (image=quay.io/ceph/ceph:v19, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mon-compute-0, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Feb  2 04:38:15 np0005604790 podman[76670]: 2026-02-02 09:38:15.71684414 +0000 UTC m=+0.203487482 container exec_died 79ef7165b184aa21ab9e464efe33891b6304e1ba848414549f299d1b301d6783 (image=quay.io/ceph/ceph:v19, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mon-compute-0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Feb  2 04:38:15 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054711 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 04:38:15 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 04:38:15 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:15 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0)
Feb  2 04:38:15 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/64288796' entity='client.admin' 
Feb  2 04:38:15 np0005604790 systemd[1]: libpod-a3b1190bd00c1609ad81e7506711de12e1271d1c850d976cac09cf7a4ace936e.scope: Deactivated successfully.
Feb  2 04:38:15 np0005604790 podman[76627]: 2026-02-02 09:38:15.895876542 +0000 UTC m=+0.535911465 container died a3b1190bd00c1609ad81e7506711de12e1271d1c850d976cac09cf7a4ace936e (image=quay.io/ceph/ceph:v19, name=serene_robinson, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid)
Feb  2 04:38:15 np0005604790 systemd[1]: var-lib-containers-storage-overlay-f17b34bd8d92698a2ef904660f0c14b84a3d015ad991ae78362938976ec312c7-merged.mount: Deactivated successfully.
Feb  2 04:38:15 np0005604790 podman[76627]: 2026-02-02 09:38:15.941300027 +0000 UTC m=+0.581334920 container remove a3b1190bd00c1609ad81e7506711de12e1271d1c850d976cac09cf7a4ace936e (image=quay.io/ceph/ceph:v19, name=serene_robinson, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Feb  2 04:38:15 np0005604790 systemd[1]: libpod-conmon-a3b1190bd00c1609ad81e7506711de12e1271d1c850d976cac09cf7a4ace936e.scope: Deactivated successfully.
Feb  2 04:38:16 np0005604790 podman[76789]: 2026-02-02 09:38:16.006258271 +0000 UTC m=+0.049276699 container create b836a3e7ccd53e0969a134060ed8bdc4cdd5df794e8d1da4984193b07761aeae (image=quay.io/ceph/ceph:v19, name=condescending_bouman, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 04:38:16 np0005604790 systemd[1]: Started libpod-conmon-b836a3e7ccd53e0969a134060ed8bdc4cdd5df794e8d1da4984193b07761aeae.scope.
Feb  2 04:38:16 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:38:16 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01f36898721cc3668dab766c92b76d1df9ed0c0fd8b36bed20ddc434bd0946e5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:38:16 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01f36898721cc3668dab766c92b76d1df9ed0c0fd8b36bed20ddc434bd0946e5/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 04:38:16 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01f36898721cc3668dab766c92b76d1df9ed0c0fd8b36bed20ddc434bd0946e5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:38:16 np0005604790 podman[76789]: 2026-02-02 09:38:15.982722257 +0000 UTC m=+0.025740725 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:38:16 np0005604790 podman[76789]: 2026-02-02 09:38:16.091755491 +0000 UTC m=+0.134773959 container init b836a3e7ccd53e0969a134060ed8bdc4cdd5df794e8d1da4984193b07761aeae (image=quay.io/ceph/ceph:v19, name=condescending_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb  2 04:38:16 np0005604790 podman[76789]: 2026-02-02 09:38:16.097597015 +0000 UTC m=+0.140615473 container start b836a3e7ccd53e0969a134060ed8bdc4cdd5df794e8d1da4984193b07761aeae (image=quay.io/ceph/ceph:v19, name=condescending_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:38:16 np0005604790 podman[76789]: 2026-02-02 09:38:16.108042583 +0000 UTC m=+0.151061051 container attach b836a3e7ccd53e0969a134060ed8bdc4cdd5df794e8d1da4984193b07761aeae (image=quay.io/ceph/ceph:v19, name=condescending_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 04:38:16 np0005604790 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 76843 (sysctl)
Feb  2 04:38:16 np0005604790 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Feb  2 04:38:16 np0005604790 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Feb  2 04:38:16 np0005604790 ceph-mon[74489]: Saving service crash spec with placement *
Feb  2 04:38:16 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:16 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:16 np0005604790 ceph-mon[74489]: from='client.? 192.168.122.100:0/64288796' entity='client.admin' 
Feb  2 04:38:16 np0005604790 ceph-mgr[74785]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Feb  2 04:38:16 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 04:38:16 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0)
Feb  2 04:38:16 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:16 np0005604790 systemd[1]: libpod-b836a3e7ccd53e0969a134060ed8bdc4cdd5df794e8d1da4984193b07761aeae.scope: Deactivated successfully.
Feb  2 04:38:16 np0005604790 podman[76789]: 2026-02-02 09:38:16.527409093 +0000 UTC m=+0.570427531 container died b836a3e7ccd53e0969a134060ed8bdc4cdd5df794e8d1da4984193b07761aeae (image=quay.io/ceph/ceph:v19, name=condescending_bouman, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb  2 04:38:16 np0005604790 systemd[1]: var-lib-containers-storage-overlay-01f36898721cc3668dab766c92b76d1df9ed0c0fd8b36bed20ddc434bd0946e5-merged.mount: Deactivated successfully.
Feb  2 04:38:16 np0005604790 podman[76789]: 2026-02-02 09:38:16.579123025 +0000 UTC m=+0.622141483 container remove b836a3e7ccd53e0969a134060ed8bdc4cdd5df794e8d1da4984193b07761aeae (image=quay.io/ceph/ceph:v19, name=condescending_bouman, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 04:38:16 np0005604790 systemd[1]: libpod-conmon-b836a3e7ccd53e0969a134060ed8bdc4cdd5df794e8d1da4984193b07761aeae.scope: Deactivated successfully.
Feb  2 04:38:16 np0005604790 podman[76947]: 2026-02-02 09:38:16.635253625 +0000 UTC m=+0.038640647 container create b03bc59b4d957b1726622b6fc08adc374440dd659e20f2e1b3a505ce30b3e6f7 (image=quay.io/ceph/ceph:v19, name=friendly_banach, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb  2 04:38:16 np0005604790 systemd[1]: Started libpod-conmon-b03bc59b4d957b1726622b6fc08adc374440dd659e20f2e1b3a505ce30b3e6f7.scope.
Feb  2 04:38:16 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:38:16 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8be8a5512eb1fc1ea7d2f7e8cd0bd02dbae6e3f12bc462ba516b500960c05cac/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 04:38:16 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8be8a5512eb1fc1ea7d2f7e8cd0bd02dbae6e3f12bc462ba516b500960c05cac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:38:16 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8be8a5512eb1fc1ea7d2f7e8cd0bd02dbae6e3f12bc462ba516b500960c05cac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:38:16 np0005604790 podman[76947]: 2026-02-02 09:38:16.616062876 +0000 UTC m=+0.019449928 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:38:16 np0005604790 podman[76947]: 2026-02-02 09:38:16.729947918 +0000 UTC m=+0.133334970 container init b03bc59b4d957b1726622b6fc08adc374440dd659e20f2e1b3a505ce30b3e6f7 (image=quay.io/ceph/ceph:v19, name=friendly_banach, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb  2 04:38:16 np0005604790 podman[76947]: 2026-02-02 09:38:16.738641159 +0000 UTC m=+0.142028181 container start b03bc59b4d957b1726622b6fc08adc374440dd659e20f2e1b3a505ce30b3e6f7 (image=quay.io/ceph/ceph:v19, name=friendly_banach, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb  2 04:38:16 np0005604790 podman[76947]: 2026-02-02 09:38:16.743535019 +0000 UTC m=+0.146922041 container attach b03bc59b4d957b1726622b6fc08adc374440dd659e20f2e1b3a505ce30b3e6f7 (image=quay.io/ceph/ceph:v19, name=friendly_banach, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Feb  2 04:38:16 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 04:38:16 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:17 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 04:38:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Feb  2 04:38:17 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:17 np0005604790 ceph-mgr[74785]: [cephadm INFO root] Added label _admin to host compute-0
Feb  2 04:38:17 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Feb  2 04:38:17 np0005604790 friendly_banach[76965]: Added label _admin to host compute-0
Feb  2 04:38:17 np0005604790 systemd[1]: libpod-b03bc59b4d957b1726622b6fc08adc374440dd659e20f2e1b3a505ce30b3e6f7.scope: Deactivated successfully.
Feb  2 04:38:17 np0005604790 podman[76947]: 2026-02-02 09:38:17.11399001 +0000 UTC m=+0.517377092 container died b03bc59b4d957b1726622b6fc08adc374440dd659e20f2e1b3a505ce30b3e6f7 (image=quay.io/ceph/ceph:v19, name=friendly_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:38:17 np0005604790 systemd[1]: var-lib-containers-storage-overlay-8be8a5512eb1fc1ea7d2f7e8cd0bd02dbae6e3f12bc462ba516b500960c05cac-merged.mount: Deactivated successfully.
Feb  2 04:38:17 np0005604790 podman[76947]: 2026-02-02 09:38:17.163255218 +0000 UTC m=+0.566642280 container remove b03bc59b4d957b1726622b6fc08adc374440dd659e20f2e1b3a505ce30b3e6f7 (image=quay.io/ceph/ceph:v19, name=friendly_banach, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:38:17 np0005604790 systemd[1]: libpod-conmon-b03bc59b4d957b1726622b6fc08adc374440dd659e20f2e1b3a505ce30b3e6f7.scope: Deactivated successfully.
Feb  2 04:38:17 np0005604790 podman[77069]: 2026-02-02 09:38:17.229127016 +0000 UTC m=+0.047110131 container create f47c242a9adacf92507326cce91ee39bded9da8ed69a7dd18b3a7ae7fd8a567a (image=quay.io/ceph/ceph:v19, name=quizzical_matsumoto, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Feb  2 04:38:17 np0005604790 systemd[1]: Started libpod-conmon-f47c242a9adacf92507326cce91ee39bded9da8ed69a7dd18b3a7ae7fd8a567a.scope.
Feb  2 04:38:17 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:38:17 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04ec849a8eebdcf9fc8df88728930edc01a8fd12ef3ea769ea70627a3e7b3343/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:38:17 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04ec849a8eebdcf9fc8df88728930edc01a8fd12ef3ea769ea70627a3e7b3343/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 04:38:17 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04ec849a8eebdcf9fc8df88728930edc01a8fd12ef3ea769ea70627a3e7b3343/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:38:17 np0005604790 podman[77069]: 2026-02-02 09:38:17.205743715 +0000 UTC m=+0.023726890 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:38:17 np0005604790 podman[77069]: 2026-02-02 09:38:17.303941982 +0000 UTC m=+0.121925117 container init f47c242a9adacf92507326cce91ee39bded9da8ed69a7dd18b3a7ae7fd8a567a (image=quay.io/ceph/ceph:v19, name=quizzical_matsumoto, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:38:17 np0005604790 podman[77069]: 2026-02-02 09:38:17.31029985 +0000 UTC m=+0.128282935 container start f47c242a9adacf92507326cce91ee39bded9da8ed69a7dd18b3a7ae7fd8a567a (image=quay.io/ceph/ceph:v19, name=quizzical_matsumoto, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb  2 04:38:17 np0005604790 podman[77069]: 2026-02-02 09:38:17.31404345 +0000 UTC m=+0.132026625 container attach f47c242a9adacf92507326cce91ee39bded9da8ed69a7dd18b3a7ae7fd8a567a (image=quay.io/ceph/ceph:v19, name=quizzical_matsumoto, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Feb  2 04:38:17 np0005604790 podman[77129]: 2026-02-02 09:38:17.441028859 +0000 UTC m=+0.049800582 container create 6c1108d9f4bfff08e0b4cba86a8ea72441b0b2fa61a4f9ad86b7f3bb432a4186 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_greider, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Feb  2 04:38:17 np0005604790 systemd[1]: Started libpod-conmon-6c1108d9f4bfff08e0b4cba86a8ea72441b0b2fa61a4f9ad86b7f3bb432a4186.scope.
Feb  2 04:38:17 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:38:17 np0005604790 podman[77129]: 2026-02-02 09:38:17.505976573 +0000 UTC m=+0.114748386 container init 6c1108d9f4bfff08e0b4cba86a8ea72441b0b2fa61a4f9ad86b7f3bb432a4186 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_greider, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 04:38:17 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:17 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:17 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:17 np0005604790 podman[77129]: 2026-02-02 09:38:17.513714128 +0000 UTC m=+0.122485881 container start 6c1108d9f4bfff08e0b4cba86a8ea72441b0b2fa61a4f9ad86b7f3bb432a4186 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_greider, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 04:38:17 np0005604790 jolly_greider[77164]: 167 167
Feb  2 04:38:17 np0005604790 podman[77129]: 2026-02-02 09:38:17.420978188 +0000 UTC m=+0.029749991 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:38:17 np0005604790 systemd[1]: libpod-6c1108d9f4bfff08e0b4cba86a8ea72441b0b2fa61a4f9ad86b7f3bb432a4186.scope: Deactivated successfully.
Feb  2 04:38:17 np0005604790 podman[77129]: 2026-02-02 09:38:17.517094928 +0000 UTC m=+0.125866681 container attach 6c1108d9f4bfff08e0b4cba86a8ea72441b0b2fa61a4f9ad86b7f3bb432a4186 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_greider, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb  2 04:38:17 np0005604790 podman[77129]: 2026-02-02 09:38:17.517412946 +0000 UTC m=+0.126184699 container died 6c1108d9f4bfff08e0b4cba86a8ea72441b0b2fa61a4f9ad86b7f3bb432a4186 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_greider, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid)
Feb  2 04:38:17 np0005604790 systemd[1]: var-lib-containers-storage-overlay-82531f189b5e7d7c20b39f28d7392910d8afa9dd5ff686d81eeaf02d5d106ee2-merged.mount: Deactivated successfully.
Feb  2 04:38:17 np0005604790 podman[77129]: 2026-02-02 09:38:17.558963049 +0000 UTC m=+0.167734812 container remove 6c1108d9f4bfff08e0b4cba86a8ea72441b0b2fa61a4f9ad86b7f3bb432a4186 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_greider, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Feb  2 04:38:17 np0005604790 systemd[1]: libpod-conmon-6c1108d9f4bfff08e0b4cba86a8ea72441b0b2fa61a4f9ad86b7f3bb432a4186.scope: Deactivated successfully.
Feb  2 04:38:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0)
Feb  2 04:38:17 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4270388054' entity='client.admin' 
Feb  2 04:38:17 np0005604790 quizzical_matsumoto[77110]: set mgr/dashboard/cluster/status
Feb  2 04:38:17 np0005604790 systemd[1]: libpod-f47c242a9adacf92507326cce91ee39bded9da8ed69a7dd18b3a7ae7fd8a567a.scope: Deactivated successfully.
Feb  2 04:38:17 np0005604790 conmon[77110]: conmon f47c242a9adacf925073 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f47c242a9adacf92507326cce91ee39bded9da8ed69a7dd18b3a7ae7fd8a567a.scope/container/memory.events
Feb  2 04:38:17 np0005604790 podman[77069]: 2026-02-02 09:38:17.814239894 +0000 UTC m=+0.632223009 container died f47c242a9adacf92507326cce91ee39bded9da8ed69a7dd18b3a7ae7fd8a567a (image=quay.io/ceph/ceph:v19, name=quizzical_matsumoto, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 04:38:17 np0005604790 systemd[1]: var-lib-containers-storage-overlay-04ec849a8eebdcf9fc8df88728930edc01a8fd12ef3ea769ea70627a3e7b3343-merged.mount: Deactivated successfully.
Feb  2 04:38:17 np0005604790 podman[77069]: 2026-02-02 09:38:17.861484328 +0000 UTC m=+0.679467463 container remove f47c242a9adacf92507326cce91ee39bded9da8ed69a7dd18b3a7ae7fd8a567a (image=quay.io/ceph/ceph:v19, name=quizzical_matsumoto, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 04:38:17 np0005604790 systemd[1]: libpod-conmon-f47c242a9adacf92507326cce91ee39bded9da8ed69a7dd18b3a7ae7fd8a567a.scope: Deactivated successfully.
Feb  2 04:38:18 np0005604790 podman[77204]: 2026-02-02 09:38:18.055833206 +0000 UTC m=+0.052155095 container create b109e31f738261d8e75f35b3b14457239a03cb221dffffca1c86253c8abee83c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_kapitsa, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 04:38:18 np0005604790 systemd[1]: Started libpod-conmon-b109e31f738261d8e75f35b3b14457239a03cb221dffffca1c86253c8abee83c.scope.
Feb  2 04:38:18 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:38:18 np0005604790 podman[77204]: 2026-02-02 09:38:18.033564705 +0000 UTC m=+0.029886634 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:38:18 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/932ff67e8a21c34d50dd02f287015cdbd60316bb6efcc19995c50cbf7c332aac/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 04:38:18 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/932ff67e8a21c34d50dd02f287015cdbd60316bb6efcc19995c50cbf7c332aac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:38:18 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/932ff67e8a21c34d50dd02f287015cdbd60316bb6efcc19995c50cbf7c332aac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:38:18 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/932ff67e8a21c34d50dd02f287015cdbd60316bb6efcc19995c50cbf7c332aac/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 04:38:18 np0005604790 podman[77204]: 2026-02-02 09:38:18.146519693 +0000 UTC m=+0.142841562 container init b109e31f738261d8e75f35b3b14457239a03cb221dffffca1c86253c8abee83c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_kapitsa, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 04:38:18 np0005604790 podman[77204]: 2026-02-02 09:38:18.155249195 +0000 UTC m=+0.151571044 container start b109e31f738261d8e75f35b3b14457239a03cb221dffffca1c86253c8abee83c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_kapitsa, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Feb  2 04:38:18 np0005604790 podman[77204]: 2026-02-02 09:38:18.158889431 +0000 UTC m=+0.155211310 container attach b109e31f738261d8e75f35b3b14457239a03cb221dffffca1c86253c8abee83c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_kapitsa, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb  2 04:38:18 np0005604790 python3[77251]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d241d473-9fcb-5f74-b163-f1ca4454e7f1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
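[Annotation: for readability, the `_raw_params` logged above decode to the command below (`#012` is syslog's octal escape for the trailing newline in the parameter). The Ansible task runs the `ceph` CLI inside a throwaway `quay.io/ceph/ceph:v19` container; setting `mgr/cephadm/use_repo_digest` to false tells cephadm to keep tracking container images by tag rather than converting them to repo digests:

    podman run --rm --net=host --ipc=host \
      --volume /etc/ceph:/etc/ceph:z \
      --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
      --entrypoint ceph quay.io/ceph/ceph:v19 \
      --fsid d241d473-9fcb-5f74-b163-f1ca4454e7f1 \
      -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
      config set mgr mgr/cephadm/use_repo_digest false
]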
Feb  2 04:38:18 np0005604790 podman[77257]: 2026-02-02 09:38:18.454452246 +0000 UTC m=+0.041282507 container create da70b72966da994dd1e5e349f6db349b48751f2c7bdaf9fe82dadc987e1877cb (image=quay.io/ceph/ceph:v19, name=gallant_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb  2 04:38:18 np0005604790 ceph-mgr[74785]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Feb  2 04:38:18 np0005604790 systemd[1]: Started libpod-conmon-da70b72966da994dd1e5e349f6db349b48751f2c7bdaf9fe82dadc987e1877cb.scope.
Feb  2 04:38:18 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:38:18 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c601d780d7e4f68956b42d7c3ad3ed1ec6bf10cc2af3157a5a347b0300ef15b8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:38:18 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c601d780d7e4f68956b42d7c3ad3ed1ec6bf10cc2af3157a5a347b0300ef15b8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:38:18 np0005604790 podman[77257]: 2026-02-02 09:38:18.441233245 +0000 UTC m=+0.028063526 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:38:18 np0005604790 podman[77257]: 2026-02-02 09:38:18.542240346 +0000 UTC m=+0.129070607 container init da70b72966da994dd1e5e349f6db349b48751f2c7bdaf9fe82dadc987e1877cb (image=quay.io/ceph/ceph:v19, name=gallant_neumann, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325)
Feb  2 04:38:18 np0005604790 podman[77257]: 2026-02-02 09:38:18.549788966 +0000 UTC m=+0.136619237 container start da70b72966da994dd1e5e349f6db349b48751f2c7bdaf9fe82dadc987e1877cb (image=quay.io/ceph/ceph:v19, name=gallant_neumann, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb  2 04:38:18 np0005604790 podman[77257]: 2026-02-02 09:38:18.553371651 +0000 UTC m=+0.140201912 container attach da70b72966da994dd1e5e349f6db349b48751f2c7bdaf9fe82dadc987e1877cb (image=quay.io/ceph/ceph:v19, name=gallant_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
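[Annotation: podman emits an "image pull" event even when the image is already in local storage, which is why these pulls complete in milliseconds. Note that both the `v19` tag (here) and the `sha256:7c69e59…` digest (earlier) resolve to the same local image ID `aade1b12b8e6…`, so the two reference forms are interchangeable in this log. The mapping can be confirmed with a standard podman flag:

    podman images --digests quay.io/ceph/ceph
]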
Feb  2 04:38:18 np0005604790 exciting_kapitsa[77221]: [
Feb  2 04:38:18 np0005604790 exciting_kapitsa[77221]:    {
Feb  2 04:38:18 np0005604790 exciting_kapitsa[77221]:        "available": false,
Feb  2 04:38:18 np0005604790 exciting_kapitsa[77221]:        "being_replaced": false,
Feb  2 04:38:18 np0005604790 exciting_kapitsa[77221]:        "ceph_device_lvm": false,
Feb  2 04:38:18 np0005604790 exciting_kapitsa[77221]:        "device_id": "QEMU_DVD-ROM_QM00001",
Feb  2 04:38:18 np0005604790 exciting_kapitsa[77221]:        "lsm_data": {},
Feb  2 04:38:18 np0005604790 exciting_kapitsa[77221]:        "lvs": [],
Feb  2 04:38:18 np0005604790 exciting_kapitsa[77221]:        "path": "/dev/sr0",
Feb  2 04:38:18 np0005604790 exciting_kapitsa[77221]:        "rejected_reasons": [
Feb  2 04:38:18 np0005604790 exciting_kapitsa[77221]:            "Insufficient space (<5GB)",
Feb  2 04:38:18 np0005604790 exciting_kapitsa[77221]:            "Has a FileSystem"
Feb  2 04:38:18 np0005604790 exciting_kapitsa[77221]:        ],
Feb  2 04:38:18 np0005604790 exciting_kapitsa[77221]:        "sys_api": {
Feb  2 04:38:18 np0005604790 exciting_kapitsa[77221]:            "actuators": null,
Feb  2 04:38:18 np0005604790 exciting_kapitsa[77221]:            "device_nodes": [
Feb  2 04:38:18 np0005604790 exciting_kapitsa[77221]:                "sr0"
Feb  2 04:38:18 np0005604790 exciting_kapitsa[77221]:            ],
Feb  2 04:38:18 np0005604790 exciting_kapitsa[77221]:            "devname": "sr0",
Feb  2 04:38:18 np0005604790 exciting_kapitsa[77221]:            "human_readable_size": "482.00 KB",
Feb  2 04:38:18 np0005604790 exciting_kapitsa[77221]:            "id_bus": "ata",
Feb  2 04:38:18 np0005604790 exciting_kapitsa[77221]:            "model": "QEMU DVD-ROM",
Feb  2 04:38:18 np0005604790 exciting_kapitsa[77221]:            "nr_requests": "2",
Feb  2 04:38:18 np0005604790 exciting_kapitsa[77221]:            "parent": "/dev/sr0",
Feb  2 04:38:18 np0005604790 exciting_kapitsa[77221]:            "partitions": {},
Feb  2 04:38:18 np0005604790 exciting_kapitsa[77221]:            "path": "/dev/sr0",
Feb  2 04:38:18 np0005604790 exciting_kapitsa[77221]:            "removable": "1",
Feb  2 04:38:18 np0005604790 exciting_kapitsa[77221]:            "rev": "2.5+",
Feb  2 04:38:18 np0005604790 exciting_kapitsa[77221]:            "ro": "0",
Feb  2 04:38:18 np0005604790 exciting_kapitsa[77221]:            "rotational": "1",
Feb  2 04:38:18 np0005604790 exciting_kapitsa[77221]:            "sas_address": "",
Feb  2 04:38:18 np0005604790 exciting_kapitsa[77221]:            "sas_device_handle": "",
Feb  2 04:38:18 np0005604790 exciting_kapitsa[77221]:            "scheduler_mode": "mq-deadline",
Feb  2 04:38:18 np0005604790 exciting_kapitsa[77221]:            "sectors": 0,
Feb  2 04:38:18 np0005604790 exciting_kapitsa[77221]:            "sectorsize": "2048",
Feb  2 04:38:18 np0005604790 exciting_kapitsa[77221]:            "size": 493568.0,
Feb  2 04:38:18 np0005604790 exciting_kapitsa[77221]:            "support_discard": "2048",
Feb  2 04:38:18 np0005604790 exciting_kapitsa[77221]:            "type": "disk",
Feb  2 04:38:18 np0005604790 exciting_kapitsa[77221]:            "vendor": "QEMU"
Feb  2 04:38:18 np0005604790 exciting_kapitsa[77221]:        }
Feb  2 04:38:18 np0005604790 exciting_kapitsa[77221]:    }
Feb  2 04:38:18 np0005604790 exciting_kapitsa[77221]: ]
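[Annotation: the JSON array printed by `exciting_kapitsa` is a ceph-volume-style device inventory gathered by cephadm's host refresh. The only block device it found is the QEMU DVD-ROM `/dev/sr0` (482 KB), rejected for OSD use with "Insufficient space (<5GB)" and "Has a FileSystem", so no OSDs can be created on this node, which is consistent with the TOO_FEW_OSDS health check later in this log. A minimal sketch for pulling rejects out of such output, assuming the array above is saved to a file named inventory.json and `jq` is installed:

    jq -r '.[] | select(.available == false)
             | "\(.path): \(.rejected_reasons | join(", "))"' inventory.json
]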
Feb  2 04:38:18 np0005604790 systemd[1]: libpod-b109e31f738261d8e75f35b3b14457239a03cb221dffffca1c86253c8abee83c.scope: Deactivated successfully.
Feb  2 04:38:18 np0005604790 podman[77204]: 2026-02-02 09:38:18.78619343 +0000 UTC m=+0.782515289 container died b109e31f738261d8e75f35b3b14457239a03cb221dffffca1c86253c8abee83c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_kapitsa, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb  2 04:38:18 np0005604790 ceph-mon[74489]: Added label _admin to host compute-0
Feb  2 04:38:18 np0005604790 ceph-mon[74489]: from='client.? 192.168.122.100:0/4270388054' entity='client.admin' 
Feb  2 04:38:18 np0005604790 systemd[1]: var-lib-containers-storage-overlay-932ff67e8a21c34d50dd02f287015cdbd60316bb6efcc19995c50cbf7c332aac-merged.mount: Deactivated successfully.
Feb  2 04:38:18 np0005604790 podman[77204]: 2026-02-02 09:38:18.819033972 +0000 UTC m=+0.815355821 container remove b109e31f738261d8e75f35b3b14457239a03cb221dffffca1c86253c8abee83c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Feb  2 04:38:18 np0005604790 systemd[1]: libpod-conmon-b109e31f738261d8e75f35b3b14457239a03cb221dffffca1c86253c8abee83c.scope: Deactivated successfully.
Feb  2 04:38:18 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 04:38:18 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:18 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 04:38:18 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:18 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 04:38:18 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:18 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 04:38:18 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:18 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Feb  2 04:38:18 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Feb  2 04:38:18 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 04:38:18 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 04:38:18 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 04:38:18 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb  2 04:38:18 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Feb  2 04:38:18 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Feb  2 04:38:18 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0)
Feb  2 04:38:18 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2186345723' entity='client.admin' 
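[Annotation: this burst of mon_command audit entries is cephadm's serve loop persisting its refreshed view of the host: the device inventory just printed is cached under config-key entries such as `mgr/cephadm/host.compute-0.devices.0`, the per-host `osd_memory_target` override is cleared, and `config generate-minimal-conf` plus `auth get client.admin` produce the files that the "Updating compute-0:/etc/ceph/ceph.conf" step then distributes (compute-0 carries the `_admin` label added a moment earlier). The cached inventory can be inspected directly; cephadm stores the value as JSON, so piping to `jq` is assumed to work here:

    ceph config-key get mgr/cephadm/host.compute-0.devices.0 | jq .
]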
Feb  2 04:38:18 np0005604790 systemd[1]: libpod-da70b72966da994dd1e5e349f6db349b48751f2c7bdaf9fe82dadc987e1877cb.scope: Deactivated successfully.
Feb  2 04:38:18 np0005604790 podman[77257]: 2026-02-02 09:38:18.930421318 +0000 UTC m=+0.517251579 container died da70b72966da994dd1e5e349f6db349b48751f2c7bdaf9fe82dadc987e1877cb (image=quay.io/ceph/ceph:v19, name=gallant_neumann, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Feb  2 04:38:18 np0005604790 systemd[1]: var-lib-containers-storage-overlay-c601d780d7e4f68956b42d7c3ad3ed1ec6bf10cc2af3157a5a347b0300ef15b8-merged.mount: Deactivated successfully.
Feb  2 04:38:18 np0005604790 podman[77257]: 2026-02-02 09:38:18.95876935 +0000 UTC m=+0.545599611 container remove da70b72966da994dd1e5e349f6db349b48751f2c7bdaf9fe82dadc987e1877cb (image=quay.io/ceph/ceph:v19, name=gallant_neumann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb  2 04:38:18 np0005604790 systemd[1]: libpod-conmon-da70b72966da994dd1e5e349f6db349b48751f2c7bdaf9fe82dadc987e1877cb.scope: Deactivated successfully.
Feb  2 04:38:19 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/d241d473-9fcb-5f74-b163-f1ca4454e7f1/config/ceph.conf
Feb  2 04:38:19 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/d241d473-9fcb-5f74-b163-f1ca4454e7f1/config/ceph.conf
Feb  2 04:38:19 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Feb  2 04:38:19 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Feb  2 04:38:19 np0005604790 ansible-async_wrapper.py[78847]: Invoked with j930676914269 30 /home/zuul/.ansible/tmp/ansible-tmp-1770025099.2454047-37150-153085895591851/AnsiballZ_command.py _
Feb  2 04:38:19 np0005604790 ansible-async_wrapper.py[78907]: Starting module and watcher
Feb  2 04:38:19 np0005604790 ansible-async_wrapper.py[78907]: Start watching 78909 (30)
Feb  2 04:38:19 np0005604790 ansible-async_wrapper.py[78909]: Start module (78909)
Feb  2 04:38:19 np0005604790 ansible-async_wrapper.py[78847]: Return async_wrapper task started.
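[Annotation: the `ansible-async_wrapper.py` lines are Ansible's machinery for a task run with `async:` and polling: PID 78847 hands off to a watcher (78907) that supervises the module process (78909) under the 30-second timeout seen in the invocation, and job ID `j930676914269` is returned to the controller. The result lands as a JSON file in the async directory, which is what the later `async_status` calls (mode=status, then mode=cleanup) read and remove; it can also be inspected by hand, using the jid and `_async_dir` reported further down in this log:

    cat /root/.ansible_async/j930676914269.78847
]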
Feb  2 04:38:19 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:19 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:19 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:19 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:19 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Feb  2 04:38:19 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb  2 04:38:19 np0005604790 ceph-mon[74489]: Updating compute-0:/etc/ceph/ceph.conf
Feb  2 04:38:19 np0005604790 ceph-mon[74489]: from='client.? 192.168.122.100:0/2186345723' entity='client.admin' 
Feb  2 04:38:19 np0005604790 python3[78913]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d241d473-9fcb-5f74-b163-f1ca4454e7f1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:38:19 np0005604790 podman[78985]: 2026-02-02 09:38:19.92630554 +0000 UTC m=+0.036102470 container create 923d0d9a6f69e6301dc1512b79e43791c1a6de9df1d54f511e562a2e99b70019 (image=quay.io/ceph/ceph:v19, name=angry_galois, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Feb  2 04:38:19 np0005604790 systemd[1]: Started libpod-conmon-923d0d9a6f69e6301dc1512b79e43791c1a6de9df1d54f511e562a2e99b70019.scope.
Feb  2 04:38:19 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:38:19 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/771680e4db27f7841737acb64d9ac11dc888edfd01f710eb0249d22f365715c5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:38:19 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/771680e4db27f7841737acb64d9ac11dc888edfd01f710eb0249d22f365715c5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:38:19 np0005604790 podman[78985]: 2026-02-02 09:38:19.975396532 +0000 UTC m=+0.085193472 container init 923d0d9a6f69e6301dc1512b79e43791c1a6de9df1d54f511e562a2e99b70019 (image=quay.io/ceph/ceph:v19, name=angry_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb  2 04:38:19 np0005604790 podman[78985]: 2026-02-02 09:38:19.979901272 +0000 UTC m=+0.089698192 container start 923d0d9a6f69e6301dc1512b79e43791c1a6de9df1d54f511e562a2e99b70019 (image=quay.io/ceph/ceph:v19, name=angry_galois, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 04:38:19 np0005604790 podman[78985]: 2026-02-02 09:38:19.982681066 +0000 UTC m=+0.092478006 container attach 923d0d9a6f69e6301dc1512b79e43791c1a6de9df1d54f511e562a2e99b70019 (image=quay.io/ceph/ceph:v19, name=angry_galois, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:38:20 np0005604790 podman[78985]: 2026-02-02 09:38:19.910806798 +0000 UTC m=+0.020603758 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:38:20 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/d241d473-9fcb-5f74-b163-f1ca4454e7f1/config/ceph.client.admin.keyring
Feb  2 04:38:20 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/d241d473-9fcb-5f74-b163-f1ca4454e7f1/config/ceph.client.admin.keyring
Feb  2 04:38:20 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.14162 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb  2 04:38:20 np0005604790 angry_galois[79042]: 
Feb  2 04:38:20 np0005604790 angry_galois[79042]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Feb  2 04:38:20 np0005604790 systemd[1]: libpod-923d0d9a6f69e6301dc1512b79e43791c1a6de9df1d54f511e562a2e99b70019.scope: Deactivated successfully.
Feb  2 04:38:20 np0005604790 podman[78985]: 2026-02-02 09:38:20.385606229 +0000 UTC m=+0.495403179 container died 923d0d9a6f69e6301dc1512b79e43791c1a6de9df1d54f511e562a2e99b70019 (image=quay.io/ceph/ceph:v19, name=angry_galois, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Feb  2 04:38:20 np0005604790 systemd[1]: var-lib-containers-storage-overlay-771680e4db27f7841737acb64d9ac11dc888edfd01f710eb0249d22f365715c5-merged.mount: Deactivated successfully.
Feb  2 04:38:20 np0005604790 podman[78985]: 2026-02-02 09:38:20.435439752 +0000 UTC m=+0.545236702 container remove 923d0d9a6f69e6301dc1512b79e43791c1a6de9df1d54f511e562a2e99b70019 (image=quay.io/ceph/ceph:v19, name=angry_galois, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Feb  2 04:38:20 np0005604790 systemd[1]: libpod-conmon-923d0d9a6f69e6301dc1512b79e43791c1a6de9df1d54f511e562a2e99b70019.scope: Deactivated successfully.
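[Annotation: `angry_galois` is the `orch status --format json` invocation from the Ansible task at 04:38:19; its one-line payload ({"available": true, "backend": "cephadm", "paused": false, "workers": 10}) confirms the orchestrator is up before the playbook proceeds. The same check can be run by hand with the containerized wrapper used throughout this log, trimmed to the volume it actually needs (`jq` assumed present):

    podman run --rm --net=host --ipc=host \
      --volume /etc/ceph:/etc/ceph:z \
      --entrypoint ceph quay.io/ceph/ceph:v19 \
      --fsid d241d473-9fcb-5f74-b163-f1ca4454e7f1 \
      -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
      orch status --format json | jq .available
]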
Feb  2 04:38:20 np0005604790 ceph-mgr[74785]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Feb  2 04:38:20 np0005604790 ansible-async_wrapper.py[78909]: Module complete (78909)
Feb  2 04:38:20 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 04:38:20 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:20 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 04:38:20 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:20 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 04:38:20 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:20 np0005604790 ceph-mgr[74785]: [progress INFO root] update: starting ev 600f7a27-e6c6-4ce7-9269-1a26435a45b0 (Updating crash deployment (+1 -> 1))
Feb  2 04:38:20 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Feb  2 04:38:20 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Feb  2 04:38:20 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Feb  2 04:38:20 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 04:38:20 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 04:38:20 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Feb  2 04:38:20 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Feb  2 04:38:20 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 04:38:20 np0005604790 ceph-mon[74489]: Updating compute-0:/var/lib/ceph/d241d473-9fcb-5f74-b163-f1ca4454e7f1/config/ceph.conf
Feb  2 04:38:20 np0005604790 ceph-mon[74489]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Feb  2 04:38:20 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:20 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:20 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:20 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Feb  2 04:38:20 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Feb  2 04:38:21 np0005604790 podman[79492]: 2026-02-02 09:38:21.028446515 +0000 UTC m=+0.046063847 container create ee49907e398caab9e16b03a34da9afeb22f92012421b1ae71420bab1981a6988 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_hypatia, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb  2 04:38:21 np0005604790 systemd[1]: Started libpod-conmon-ee49907e398caab9e16b03a34da9afeb22f92012421b1ae71420bab1981a6988.scope.
Feb  2 04:38:21 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:38:21 np0005604790 podman[79492]: 2026-02-02 09:38:21.011988187 +0000 UTC m=+0.029605499 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:38:21 np0005604790 podman[79492]: 2026-02-02 09:38:21.11416826 +0000 UTC m=+0.131785652 container init ee49907e398caab9e16b03a34da9afeb22f92012421b1ae71420bab1981a6988 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_hypatia, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 04:38:21 np0005604790 podman[79492]: 2026-02-02 09:38:21.121032603 +0000 UTC m=+0.138649935 container start ee49907e398caab9e16b03a34da9afeb22f92012421b1ae71420bab1981a6988 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_hypatia, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:38:21 np0005604790 funny_hypatia[79509]: 167 167
Feb  2 04:38:21 np0005604790 systemd[1]: libpod-ee49907e398caab9e16b03a34da9afeb22f92012421b1ae71420bab1981a6988.scope: Deactivated successfully.
Feb  2 04:38:21 np0005604790 conmon[79509]: conmon ee49907e398caab9e16b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ee49907e398caab9e16b03a34da9afeb22f92012421b1ae71420bab1981a6988.scope/container/memory.events
Feb  2 04:38:21 np0005604790 podman[79492]: 2026-02-02 09:38:21.125610595 +0000 UTC m=+0.143227937 container attach ee49907e398caab9e16b03a34da9afeb22f92012421b1ae71420bab1981a6988 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_hypatia, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb  2 04:38:21 np0005604790 podman[79492]: 2026-02-02 09:38:21.126116489 +0000 UTC m=+0.143733821 container died ee49907e398caab9e16b03a34da9afeb22f92012421b1ae71420bab1981a6988 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_hypatia, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 04:38:21 np0005604790 python3[79494]: ansible-ansible.legacy.async_status Invoked with jid=j930676914269.78847 mode=status _async_dir=/root/.ansible_async
Feb  2 04:38:21 np0005604790 systemd[1]: var-lib-containers-storage-overlay-4d0ecbc86d5fa312451dc20db584f89a69645966d99da2a6b69e34caefa526e8-merged.mount: Deactivated successfully.
Feb  2 04:38:21 np0005604790 podman[79492]: 2026-02-02 09:38:21.164959344 +0000 UTC m=+0.182576656 container remove ee49907e398caab9e16b03a34da9afeb22f92012421b1ae71420bab1981a6988 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_hypatia, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb  2 04:38:21 np0005604790 systemd[1]: libpod-conmon-ee49907e398caab9e16b03a34da9afeb22f92012421b1ae71420bab1981a6988.scope: Deactivated successfully.
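[Annotation: the short-lived `funny_hypatia` container that printed `167 167` looks like cephadm's UID/GID probe: before deploying a daemon it runs the target image to learn which uid:gid the ceph user maps to inside it (167:167 in these RH-family builds), so host directories under /var/lib/ceph can be owned to match. The exact probe command is not captured in this log; a sketch of the equivalent check, under that assumption:

    podman run --rm --entrypoint stat \
      quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec \
      -c '%u %g' /var/lib/ceph
]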
Feb  2 04:38:21 np0005604790 systemd[1]: Reloading.
Feb  2 04:38:21 np0005604790 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 04:38:21 np0005604790 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 04:38:21 np0005604790 systemd[1]: Reloading.
Feb  2 04:38:21 np0005604790 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 04:38:21 np0005604790 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
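[Annotation: the two `Reloading.` passes are systemd daemon-reloads triggered while cephadm installs the unit files for the new crash daemon; the generator warnings repeated after each reload are pre-existing host conditions (a legacy SysV `network` init script and a non-executable rc.local), not Ceph issues. If rc.local is actually wanted on this host, the message itself names the fix:

    chmod +x /etc/rc.d/rc.local
]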
Feb  2 04:38:21 np0005604790 python3[79612]: ansible-ansible.legacy.async_status Invoked with jid=j930676914269.78847 mode=cleanup _async_dir=/root/.ansible_async
Feb  2 04:38:21 np0005604790 systemd[1]: Starting Ceph crash.compute-0 for d241d473-9fcb-5f74-b163-f1ca4454e7f1...
Feb  2 04:38:21 np0005604790 ceph-mon[74489]: Updating compute-0:/var/lib/ceph/d241d473-9fcb-5f74-b163-f1ca4454e7f1/config/ceph.client.admin.keyring
Feb  2 04:38:21 np0005604790 ceph-mon[74489]: Deploying daemon crash.compute-0 on compute-0
Feb  2 04:38:21 np0005604790 podman[79724]: 2026-02-02 09:38:21.980872141 +0000 UTC m=+0.057841423 container create 318ef38b81cae6eaaebf216bc863b04a4bef5216fba3bfba4e81b73ac8904bc2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-crash-compute-0, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 04:38:22 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09e8e6085b4a2eecf6ab7232b6a1785b68e895e99877332b0150de6baf16110b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:38:22 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09e8e6085b4a2eecf6ab7232b6a1785b68e895e99877332b0150de6baf16110b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:38:22 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09e8e6085b4a2eecf6ab7232b6a1785b68e895e99877332b0150de6baf16110b/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 04:38:22 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09e8e6085b4a2eecf6ab7232b6a1785b68e895e99877332b0150de6baf16110b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 04:38:22 np0005604790 podman[79724]: 2026-02-02 09:38:21.954748265 +0000 UTC m=+0.031717587 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:38:22 np0005604790 podman[79724]: 2026-02-02 09:38:22.051795501 +0000 UTC m=+0.128764834 container init 318ef38b81cae6eaaebf216bc863b04a4bef5216fba3bfba4e81b73ac8904bc2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-crash-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:38:22 np0005604790 podman[79724]: 2026-02-02 09:38:22.061928262 +0000 UTC m=+0.138897544 container start 318ef38b81cae6eaaebf216bc863b04a4bef5216fba3bfba4e81b73ac8904bc2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-crash-compute-0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 04:38:22 np0005604790 bash[79724]: 318ef38b81cae6eaaebf216bc863b04a4bef5216fba3bfba4e81b73ac8904bc2
Feb  2 04:38:22 np0005604790 systemd[1]: Started Ceph crash.compute-0 for d241d473-9fcb-5f74-b163-f1ca4454e7f1.
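[Annotation: "Starting/Started Ceph crash.compute-0" is the systemd side of the deployment the mgr logged at 04:38:20. cephadm wraps each daemon in a templated unit keyed on the cluster fsid, matching the container name seen above (`ceph-<fsid>-crash-compute-0`). To check the unit after the fact, with the name inferred from cephadm's `ceph-<fsid>@<daemon>` convention:

    systemctl status 'ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1@crash.compute-0.service'
]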
Feb  2 04:38:22 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 04:38:22 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:22 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 04:38:22 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-crash-compute-0[79739]: INFO:ceph-crash:pinging cluster to exercise our key
Feb  2 04:38:22 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:22 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Feb  2 04:38:22 np0005604790 python3[79731]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb  2 04:38:22 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:22 np0005604790 ceph-mgr[74785]: [progress INFO root] complete: finished ev 600f7a27-e6c6-4ce7-9269-1a26435a45b0 (Updating crash deployment (+1 -> 1))
Feb  2 04:38:22 np0005604790 ceph-mgr[74785]: [progress INFO root] Completed event 600f7a27-e6c6-4ce7-9269-1a26435a45b0 (Updating crash deployment (+1 -> 1)) in 2 seconds
Feb  2 04:38:22 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Feb  2 04:38:22 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:22 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Feb  2 04:38:22 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:22 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Feb  2 04:38:22 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:22 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-crash-compute-0[79739]: 2026-02-02T09:38:22.229+0000 7f1b36d78640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Feb  2 04:38:22 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-crash-compute-0[79739]: 2026-02-02T09:38:22.229+0000 7f1b36d78640 -1 AuthRegistry(0x7f1b300698f0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Feb  2 04:38:22 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-crash-compute-0[79739]: 2026-02-02T09:38:22.231+0000 7f1b36d78640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Feb  2 04:38:22 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-crash-compute-0[79739]: 2026-02-02T09:38:22.231+0000 7f1b36d78640 -1 AuthRegistry(0x7f1b36d76ff0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Feb  2 04:38:22 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-crash-compute-0[79739]: 2026-02-02T09:38:22.232+0000 7f1b34aed640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Feb  2 04:38:22 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-crash-compute-0[79739]: 2026-02-02T09:38:22.232+0000 7f1b36d78640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Feb  2 04:38:22 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-crash-compute-0[79739]: [errno 13] RADOS permission denied (error connecting to the cluster)
Feb  2 04:38:22 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-crash-compute-0[79739]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
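[Annotation: the ceph-crash startup noise above is cosmetic. Its "pinging cluster to exercise our key" probe first walks the default admin keyring paths, which are deliberately not mounted into the crash container, so it logs the `auth: unable to find a keyring` / `RADOS permission denied` sequence and then settles into its actual job ("monitoring path /var/lib/ceph/crash, delay 600s"). The key it will post crashes with is `client.crash.compute-0`, created at 04:38:20 and mounted at the keyring path visible in the xfs remount lines; a quick way to confirm what the container can and cannot see:

    # expect ceph.client.crash.compute-0.keyring present, no admin keyring
    podman exec ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-crash-compute-0 \
      ls -l /etc/ceph
]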
Feb  2 04:38:22 np0005604790 ceph-mgr[74785]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Feb  2 04:38:22 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb  2 04:38:22 np0005604790 ceph-mon[74489]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Feb  2 04:38:22 np0005604790 ceph-mgr[74785]: [progress INFO root] Writing back 1 completed events
Feb  2 04:38:22 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Feb  2 04:38:22 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:22 np0005604790 python3[79858]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d241d473-9fcb-5f74-b163-f1ca4454e7f1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:38:22 np0005604790 podman[79887]: 2026-02-02 09:38:22.714933027 +0000 UTC m=+0.044852017 container create 4aab17afe540c744381758af6ec45c161a57ffc51d1d6d963bbb660bb6260058 (image=quay.io/ceph/ceph:v19, name=focused_ptolemy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:38:22 np0005604790 systemd[1]: Started libpod-conmon-4aab17afe540c744381758af6ec45c161a57ffc51d1d6d963bbb660bb6260058.scope.
Feb  2 04:38:22 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:38:22 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d4827f2d997d8a8bae20cc26d13c5b75f87213667e172764c201fd30b5f7388/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:38:22 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d4827f2d997d8a8bae20cc26d13c5b75f87213667e172764c201fd30b5f7388/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:38:22 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d4827f2d997d8a8bae20cc26d13c5b75f87213667e172764c201fd30b5f7388/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb  2 04:38:22 np0005604790 podman[79887]: 2026-02-02 09:38:22.697332038 +0000 UTC m=+0.027251038 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:38:22 np0005604790 podman[79887]: 2026-02-02 09:38:22.823385567 +0000 UTC m=+0.153304577 container init 4aab17afe540c744381758af6ec45c161a57ffc51d1d6d963bbb660bb6260058 (image=quay.io/ceph/ceph:v19, name=focused_ptolemy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb  2 04:38:22 np0005604790 podman[79887]: 2026-02-02 09:38:22.828071572 +0000 UTC m=+0.157990552 container start 4aab17afe540c744381758af6ec45c161a57ffc51d1d6d963bbb660bb6260058 (image=quay.io/ceph/ceph:v19, name=focused_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 04:38:22 np0005604790 podman[79887]: 2026-02-02 09:38:22.841524951 +0000 UTC m=+0.171443931 container attach 4aab17afe540c744381758af6ec45c161a57ffc51d1d6d963bbb660bb6260058 (image=quay.io/ceph/ceph:v19, name=focused_ptolemy, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 04:38:22 np0005604790 podman[79950]: 2026-02-02 09:38:22.944553277 +0000 UTC m=+0.068798345 container exec 79ef7165b184aa21ab9e464efe33891b6304e1ba848414549f299d1b301d6783 (image=quay.io/ceph/ceph:v19, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mon-compute-0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 04:38:23 np0005604790 podman[79950]: 2026-02-02 09:38:23.038223204 +0000 UTC m=+0.162468272 container exec_died 79ef7165b184aa21ab9e464efe33891b6304e1ba848414549f299d1b301d6783 (image=quay.io/ceph/ceph:v19, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mon-compute-0, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Feb  2 04:38:23 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:23 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:23 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:23 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:23 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:23 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:23 np0005604790 ceph-mon[74489]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Feb  2 04:38:23 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:23 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb  2 04:38:23 np0005604790 focused_ptolemy[79932]: 
Feb  2 04:38:23 np0005604790 focused_ptolemy[79932]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Feb  2 04:38:23 np0005604790 systemd[1]: libpod-4aab17afe540c744381758af6ec45c161a57ffc51d1d6d963bbb660bb6260058.scope: Deactivated successfully.
Feb  2 04:38:23 np0005604790 conmon[79932]: conmon 4aab17afe540c7443817 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4aab17afe540c744381758af6ec45c161a57ffc51d1d6d963bbb660bb6260058.scope/container/memory.events
Feb  2 04:38:23 np0005604790 podman[79887]: 2026-02-02 09:38:23.217943134 +0000 UTC m=+0.547862124 container died 4aab17afe540c744381758af6ec45c161a57ffc51d1d6d963bbb660bb6260058 (image=quay.io/ceph/ceph:v19, name=focused_ptolemy, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:38:23 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 04:38:23 np0005604790 systemd[1]: var-lib-containers-storage-overlay-7d4827f2d997d8a8bae20cc26d13c5b75f87213667e172764c201fd30b5f7388-merged.mount: Deactivated successfully.
Feb  2 04:38:23 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:23 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 04:38:23 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:23 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 04:38:23 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 04:38:23 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 04:38:23 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb  2 04:38:23 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 04:38:23 np0005604790 podman[79887]: 2026-02-02 09:38:23.30670918 +0000 UTC m=+0.636628190 container remove 4aab17afe540c744381758af6ec45c161a57ffc51d1d6d963bbb660bb6260058 (image=quay.io/ceph/ceph:v19, name=focused_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Feb  2 04:38:23 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:23 np0005604790 systemd[1]: libpod-conmon-4aab17afe540c744381758af6ec45c161a57ffc51d1d6d963bbb660bb6260058.scope: Deactivated successfully.
Feb  2 04:38:23 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_user}] v 0)
Feb  2 04:38:23 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:23 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_password}] v 0)
Feb  2 04:38:23 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:23 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_user}] v 0)
Feb  2 04:38:23 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:23 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_password}] v 0)
Feb  2 04:38:23 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:23 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Feb  2 04:38:23 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Feb  2 04:38:23 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Feb  2 04:38:23 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Feb  2 04:38:23 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Feb  2 04:38:23 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Feb  2 04:38:23 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 04:38:23 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 04:38:23 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Feb  2 04:38:23 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Feb  2 04:38:23 np0005604790 python3[80149]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d241d473-9fcb-5f74-b163-f1ca4454e7f1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:38:23 np0005604790 podman[80166]: 2026-02-02 09:38:23.829449323 +0000 UTC m=+0.037356117 container create 34d3bc99256742c098d4f4b2a1fdab03be388f3ced0a86950f7167f59aa2b16e (image=quay.io/ceph/ceph:v19, name=pensive_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325)
Feb  2 04:38:23 np0005604790 systemd[1]: Started libpod-conmon-34d3bc99256742c098d4f4b2a1fdab03be388f3ced0a86950f7167f59aa2b16e.scope.
Feb  2 04:38:23 np0005604790 podman[80176]: 2026-02-02 09:38:23.863992264 +0000 UTC m=+0.049178302 container create 4cb7a32942aca81d444143f200e11e3e49c5fe4062cd61d5e6bce06a64332914 (image=quay.io/ceph/ceph:v19, name=jolly_bardeen, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Feb  2 04:38:23 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:38:23 np0005604790 systemd[1]: Started libpod-conmon-4cb7a32942aca81d444143f200e11e3e49c5fe4062cd61d5e6bce06a64332914.scope.
Feb  2 04:38:23 np0005604790 podman[80166]: 2026-02-02 09:38:23.905408378 +0000 UTC m=+0.113315202 container init 34d3bc99256742c098d4f4b2a1fdab03be388f3ced0a86950f7167f59aa2b16e (image=quay.io/ceph/ceph:v19, name=pensive_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325)
Feb  2 04:38:23 np0005604790 podman[80166]: 2026-02-02 09:38:23.814365441 +0000 UTC m=+0.022272235 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:38:23 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:38:23 np0005604790 podman[80166]: 2026-02-02 09:38:23.915289081 +0000 UTC m=+0.123195875 container start 34d3bc99256742c098d4f4b2a1fdab03be388f3ced0a86950f7167f59aa2b16e (image=quay.io/ceph/ceph:v19, name=pensive_noyce, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:38:23 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28ef20e406024645da2d3a283ba185e86bd6ff9e3ea8f2e35aedfb67afd0bbb1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:38:23 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28ef20e406024645da2d3a283ba185e86bd6ff9e3ea8f2e35aedfb67afd0bbb1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:38:23 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28ef20e406024645da2d3a283ba185e86bd6ff9e3ea8f2e35aedfb67afd0bbb1/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb  2 04:38:23 np0005604790 podman[80166]: 2026-02-02 09:38:23.922406841 +0000 UTC m=+0.130313655 container attach 34d3bc99256742c098d4f4b2a1fdab03be388f3ced0a86950f7167f59aa2b16e (image=quay.io/ceph/ceph:v19, name=pensive_noyce, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:38:23 np0005604790 pensive_noyce[80195]: 167 167
Feb  2 04:38:23 np0005604790 systemd[1]: libpod-34d3bc99256742c098d4f4b2a1fdab03be388f3ced0a86950f7167f59aa2b16e.scope: Deactivated successfully.
Feb  2 04:38:23 np0005604790 podman[80166]: 2026-02-02 09:38:23.926351406 +0000 UTC m=+0.134258190 container died 34d3bc99256742c098d4f4b2a1fdab03be388f3ced0a86950f7167f59aa2b16e (image=quay.io/ceph/ceph:v19, name=pensive_noyce, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:38:23 np0005604790 podman[80176]: 2026-02-02 09:38:23.840383784 +0000 UTC m=+0.025569842 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:38:23 np0005604790 podman[80176]: 2026-02-02 09:38:23.938444258 +0000 UTC m=+0.123630276 container init 4cb7a32942aca81d444143f200e11e3e49c5fe4062cd61d5e6bce06a64332914 (image=quay.io/ceph/ceph:v19, name=jolly_bardeen, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 04:38:23 np0005604790 podman[80176]: 2026-02-02 09:38:23.945974179 +0000 UTC m=+0.131160177 container start 4cb7a32942aca81d444143f200e11e3e49c5fe4062cd61d5e6bce06a64332914 (image=quay.io/ceph/ceph:v19, name=jolly_bardeen, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 04:38:23 np0005604790 systemd[1]: var-lib-containers-storage-overlay-900a01599ce3bc62ee9ef64a0e523d48eacc144b346c516275e7d02e63f9e3fa-merged.mount: Deactivated successfully.
Feb  2 04:38:23 np0005604790 podman[80176]: 2026-02-02 09:38:23.948695781 +0000 UTC m=+0.133881779 container attach 4cb7a32942aca81d444143f200e11e3e49c5fe4062cd61d5e6bce06a64332914 (image=quay.io/ceph/ceph:v19, name=jolly_bardeen, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 04:38:23 np0005604790 podman[80166]: 2026-02-02 09:38:23.976918514 +0000 UTC m=+0.184825308 container remove 34d3bc99256742c098d4f4b2a1fdab03be388f3ced0a86950f7167f59aa2b16e (image=quay.io/ceph/ceph:v19, name=pensive_noyce, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True)
Feb  2 04:38:23 np0005604790 systemd[1]: libpod-conmon-34d3bc99256742c098d4f4b2a1fdab03be388f3ced0a86950f7167f59aa2b16e.scope: Deactivated successfully.
Feb  2 04:38:24 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 04:38:24 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:24 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 04:38:24 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:24 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.djvyfo (unknown last config time)...
Feb  2 04:38:24 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.djvyfo (unknown last config time)...
Feb  2 04:38:24 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.djvyfo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Feb  2 04:38:24 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.djvyfo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Feb  2 04:38:24 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Feb  2 04:38:24 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "mgr services"}]: dispatch
Feb  2 04:38:24 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 04:38:24 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 04:38:24 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.djvyfo on compute-0
Feb  2 04:38:24 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.djvyfo on compute-0
Feb  2 04:38:24 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:24 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:24 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb  2 04:38:24 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:24 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:24 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:24 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:24 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:24 np0005604790 ceph-mon[74489]: Reconfiguring mon.compute-0 (unknown last config time)...
Feb  2 04:38:24 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Feb  2 04:38:24 np0005604790 ceph-mon[74489]: Reconfiguring daemon mon.compute-0 on compute-0
Feb  2 04:38:24 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:24 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:24 np0005604790 ceph-mon[74489]: Reconfiguring mgr.compute-0.djvyfo (unknown last config time)...
Feb  2 04:38:24 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.djvyfo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Feb  2 04:38:24 np0005604790 ceph-mon[74489]: Reconfiguring daemon mgr.compute-0.djvyfo on compute-0
Feb  2 04:38:24 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0)
Feb  2 04:38:24 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2002590266' entity='client.admin' 
Feb  2 04:38:24 np0005604790 systemd[1]: libpod-4cb7a32942aca81d444143f200e11e3e49c5fe4062cd61d5e6bce06a64332914.scope: Deactivated successfully.
Feb  2 04:38:24 np0005604790 podman[80176]: 2026-02-02 09:38:24.301119415 +0000 UTC m=+0.486305413 container died 4cb7a32942aca81d444143f200e11e3e49c5fe4062cd61d5e6bce06a64332914 (image=quay.io/ceph/ceph:v19, name=jolly_bardeen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 04:38:24 np0005604790 systemd[1]: var-lib-containers-storage-overlay-28ef20e406024645da2d3a283ba185e86bd6ff9e3ea8f2e35aedfb67afd0bbb1-merged.mount: Deactivated successfully.
Feb  2 04:38:24 np0005604790 podman[80176]: 2026-02-02 09:38:24.332046009 +0000 UTC m=+0.517232007 container remove 4cb7a32942aca81d444143f200e11e3e49c5fe4062cd61d5e6bce06a64332914 (image=quay.io/ceph/ceph:v19, name=jolly_bardeen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid)
Feb  2 04:38:24 np0005604790 systemd[1]: libpod-conmon-4cb7a32942aca81d444143f200e11e3e49c5fe4062cd61d5e6bce06a64332914.scope: Deactivated successfully.
Feb  2 04:38:24 np0005604790 podman[80317]: 2026-02-02 09:38:24.396531988 +0000 UTC m=+0.039936765 container create ba858d62abc166b0b816896762871ca9443e80e72e47e52f53bb8c604cee9505 (image=quay.io/ceph/ceph:v19, name=great_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:38:24 np0005604790 systemd[1]: Started libpod-conmon-ba858d62abc166b0b816896762871ca9443e80e72e47e52f53bb8c604cee9505.scope.
Feb  2 04:38:24 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:38:24 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb  2 04:38:24 np0005604790 podman[80317]: 2026-02-02 09:38:24.461061958 +0000 UTC m=+0.104466755 container init ba858d62abc166b0b816896762871ca9443e80e72e47e52f53bb8c604cee9505 (image=quay.io/ceph/ceph:v19, name=great_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:38:24 np0005604790 podman[80317]: 2026-02-02 09:38:24.467612663 +0000 UTC m=+0.111017420 container start ba858d62abc166b0b816896762871ca9443e80e72e47e52f53bb8c604cee9505 (image=quay.io/ceph/ceph:v19, name=great_chaum, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:38:24 np0005604790 systemd[1]: libpod-ba858d62abc166b0b816896762871ca9443e80e72e47e52f53bb8c604cee9505.scope: Deactivated successfully.
Feb  2 04:38:24 np0005604790 conmon[80333]: conmon ba858d62abc166b0b816 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ba858d62abc166b0b816896762871ca9443e80e72e47e52f53bb8c604cee9505.scope/container/memory.events
Feb  2 04:38:24 np0005604790 podman[80317]: 2026-02-02 09:38:24.470619623 +0000 UTC m=+0.114024380 container attach ba858d62abc166b0b816896762871ca9443e80e72e47e52f53bb8c604cee9505 (image=quay.io/ceph/ceph:v19, name=great_chaum, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 04:38:24 np0005604790 great_chaum[80333]: 167 167
Feb  2 04:38:24 np0005604790 podman[80317]: 2026-02-02 09:38:24.471819295 +0000 UTC m=+0.115224042 container died ba858d62abc166b0b816896762871ca9443e80e72e47e52f53bb8c604cee9505 (image=quay.io/ceph/ceph:v19, name=great_chaum, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb  2 04:38:24 np0005604790 podman[80317]: 2026-02-02 09:38:24.376637348 +0000 UTC m=+0.020042145 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:38:24 np0005604790 podman[80317]: 2026-02-02 09:38:24.500863919 +0000 UTC m=+0.144268666 container remove ba858d62abc166b0b816896762871ca9443e80e72e47e52f53bb8c604cee9505 (image=quay.io/ceph/ceph:v19, name=great_chaum, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 04:38:24 np0005604790 systemd[1]: libpod-conmon-ba858d62abc166b0b816896762871ca9443e80e72e47e52f53bb8c604cee9505.scope: Deactivated successfully.
Feb  2 04:38:24 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 04:38:24 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:24 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 04:38:24 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:24 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 04:38:24 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 04:38:24 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 04:38:24 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb  2 04:38:24 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 04:38:24 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:24 np0005604790 python3[80362]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d241d473-9fcb-5f74-b163-f1ca4454e7f1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:38:24 np0005604790 podman[80396]: 2026-02-02 09:38:24.62362263 +0000 UTC m=+0.031018428 container create ddabcc116319cc672dbc440adce7ac9d21feae6f6d904605e9206d93a7b33324 (image=quay.io/ceph/ceph:v19, name=trusting_varahamihira, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:38:24 np0005604790 systemd[1]: Started libpod-conmon-ddabcc116319cc672dbc440adce7ac9d21feae6f6d904605e9206d93a7b33324.scope.
Feb  2 04:38:24 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:38:24 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c90cca633870257994bc1a991c53264f8602bfd8745af2aa34b6ede20989825/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb  2 04:38:24 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c90cca633870257994bc1a991c53264f8602bfd8745af2aa34b6ede20989825/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:38:24 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c90cca633870257994bc1a991c53264f8602bfd8745af2aa34b6ede20989825/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:38:24 np0005604790 podman[80396]: 2026-02-02 09:38:24.692124146 +0000 UTC m=+0.099519964 container init ddabcc116319cc672dbc440adce7ac9d21feae6f6d904605e9206d93a7b33324 (image=quay.io/ceph/ceph:v19, name=trusting_varahamihira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 04:38:24 np0005604790 podman[80396]: 2026-02-02 09:38:24.698722362 +0000 UTC m=+0.106118160 container start ddabcc116319cc672dbc440adce7ac9d21feae6f6d904605e9206d93a7b33324 (image=quay.io/ceph/ceph:v19, name=trusting_varahamihira, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Feb  2 04:38:24 np0005604790 podman[80396]: 2026-02-02 09:38:24.702001799 +0000 UTC m=+0.109397597 container attach ddabcc116319cc672dbc440adce7ac9d21feae6f6d904605e9206d93a7b33324 (image=quay.io/ceph/ceph:v19, name=trusting_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default)
Feb  2 04:38:24 np0005604790 podman[80396]: 2026-02-02 09:38:24.609392651 +0000 UTC m=+0.016788469 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:38:24 np0005604790 ansible-async_wrapper.py[78907]: Done in kid B.
Feb  2 04:38:24 np0005604790 systemd[1]: var-lib-containers-storage-overlay-a8004c06ae5931539a059197290a2b900eeaff8ac9c33bc0ee815b28caf7938e-merged.mount: Deactivated successfully.
Feb  2 04:38:25 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0)
Feb  2 04:38:25 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/612492428' entity='client.admin' 
Feb  2 04:38:25 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 04:38:25 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 04:38:25 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 04:38:25 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb  2 04:38:25 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 04:38:25 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:25 np0005604790 systemd[1]: libpod-ddabcc116319cc672dbc440adce7ac9d21feae6f6d904605e9206d93a7b33324.scope: Deactivated successfully.
Feb  2 04:38:25 np0005604790 podman[80396]: 2026-02-02 09:38:25.132532494 +0000 UTC m=+0.539928382 container died ddabcc116319cc672dbc440adce7ac9d21feae6f6d904605e9206d93a7b33324 (image=quay.io/ceph/ceph:v19, name=trusting_varahamihira, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:38:25 np0005604790 systemd[1]: var-lib-containers-storage-overlay-5c90cca633870257994bc1a991c53264f8602bfd8745af2aa34b6ede20989825-merged.mount: Deactivated successfully.
Feb  2 04:38:25 np0005604790 podman[80396]: 2026-02-02 09:38:25.188224629 +0000 UTC m=+0.595620437 container remove ddabcc116319cc672dbc440adce7ac9d21feae6f6d904605e9206d93a7b33324 (image=quay.io/ceph/ceph:v19, name=trusting_varahamihira, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb  2 04:38:25 np0005604790 systemd[1]: libpod-conmon-ddabcc116319cc672dbc440adce7ac9d21feae6f6d904605e9206d93a7b33324.scope: Deactivated successfully.
Feb  2 04:38:25 np0005604790 ceph-mon[74489]: from='client.? 192.168.122.100:0/2002590266' entity='client.admin' 
Feb  2 04:38:25 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:25 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:25 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb  2 04:38:25 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:25 np0005604790 ceph-mon[74489]: from='client.? 192.168.122.100:0/612492428' entity='client.admin' 
Feb  2 04:38:25 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb  2 04:38:25 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:25 np0005604790 python3[80505]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d241d473-9fcb-5f74-b163-f1ca4454e7f1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:38:25 np0005604790 podman[80506]: 2026-02-02 09:38:25.636669212 +0000 UTC m=+0.044806516 container create 6cece5e1c845709f58461d320b70f568aaf89ad10ed2e785c3d74935ea33c5f8 (image=quay.io/ceph/ceph:v19, name=naughty_franklin, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb  2 04:38:25 np0005604790 systemd[1]: Started libpod-conmon-6cece5e1c845709f58461d320b70f568aaf89ad10ed2e785c3d74935ea33c5f8.scope.
Feb  2 04:38:25 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:38:25 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5da4e03f5017e7eec5c3c0921eff05e5c4dfd22652bfc7256133f6abd2d7547c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:38:25 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5da4e03f5017e7eec5c3c0921eff05e5c4dfd22652bfc7256133f6abd2d7547c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:38:25 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5da4e03f5017e7eec5c3c0921eff05e5c4dfd22652bfc7256133f6abd2d7547c/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb  2 04:38:25 np0005604790 podman[80506]: 2026-02-02 09:38:25.706122183 +0000 UTC m=+0.114259497 container init 6cece5e1c845709f58461d320b70f568aaf89ad10ed2e785c3d74935ea33c5f8 (image=quay.io/ceph/ceph:v19, name=naughty_franklin, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Feb  2 04:38:25 np0005604790 podman[80506]: 2026-02-02 09:38:25.612246271 +0000 UTC m=+0.020383675 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:38:25 np0005604790 podman[80506]: 2026-02-02 09:38:25.712506083 +0000 UTC m=+0.120643387 container start 6cece5e1c845709f58461d320b70f568aaf89ad10ed2e785c3d74935ea33c5f8 (image=quay.io/ceph/ceph:v19, name=naughty_franklin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 04:38:25 np0005604790 podman[80506]: 2026-02-02 09:38:25.716398497 +0000 UTC m=+0.124535801 container attach 6cece5e1c845709f58461d320b70f568aaf89ad10ed2e785c3d74935ea33c5f8 (image=quay.io/ceph/ceph:v19, name=naughty_franklin, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Feb  2 04:38:25 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 04:38:26 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0)
Feb  2 04:38:26 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3821419342' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Feb  2 04:38:26 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Feb  2 04:38:26 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Feb  2 04:38:26 np0005604790 ceph-mon[74489]: from='client.? 192.168.122.100:0/3821419342' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Feb  2 04:38:26 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3821419342' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Feb  2 04:38:26 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Feb  2 04:38:26 np0005604790 naughty_franklin[80522]: set require_min_compat_client to mimic
Feb  2 04:38:26 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Feb  2 04:38:26 np0005604790 systemd[1]: libpod-6cece5e1c845709f58461d320b70f568aaf89ad10ed2e785c3d74935ea33c5f8.scope: Deactivated successfully.
Feb  2 04:38:26 np0005604790 podman[80506]: 2026-02-02 09:38:26.327905086 +0000 UTC m=+0.736042390 container died 6cece5e1c845709f58461d320b70f568aaf89ad10ed2e785c3d74935ea33c5f8 (image=quay.io/ceph/ceph:v19, name=naughty_franklin, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 04:38:26 np0005604790 systemd[1]: var-lib-containers-storage-overlay-5da4e03f5017e7eec5c3c0921eff05e5c4dfd22652bfc7256133f6abd2d7547c-merged.mount: Deactivated successfully.
Feb  2 04:38:26 np0005604790 podman[80506]: 2026-02-02 09:38:26.362509168 +0000 UTC m=+0.770646482 container remove 6cece5e1c845709f58461d320b70f568aaf89ad10ed2e785c3d74935ea33c5f8 (image=quay.io/ceph/ceph:v19, name=naughty_franklin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 04:38:26 np0005604790 systemd[1]: libpod-conmon-6cece5e1c845709f58461d320b70f568aaf89ad10ed2e785c3d74935ea33c5f8.scope: Deactivated successfully.
Feb  2 04:38:26 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb  2 04:38:26 np0005604790 python3[80583]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d241d473-9fcb-5f74-b163-f1ca4454e7f1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
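The task above applies a service spec through a one-shot ceph container. The spec file itself is never printed in this log, but a plausible reconstruction can be pieced together from fragments echoed later: the mon and mgr placements are quoted verbatim from the failed-apply dump at 04:38:52, while the osd section's data_devices filter is an assumption, since the log only names the service osd.default_drive_group and its placement.

    # hypothetical reconstruction of /home/ceph-admin/specs/ceph_spec.yaml
    cat <<'EOF' > /home/ceph-admin/specs/ceph_spec.yaml
    service_type: mon
    service_name: mon
    placement:
      hosts:
      - compute-0
      - compute-1
      - compute-2
    ---
    service_type: mgr
    service_name: mgr
    placement:
      hosts:
      - compute-0
      - compute-1
      - compute-2
    ---
    service_type: osd
    service_id: default_drive_group
    placement:
      hosts:
      - compute-0
      - compute-1
      - compute-2
    data_devices:
      all: true   # assumption: the actual drive filter is not shown in this log
    EOF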
Feb  2 04:38:27 np0005604790 podman[80584]: 2026-02-02 09:38:27.0386383 +0000 UTC m=+0.051523825 container create 5d8df8aa296813288e56273c49612c8bb0969fbe9cf48e847c33f415d834367d (image=quay.io/ceph/ceph:v19, name=vigorous_matsumoto, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:38:27 np0005604790 systemd[1]: Started libpod-conmon-5d8df8aa296813288e56273c49612c8bb0969fbe9cf48e847c33f415d834367d.scope.
Feb  2 04:38:27 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:38:27 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e3c66a4998254b731d370cf50babe323ac76abe46f19ec34532bf3eb0a02da2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:38:27 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e3c66a4998254b731d370cf50babe323ac76abe46f19ec34532bf3eb0a02da2/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb  2 04:38:27 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e3c66a4998254b731d370cf50babe323ac76abe46f19ec34532bf3eb0a02da2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:38:27 np0005604790 podman[80584]: 2026-02-02 09:38:27.010923661 +0000 UTC m=+0.023809276 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:38:27 np0005604790 podman[80584]: 2026-02-02 09:38:27.114640925 +0000 UTC m=+0.127526540 container init 5d8df8aa296813288e56273c49612c8bb0969fbe9cf48e847c33f415d834367d (image=quay.io/ceph/ceph:v19, name=vigorous_matsumoto, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:38:27 np0005604790 podman[80584]: 2026-02-02 09:38:27.12154922 +0000 UTC m=+0.134434785 container start 5d8df8aa296813288e56273c49612c8bb0969fbe9cf48e847c33f415d834367d (image=quay.io/ceph/ceph:v19, name=vigorous_matsumoto, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Feb  2 04:38:27 np0005604790 podman[80584]: 2026-02-02 09:38:27.125822503 +0000 UTC m=+0.138708118 container attach 5d8df8aa296813288e56273c49612c8bb0969fbe9cf48e847c33f415d834367d (image=quay.io/ceph/ceph:v19, name=vigorous_matsumoto, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Feb  2 04:38:27 np0005604790 ceph-mon[74489]: from='client.? 192.168.122.100:0/3821419342' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Feb  2 04:38:27 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 04:38:27 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Feb  2 04:38:27 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:27 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Feb  2 04:38:27 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:27 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Feb  2 04:38:27 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:27 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Feb  2 04:38:27 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:27 np0005604790 ceph-mgr[74785]: [cephadm INFO root] Added host compute-0
Feb  2 04:38:27 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Added host compute-0
Feb  2 04:38:27 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 04:38:27 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 04:38:27 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 04:38:27 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb  2 04:38:27 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 04:38:28 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:28 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb  2 04:38:28 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:28 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:28 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:28 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:28 np0005604790 ceph-mon[74489]: Added host compute-0
Feb  2 04:38:28 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb  2 04:38:28 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:29 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-1
Feb  2 04:38:29 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-1
Feb  2 04:38:30 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb  2 04:38:30 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 04:38:31 np0005604790 ceph-mon[74489]: Deploying cephadm binary to compute-1
Feb  2 04:38:32 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb  2 04:38:32 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:38:32 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:38:32 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:38:32 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:38:32 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:38:32 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:38:33 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Feb  2 04:38:33 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:33 np0005604790 ceph-mgr[74785]: [cephadm INFO root] Added host compute-1
Feb  2 04:38:33 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Added host compute-1
Feb  2 04:38:33 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Feb  2 04:38:33 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:33 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Feb  2 04:38:33 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:34 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:34 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:34 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:34 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-2
Feb  2 04:38:34 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-2
Feb  2 04:38:34 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb  2 04:38:34 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Feb  2 04:38:34 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:35 np0005604790 ceph-mon[74489]: Added host compute-1
Feb  2 04:38:35 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:35 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 04:38:36 np0005604790 ceph-mon[74489]: Deploying cephadm binary to compute-2
Feb  2 04:38:36 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb  2 04:38:38 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Feb  2 04:38:38 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:38 np0005604790 ceph-mgr[74785]: [cephadm INFO root] Added host compute-2
Feb  2 04:38:38 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Added host compute-2
Feb  2 04:38:38 np0005604790 ceph-mgr[74785]: [cephadm INFO root] Saving service mon spec with placement compute-0;compute-1;compute-2
Feb  2 04:38:38 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0;compute-1;compute-2
Feb  2 04:38:38 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Feb  2 04:38:38 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:38 np0005604790 ceph-mgr[74785]: [cephadm INFO root] Saving service mgr spec with placement compute-0;compute-1;compute-2
Feb  2 04:38:38 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0;compute-1;compute-2
Feb  2 04:38:38 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Feb  2 04:38:38 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:38 np0005604790 ceph-mgr[74785]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Feb  2 04:38:38 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Feb  2 04:38:38 np0005604790 ceph-mgr[74785]: [cephadm INFO root] Marking host: compute-1 for OSDSpec preview refresh.
Feb  2 04:38:38 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Marking host: compute-1 for OSDSpec preview refresh.
Feb  2 04:38:38 np0005604790 ceph-mgr[74785]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Feb  2 04:38:38 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Feb  2 04:38:38 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0)
Feb  2 04:38:38 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:38 np0005604790 vigorous_matsumoto[80599]: Added host 'compute-0' with addr '192.168.122.100'
Feb  2 04:38:38 np0005604790 vigorous_matsumoto[80599]: Added host 'compute-1' with addr '192.168.122.101'
Feb  2 04:38:38 np0005604790 vigorous_matsumoto[80599]: Added host 'compute-2' with addr '192.168.122.102'
Feb  2 04:38:38 np0005604790 vigorous_matsumoto[80599]: Scheduled mon update...
Feb  2 04:38:38 np0005604790 vigorous_matsumoto[80599]: Scheduled mgr update...
Feb  2 04:38:38 np0005604790 vigorous_matsumoto[80599]: Scheduled osd.default_drive_group update...
Feb  2 04:38:38 np0005604790 systemd[1]: libpod-5d8df8aa296813288e56273c49612c8bb0969fbe9cf48e847c33f415d834367d.scope: Deactivated successfully.
Feb  2 04:38:38 np0005604790 podman[80584]: 2026-02-02 09:38:38.407983624 +0000 UTC m=+11.420869149 container died 5d8df8aa296813288e56273c49612c8bb0969fbe9cf48e847c33f415d834367d (image=quay.io/ceph/ceph:v19, name=vigorous_matsumoto, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb  2 04:38:38 np0005604790 systemd[1]: var-lib-containers-storage-overlay-2e3c66a4998254b731d370cf50babe323ac76abe46f19ec34532bf3eb0a02da2-merged.mount: Deactivated successfully.
Feb  2 04:38:38 np0005604790 podman[80584]: 2026-02-02 09:38:38.439960847 +0000 UTC m=+11.452846372 container remove 5d8df8aa296813288e56273c49612c8bb0969fbe9cf48e847c33f415d834367d (image=quay.io/ceph/ceph:v19, name=vigorous_matsumoto, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Feb  2 04:38:38 np0005604790 systemd[1]: libpod-conmon-5d8df8aa296813288e56273c49612c8bb0969fbe9cf48e847c33f415d834367d.scope: Deactivated successfully.
Feb  2 04:38:38 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb  2 04:38:38 np0005604790 python3[80755]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d241d473-9fcb-5f74-b163-f1ca4454e7f1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:38:38 np0005604790 podman[80757]: 2026-02-02 09:38:38.946013894 +0000 UTC m=+0.054987046 container create 27651a1e80fd185b281d8bf1da409f7694eed221e59a84022b71f18d690d92ab (image=quay.io/ceph/ceph:v19, name=friendly_bouman, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb  2 04:38:38 np0005604790 systemd[1]: Started libpod-conmon-27651a1e80fd185b281d8bf1da409f7694eed221e59a84022b71f18d690d92ab.scope.
Feb  2 04:38:39 np0005604790 podman[80757]: 2026-02-02 09:38:38.923694329 +0000 UTC m=+0.032667521 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:38:39 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:38:39 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75c30aea9053d975834c205d91baab45a4a74fc0542644b6c13cb92014a8bfb2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:38:39 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75c30aea9053d975834c205d91baab45a4a74fc0542644b6c13cb92014a8bfb2/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb  2 04:38:39 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75c30aea9053d975834c205d91baab45a4a74fc0542644b6c13cb92014a8bfb2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:38:39 np0005604790 podman[80757]: 2026-02-02 09:38:39.045051484 +0000 UTC m=+0.154024666 container init 27651a1e80fd185b281d8bf1da409f7694eed221e59a84022b71f18d690d92ab (image=quay.io/ceph/ceph:v19, name=friendly_bouman, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 04:38:39 np0005604790 podman[80757]: 2026-02-02 09:38:39.049070991 +0000 UTC m=+0.158044143 container start 27651a1e80fd185b281d8bf1da409f7694eed221e59a84022b71f18d690d92ab (image=quay.io/ceph/ceph:v19, name=friendly_bouman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb  2 04:38:39 np0005604790 podman[80757]: 2026-02-02 09:38:39.052698848 +0000 UTC m=+0.161672020 container attach 27651a1e80fd185b281d8bf1da409f7694eed221e59a84022b71f18d690d92ab (image=quay.io/ceph/ceph:v19, name=friendly_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 04:38:39 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:39 np0005604790 ceph-mon[74489]: Added host compute-2
Feb  2 04:38:39 np0005604790 ceph-mon[74489]: Saving service mon spec with placement compute-0;compute-1;compute-2
Feb  2 04:38:39 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:39 np0005604790 ceph-mon[74489]: Saving service mgr spec with placement compute-0;compute-1;compute-2
Feb  2 04:38:39 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:39 np0005604790 ceph-mon[74489]: Marking host: compute-0 for OSDSpec preview refresh.
Feb  2 04:38:39 np0005604790 ceph-mon[74489]: Marking host: compute-1 for OSDSpec preview refresh.
Feb  2 04:38:39 np0005604790 ceph-mon[74489]: Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Feb  2 04:38:39 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:39 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Feb  2 04:38:39 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2104959560' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Feb  2 04:38:39 np0005604790 friendly_bouman[80773]: 
Feb  2 04:38:39 np0005604790 friendly_bouman[80773]: {"fsid":"d241d473-9fcb-5f74-b163-f1ca4454e7f1","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":53,"monmap":{"epoch":1,"min_mon_release_name":"squid","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2026-02-02T09:37:43:907997+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":1,"modified":"2026-02-02T09:37:43.909949+0000","services":{}},"progress_events":{}}
Feb  2 04:38:39 np0005604790 systemd[1]: libpod-27651a1e80fd185b281d8bf1da409f7694eed221e59a84022b71f18d690d92ab.scope: Deactivated successfully.
Feb  2 04:38:39 np0005604790 podman[80757]: 2026-02-02 09:38:39.494350989 +0000 UTC m=+0.603324141 container died 27651a1e80fd185b281d8bf1da409f7694eed221e59a84022b71f18d690d92ab (image=quay.io/ceph/ceph:v19, name=friendly_bouman, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:38:39 np0005604790 systemd[1]: var-lib-containers-storage-overlay-75c30aea9053d975834c205d91baab45a4a74fc0542644b6c13cb92014a8bfb2-merged.mount: Deactivated successfully.
Feb  2 04:38:39 np0005604790 podman[80757]: 2026-02-02 09:38:39.533247826 +0000 UTC m=+0.642220978 container remove 27651a1e80fd185b281d8bf1da409f7694eed221e59a84022b71f18d690d92ab (image=quay.io/ceph/ceph:v19, name=friendly_bouman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Feb  2 04:38:39 np0005604790 systemd[1]: libpod-conmon-27651a1e80fd185b281d8bf1da409f7694eed221e59a84022b71f18d690d92ab.scope: Deactivated successfully.
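The friendly_bouman JSON above is consumed by the ansible task logged at 04:38:38, which pipes it through jq to count running OSDs. Stripped of the podman wrapper, the check reduces to:

    # same extraction the play performs; prints 0 here because no OSDs exist yet
    ceph status --format json | jq .osdmap.num_up_osds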
Feb  2 04:38:40 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb  2 04:38:40 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 04:38:42 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb  2 04:38:44 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb  2 04:38:45 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 04:38:46 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb  2 04:38:48 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb  2 04:38:50 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb  2 04:38:50 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Feb  2 04:38:50 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:50 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Feb  2 04:38:50 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:50 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Feb  2 04:38:50 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:50 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Feb  2 04:38:50 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:50 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Feb  2 04:38:50 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Feb  2 04:38:50 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 04:38:50 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 04:38:50 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 04:38:50 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb  2 04:38:50 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Feb  2 04:38:50 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Feb  2 04:38:50 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 04:38:51 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/d241d473-9fcb-5f74-b163-f1ca4454e7f1/config/ceph.conf
Feb  2 04:38:51 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/d241d473-9fcb-5f74-b163-f1ca4454e7f1/config/ceph.conf
Feb  2 04:38:51 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:51 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:51 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:51 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:51 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Feb  2 04:38:51 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb  2 04:38:51 np0005604790 ceph-mon[74489]: Updating compute-1:/etc/ceph/ceph.conf
Feb  2 04:38:51 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Feb  2 04:38:51 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Feb  2 04:38:52 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/d241d473-9fcb-5f74-b163-f1ca4454e7f1/config/ceph.client.admin.keyring
Feb  2 04:38:52 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/d241d473-9fcb-5f74-b163-f1ca4454e7f1/config/ceph.client.admin.keyring
Feb  2 04:38:52 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb  2 04:38:52 np0005604790 ceph-mon[74489]: Updating compute-1:/var/lib/ceph/d241d473-9fcb-5f74-b163-f1ca4454e7f1/config/ceph.conf
Feb  2 04:38:52 np0005604790 ceph-mon[74489]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Feb  2 04:38:52 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Feb  2 04:38:52 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:52 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Feb  2 04:38:52 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:52 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 04:38:52 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:52 np0005604790 ceph-mgr[74785]: [cephadm ERROR cephadm.serve] Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon#012service_name: mon#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Feb  2 04:38:52 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [ERR] : Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon#012service_name: mon#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Feb  2 04:38:52 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb  2 04:38:52 np0005604790 ceph-mgr[74785]: [cephadm ERROR cephadm.serve] Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr#012service_name: mgr#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Feb  2 04:38:52 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [ERR] : Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr#012service_name: mgr#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Feb  2 04:38:52 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb  2 04:38:52 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:38:52.930+0000 7f091e4b8640 -1 log_channel(cephadm) log [ERR] : Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
Feb  2 04:38:52 np0005604790 ceph-mgr[74785]: [progress INFO root] update: starting ev ad3f49fa-be0a-4d3c-b8eb-3b13d1945bb2 (Updating crash deployment (+1 -> 2))
Feb  2 04:38:52 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Feb  2 04:38:52 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Feb  2 04:38:52 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: service_name: mon
Feb  2 04:38:52 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: placement:
Feb  2 04:38:52 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]:  hosts:
Feb  2 04:38:52 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]:  - compute-0
Feb  2 04:38:52 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]:  - compute-1
Feb  2 04:38:52 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]:  - compute-2
Feb  2 04:38:52 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Feb  2 04:38:52 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:38:52.931+0000 7f091e4b8640 -1 log_channel(cephadm) log [ERR] : Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
Feb  2 04:38:52 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: service_name: mgr
Feb  2 04:38:52 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: placement:
Feb  2 04:38:52 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]:  hosts:
Feb  2 04:38:52 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]:  - compute-0
Feb  2 04:38:52 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]:  - compute-1
Feb  2 04:38:52 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]:  - compute-2
Feb  2 04:38:52 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Feb  2 04:38:52 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Feb  2 04:38:52 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 04:38:52 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 04:38:52 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-1 on compute-1
Feb  2 04:38:52 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-1 on compute-1
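The crash.compute-1 deployment above begins with the auth get-or-create dispatched at 04:38:52. The equivalent CLI form of that mon_command, with the capabilities taken from the audit entry, is:

    # creates or fetches the keyring used by the crash daemon on compute-1
    ceph auth get-or-create client.crash.compute-1 mon 'profile crash' mgr 'profile crash'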
Feb  2 04:38:53 np0005604790 ceph-mon[74489]: Updating compute-1:/var/lib/ceph/d241d473-9fcb-5f74-b163-f1ca4454e7f1/config/ceph.client.admin.keyring
Feb  2 04:38:53 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:53 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:53 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:53 np0005604790 ceph-mon[74489]: Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon#012service_name: mon#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Feb  2 04:38:53 np0005604790 ceph-mon[74489]: Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr#012service_name: mgr#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Feb  2 04:38:53 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Feb  2 04:38:53 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Feb  2 04:38:53 np0005604790 ceph-mon[74489]: Deploying daemon crash.compute-1 on compute-1
Feb  2 04:38:53 np0005604790 ceph-mon[74489]: log_channel(cluster) log [WRN] : Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
Feb  2 04:38:54 np0005604790 ceph-mon[74489]: Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
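Note the apparent contradiction behind this warning: compute-2 was reported added at 04:38:38, yet the serve loop rejects the mon and mgr specs at 04:38:52 with "Unknown hosts", which suggests the specs were evaluated against a host inventory that had not yet been refreshed. A hedged way to confirm the state and clear CEPHADM_APPLY_SPEC_FAIL once the inventory settles (re-applying the same spec file is an assumption about the intended fix; the orchestrator may also retry on its own):

    # verify all three hosts are now in the orchestrator inventory
    ceph orch host ls
    # inspect the failed-apply detail behind CEPHADM_APPLY_SPEC_FAIL
    ceph health detail
    # re-apply the spec once compute-2 is listed (assumed remediation)
    ceph orch apply -i /home/ceph_spec.yaml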
Feb  2 04:38:54 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb  2 04:38:55 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Feb  2 04:38:55 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:55 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Feb  2 04:38:55 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:55 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Feb  2 04:38:55 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:55 np0005604790 ceph-mgr[74785]: [progress INFO root] complete: finished ev ad3f49fa-be0a-4d3c-b8eb-3b13d1945bb2 (Updating crash deployment (+1 -> 2))
Feb  2 04:38:55 np0005604790 ceph-mgr[74785]: [progress INFO root] Completed event ad3f49fa-be0a-4d3c-b8eb-3b13d1945bb2 (Updating crash deployment (+1 -> 2)) in 2 seconds
Feb  2 04:38:55 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Feb  2 04:38:55 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:55 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 04:38:55 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb  2 04:38:55 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 04:38:55 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb  2 04:38:55 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 04:38:55 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 04:38:55 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 04:38:55 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb  2 04:38:55 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 04:38:55 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 04:38:55 np0005604790 podman[80901]: 2026-02-02 09:38:55.61216957 +0000 UTC m=+0.046357277 container create 9cda1bc9196cd252e1bca7f68b18fdef4a6220c4a95a0c370a1e70f343c5d6a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_edison, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:38:55 np0005604790 systemd[1]: Started libpod-conmon-9cda1bc9196cd252e1bca7f68b18fdef4a6220c4a95a0c370a1e70f343c5d6a0.scope.
Feb  2 04:38:55 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:38:55 np0005604790 podman[80901]: 2026-02-02 09:38:55.588493389 +0000 UTC m=+0.022681176 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:38:55 np0005604790 podman[80901]: 2026-02-02 09:38:55.694014981 +0000 UTC m=+0.128202708 container init 9cda1bc9196cd252e1bca7f68b18fdef4a6220c4a95a0c370a1e70f343c5d6a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_edison, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:38:55 np0005604790 podman[80901]: 2026-02-02 09:38:55.69809855 +0000 UTC m=+0.132286267 container start 9cda1bc9196cd252e1bca7f68b18fdef4a6220c4a95a0c370a1e70f343c5d6a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_edison, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:38:55 np0005604790 vigorous_edison[80918]: 167 167
Feb  2 04:38:55 np0005604790 systemd[1]: libpod-9cda1bc9196cd252e1bca7f68b18fdef4a6220c4a95a0c370a1e70f343c5d6a0.scope: Deactivated successfully.
Feb  2 04:38:55 np0005604790 podman[80901]: 2026-02-02 09:38:55.701333436 +0000 UTC m=+0.135521163 container attach 9cda1bc9196cd252e1bca7f68b18fdef4a6220c4a95a0c370a1e70f343c5d6a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_edison, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:38:55 np0005604790 podman[80901]: 2026-02-02 09:38:55.701765098 +0000 UTC m=+0.135952815 container died 9cda1bc9196cd252e1bca7f68b18fdef4a6220c4a95a0c370a1e70f343c5d6a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_edison, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Feb  2 04:38:55 np0005604790 systemd[1]: var-lib-containers-storage-overlay-7bcc045e6a0b203ec3469665b3d4ae46dbf2e5be4595db4dfda0b74a1278cecb-merged.mount: Deactivated successfully.
Feb  2 04:38:55 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 04:38:55 np0005604790 podman[80901]: 2026-02-02 09:38:55.821706644 +0000 UTC m=+0.255894381 container remove 9cda1bc9196cd252e1bca7f68b18fdef4a6220c4a95a0c370a1e70f343c5d6a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_edison, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 04:38:55 np0005604790 systemd[1]: libpod-conmon-9cda1bc9196cd252e1bca7f68b18fdef4a6220c4a95a0c370a1e70f343c5d6a0.scope: Deactivated successfully.
Feb  2 04:38:55 np0005604790 podman[80943]: 2026-02-02 09:38:55.942840253 +0000 UTC m=+0.044632561 container create 1a6660b6df1b5c067369b469e35ed5386e0ab71527b43ab742318ff17c937e5f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_proskuriakova, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Feb  2 04:38:55 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:55 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:55 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:55 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:55 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb  2 04:38:55 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb  2 04:38:55 np0005604790 systemd[1]: Started libpod-conmon-1a6660b6df1b5c067369b469e35ed5386e0ab71527b43ab742318ff17c937e5f.scope.
Feb  2 04:38:56 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:38:56 np0005604790 podman[80943]: 2026-02-02 09:38:55.915800972 +0000 UTC m=+0.017593300 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:38:56 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b71751bc79189f9356b9479f296974fd309f8195da20f9a947b1748f1ee9d30b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 04:38:56 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b71751bc79189f9356b9479f296974fd309f8195da20f9a947b1748f1ee9d30b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:38:56 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b71751bc79189f9356b9479f296974fd309f8195da20f9a947b1748f1ee9d30b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:38:56 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b71751bc79189f9356b9479f296974fd309f8195da20f9a947b1748f1ee9d30b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 04:38:56 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b71751bc79189f9356b9479f296974fd309f8195da20f9a947b1748f1ee9d30b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 04:38:56 np0005604790 podman[80943]: 2026-02-02 09:38:56.036324485 +0000 UTC m=+0.138116773 container init 1a6660b6df1b5c067369b469e35ed5386e0ab71527b43ab742318ff17c937e5f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_proskuriakova, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb  2 04:38:56 np0005604790 podman[80943]: 2026-02-02 09:38:56.048453088 +0000 UTC m=+0.150245386 container start 1a6660b6df1b5c067369b469e35ed5386e0ab71527b43ab742318ff17c937e5f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_proskuriakova, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:38:56 np0005604790 podman[80943]: 2026-02-02 09:38:56.052193288 +0000 UTC m=+0.153985586 container attach 1a6660b6df1b5c067369b469e35ed5386e0ab71527b43ab742318ff17c937e5f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_proskuriakova, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:38:56 np0005604790 suspicious_proskuriakova[80960]: --> passed data devices: 0 physical, 1 LVM
Feb  2 04:38:56 np0005604790 suspicious_proskuriakova[80960]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  2 04:38:56 np0005604790 suspicious_proskuriakova[80960]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  2 04:38:56 np0005604790 suspicious_proskuriakova[80960]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new fabfc705-a3af-416c-81a4-3fd4d777fb5f
Feb  2 04:38:56 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "273baa6d-671d-41d3-8896-5eac2274aa10"} v 0)
Feb  2 04:38:56 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/908444544' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "273baa6d-671d-41d3-8896-5eac2274aa10"}]: dispatch
Feb  2 04:38:56 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Feb  2 04:38:56 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Feb  2 04:38:56 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/908444544' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "273baa6d-671d-41d3-8896-5eac2274aa10"}]': finished
Feb  2 04:38:56 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Feb  2 04:38:56 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Feb  2 04:38:56 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb  2 04:38:56 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Feb  2 04:38:56 np0005604790 ceph-mgr[74785]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Feb  2 04:38:56 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "fabfc705-a3af-416c-81a4-3fd4d777fb5f"} v 0)
Feb  2 04:38:56 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/224206128' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "fabfc705-a3af-416c-81a4-3fd4d777fb5f"}]: dispatch
Feb  2 04:38:56 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Feb  2 04:38:56 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Feb  2 04:38:56 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/224206128' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "fabfc705-a3af-416c-81a4-3fd4d777fb5f"}]': finished
Feb  2 04:38:56 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Feb  2 04:38:56 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Feb  2 04:38:56 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb  2 04:38:56 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Feb  2 04:38:56 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb  2 04:38:56 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Feb  2 04:38:56 np0005604790 ceph-mgr[74785]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Feb  2 04:38:56 np0005604790 ceph-mgr[74785]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Feb  2 04:38:56 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb  2 04:38:56 np0005604790 ceph-mon[74489]: from='client.? 192.168.122.101:0/908444544' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "273baa6d-671d-41d3-8896-5eac2274aa10"}]: dispatch
Feb  2 04:38:56 np0005604790 ceph-mon[74489]: from='client.? 192.168.122.101:0/908444544' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "273baa6d-671d-41d3-8896-5eac2274aa10"}]': finished
Feb  2 04:38:56 np0005604790 ceph-mon[74489]: from='client.? 192.168.122.100:0/224206128' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "fabfc705-a3af-416c-81a4-3fd4d777fb5f"}]: dispatch
Feb  2 04:38:56 np0005604790 ceph-mon[74489]: from='client.? 192.168.122.100:0/224206128' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "fabfc705-a3af-416c-81a4-3fd4d777fb5f"}]': finished
Feb  2 04:38:57 np0005604790 suspicious_proskuriakova[80960]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Feb  2 04:38:57 np0005604790 suspicious_proskuriakova[80960]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Feb  2 04:38:57 np0005604790 lvm[81021]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 04:38:57 np0005604790 lvm[81021]: VG ceph_vg0 finished
Feb  2 04:38:57 np0005604790 suspicious_proskuriakova[80960]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Feb  2 04:38:57 np0005604790 suspicious_proskuriakova[80960]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Feb  2 04:38:57 np0005604790 suspicious_proskuriakova[80960]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Feb  2 04:38:57 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Feb  2 04:38:57 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/380622187' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Feb  2 04:38:57 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Feb  2 04:38:57 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/434230814' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Feb  2 04:38:57 np0005604790 suspicious_proskuriakova[80960]: stderr: got monmap epoch 1
Feb  2 04:38:57 np0005604790 suspicious_proskuriakova[80960]: --> Creating keyring file for osd.1
Feb  2 04:38:57 np0005604790 suspicious_proskuriakova[80960]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Feb  2 04:38:57 np0005604790 suspicious_proskuriakova[80960]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Feb  2 04:38:57 np0005604790 suspicious_proskuriakova[80960]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid fabfc705-a3af-416c-81a4-3fd4d777fb5f --setuser ceph --setgroup ceph
Feb  2 04:38:57 np0005604790 ceph-mgr[74785]: [progress INFO root] Writing back 2 completed events
Feb  2 04:38:57 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Feb  2 04:38:57 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:57 np0005604790 ceph-mon[74489]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Feb  2 04:38:57 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:38:57 np0005604790 ceph-mon[74489]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Feb  2 04:38:58 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb  2 04:39:00 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 04:39:00 np0005604790 suspicious_proskuriakova[80960]: stderr: 2026-02-02T09:38:57.609+0000 7f84e3023740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) No valid bdev label found
Feb  2 04:39:00 np0005604790 suspicious_proskuriakova[80960]: stderr: 2026-02-02T09:38:57.871+0000 7f84e3023740 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Feb  2 04:39:00 np0005604790 suspicious_proskuriakova[80960]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Feb  2 04:39:00 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb  2 04:39:00 np0005604790 suspicious_proskuriakova[80960]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Feb  2 04:39:00 np0005604790 suspicious_proskuriakova[80960]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Feb  2 04:39:01 np0005604790 suspicious_proskuriakova[80960]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Feb  2 04:39:01 np0005604790 suspicious_proskuriakova[80960]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Feb  2 04:39:01 np0005604790 suspicious_proskuriakova[80960]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Feb  2 04:39:01 np0005604790 suspicious_proskuriakova[80960]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Feb  2 04:39:01 np0005604790 suspicious_proskuriakova[80960]: --> ceph-volume lvm activate successful for osd ID: 1
Feb  2 04:39:01 np0005604790 suspicious_proskuriakova[80960]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Feb  2 04:39:01 np0005604790 systemd[1]: libpod-1a6660b6df1b5c067369b469e35ed5386e0ab71527b43ab742318ff17c937e5f.scope: Deactivated successfully.
Feb  2 04:39:01 np0005604790 systemd[1]: libpod-1a6660b6df1b5c067369b469e35ed5386e0ab71527b43ab742318ff17c937e5f.scope: Consumed 2.090s CPU time.
Feb  2 04:39:01 np0005604790 podman[81938]: 2026-02-02 09:39:01.421237932 +0000 UTC m=+0.054373291 container died 1a6660b6df1b5c067369b469e35ed5386e0ab71527b43ab742318ff17c937e5f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_proskuriakova, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb  2 04:39:01 np0005604790 systemd[1]: var-lib-containers-storage-overlay-b71751bc79189f9356b9479f296974fd309f8195da20f9a947b1748f1ee9d30b-merged.mount: Deactivated successfully.
Feb  2 04:39:01 np0005604790 podman[81938]: 2026-02-02 09:39:01.497760611 +0000 UTC m=+0.130895970 container remove 1a6660b6df1b5c067369b469e35ed5386e0ab71527b43ab742318ff17c937e5f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Feb  2 04:39:01 np0005604790 systemd[1]: libpod-conmon-1a6660b6df1b5c067369b469e35ed5386e0ab71527b43ab742318ff17c937e5f.scope: Deactivated successfully.
Feb  2 04:39:02 np0005604790 podman[82042]: 2026-02-02 09:39:02.095169415 +0000 UTC m=+0.044572969 container create 413f5a652bbd66b247bed14abe40ea478b67637241aad38c879187fa81d14426 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_kalam, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb  2 04:39:02 np0005604790 systemd[1]: Started libpod-conmon-413f5a652bbd66b247bed14abe40ea478b67637241aad38c879187fa81d14426.scope.
Feb  2 04:39:02 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:39:02 np0005604790 podman[82042]: 2026-02-02 09:39:02.076839156 +0000 UTC m=+0.026242730 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:39:02 np0005604790 podman[82042]: 2026-02-02 09:39:02.185928624 +0000 UTC m=+0.135332168 container init 413f5a652bbd66b247bed14abe40ea478b67637241aad38c879187fa81d14426 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_kalam, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 04:39:02 np0005604790 podman[82042]: 2026-02-02 09:39:02.194952334 +0000 UTC m=+0.144355888 container start 413f5a652bbd66b247bed14abe40ea478b67637241aad38c879187fa81d14426 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Feb  2 04:39:02 np0005604790 podman[82042]: 2026-02-02 09:39:02.198986282 +0000 UTC m=+0.148389806 container attach 413f5a652bbd66b247bed14abe40ea478b67637241aad38c879187fa81d14426 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_kalam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2)
Feb  2 04:39:02 np0005604790 jovial_kalam[82058]: 167 167
Feb  2 04:39:02 np0005604790 systemd[1]: libpod-413f5a652bbd66b247bed14abe40ea478b67637241aad38c879187fa81d14426.scope: Deactivated successfully.
Feb  2 04:39:02 np0005604790 podman[82042]: 2026-02-02 09:39:02.203249575 +0000 UTC m=+0.152653099 container died 413f5a652bbd66b247bed14abe40ea478b67637241aad38c879187fa81d14426 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_kalam, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Feb  2 04:39:02 np0005604790 systemd[1]: var-lib-containers-storage-overlay-09604a6bd12b5723b49b3ef118fa895fb1080b33f8652d930f2417bd90d88b1c-merged.mount: Deactivated successfully.
Feb  2 04:39:02 np0005604790 podman[82042]: 2026-02-02 09:39:02.245000408 +0000 UTC m=+0.194403972 container remove 413f5a652bbd66b247bed14abe40ea478b67637241aad38c879187fa81d14426 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 04:39:02 np0005604790 systemd[1]: libpod-conmon-413f5a652bbd66b247bed14abe40ea478b67637241aad38c879187fa81d14426.scope: Deactivated successfully.
Feb  2 04:39:02 np0005604790 podman[82081]: 2026-02-02 09:39:02.443213261 +0000 UTC m=+0.064446129 container create 15839c00cdd80381d2ca28cd03e19d3816e710eb548ac4df80af55ec3d7aec83 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Feb  2 04:39:02 np0005604790 ceph-mgr[74785]: [balancer INFO root] Optimize plan auto_2026-02-02_09:39:02
Feb  2 04:39:02 np0005604790 ceph-mgr[74785]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 04:39:02 np0005604790 ceph-mgr[74785]: [balancer INFO root] do_upmap
Feb  2 04:39:02 np0005604790 ceph-mgr[74785]: [balancer INFO root] No pools available
Feb  2 04:39:02 np0005604790 systemd[1]: Started libpod-conmon-15839c00cdd80381d2ca28cd03e19d3816e710eb548ac4df80af55ec3d7aec83.scope.
Feb  2 04:39:02 np0005604790 podman[82081]: 2026-02-02 09:39:02.416818238 +0000 UTC m=+0.038051166 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:39:02 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:39:02 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a619f26c98df5a5c8f59f11c88bbc88e786787348deac1d0072d076de7cbeef/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 04:39:02 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a619f26c98df5a5c8f59f11c88bbc88e786787348deac1d0072d076de7cbeef/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:39:02 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a619f26c98df5a5c8f59f11c88bbc88e786787348deac1d0072d076de7cbeef/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:39:02 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a619f26c98df5a5c8f59f11c88bbc88e786787348deac1d0072d076de7cbeef/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 04:39:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Feb  2 04:39:02 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Feb  2 04:39:02 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 04:39:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 04:39:02 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 04:39:02 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-1
Feb  2 04:39:02 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-1
Feb  2 04:39:02 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 04:39:02 np0005604790 podman[82081]: 2026-02-02 09:39:02.570377821 +0000 UTC m=+0.191610679 container init 15839c00cdd80381d2ca28cd03e19d3816e710eb548ac4df80af55ec3d7aec83 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_perlman, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 04:39:02 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:39:02 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:39:02 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 04:39:02 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:39:02 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:39:02 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:39:02 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:39:02 np0005604790 podman[82081]: 2026-02-02 09:39:02.581947839 +0000 UTC m=+0.203180717 container start 15839c00cdd80381d2ca28cd03e19d3816e710eb548ac4df80af55ec3d7aec83 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_perlman, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325)
Feb  2 04:39:02 np0005604790 podman[82081]: 2026-02-02 09:39:02.587568129 +0000 UTC m=+0.208800977 container attach 15839c00cdd80381d2ca28cd03e19d3816e710eb548ac4df80af55ec3d7aec83 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:39:02 np0005604790 ecstatic_perlman[82097]: {
Feb  2 04:39:02 np0005604790 ecstatic_perlman[82097]:    "1": [
Feb  2 04:39:02 np0005604790 ecstatic_perlman[82097]:        {
Feb  2 04:39:02 np0005604790 ecstatic_perlman[82097]:            "devices": [
Feb  2 04:39:02 np0005604790 ecstatic_perlman[82097]:                "/dev/loop3"
Feb  2 04:39:02 np0005604790 ecstatic_perlman[82097]:            ],
Feb  2 04:39:02 np0005604790 ecstatic_perlman[82097]:            "lv_name": "ceph_lv0",
Feb  2 04:39:02 np0005604790 ecstatic_perlman[82097]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 04:39:02 np0005604790 ecstatic_perlman[82097]:            "lv_size": "21470642176",
Feb  2 04:39:02 np0005604790 ecstatic_perlman[82097]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d241d473-9fcb-5f74-b163-f1ca4454e7f1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fabfc705-a3af-416c-81a4-3fd4d777fb5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 04:39:02 np0005604790 ecstatic_perlman[82097]:            "lv_uuid": "lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a",
Feb  2 04:39:02 np0005604790 ecstatic_perlman[82097]:            "name": "ceph_lv0",
Feb  2 04:39:02 np0005604790 ecstatic_perlman[82097]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 04:39:02 np0005604790 ecstatic_perlman[82097]:            "tags": {
Feb  2 04:39:02 np0005604790 ecstatic_perlman[82097]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 04:39:02 np0005604790 ecstatic_perlman[82097]:                "ceph.block_uuid": "lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a",
Feb  2 04:39:02 np0005604790 ecstatic_perlman[82097]:                "ceph.cephx_lockbox_secret": "",
Feb  2 04:39:02 np0005604790 ecstatic_perlman[82097]:                "ceph.cluster_fsid": "d241d473-9fcb-5f74-b163-f1ca4454e7f1",
Feb  2 04:39:02 np0005604790 ecstatic_perlman[82097]:                "ceph.cluster_name": "ceph",
Feb  2 04:39:02 np0005604790 ecstatic_perlman[82097]:                "ceph.crush_device_class": "",
Feb  2 04:39:02 np0005604790 ecstatic_perlman[82097]:                "ceph.encrypted": "0",
Feb  2 04:39:02 np0005604790 ecstatic_perlman[82097]:                "ceph.osd_fsid": "fabfc705-a3af-416c-81a4-3fd4d777fb5f",
Feb  2 04:39:02 np0005604790 ecstatic_perlman[82097]:                "ceph.osd_id": "1",
Feb  2 04:39:02 np0005604790 ecstatic_perlman[82097]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 04:39:02 np0005604790 ecstatic_perlman[82097]:                "ceph.type": "block",
Feb  2 04:39:02 np0005604790 ecstatic_perlman[82097]:                "ceph.vdo": "0",
Feb  2 04:39:02 np0005604790 ecstatic_perlman[82097]:                "ceph.with_tpm": "0"
Feb  2 04:39:02 np0005604790 ecstatic_perlman[82097]:            },
Feb  2 04:39:02 np0005604790 ecstatic_perlman[82097]:            "type": "block",
Feb  2 04:39:02 np0005604790 ecstatic_perlman[82097]:            "vg_name": "ceph_vg0"
Feb  2 04:39:02 np0005604790 ecstatic_perlman[82097]:        }
Feb  2 04:39:02 np0005604790 ecstatic_perlman[82097]:    ]
Feb  2 04:39:02 np0005604790 ecstatic_perlman[82097]: }
Feb  2 04:39:02 np0005604790 systemd[1]: libpod-15839c00cdd80381d2ca28cd03e19d3816e710eb548ac4df80af55ec3d7aec83.scope: Deactivated successfully.
Feb  2 04:39:02 np0005604790 podman[82081]: 2026-02-02 09:39:02.882946632 +0000 UTC m=+0.504179470 container died 15839c00cdd80381d2ca28cd03e19d3816e710eb548ac4df80af55ec3d7aec83 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_perlman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Feb  2 04:39:02 np0005604790 systemd[1]: var-lib-containers-storage-overlay-1a619f26c98df5a5c8f59f11c88bbc88e786787348deac1d0072d076de7cbeef-merged.mount: Deactivated successfully.
Feb  2 04:39:02 np0005604790 podman[82081]: 2026-02-02 09:39:02.92639819 +0000 UTC m=+0.547631028 container remove 15839c00cdd80381d2ca28cd03e19d3816e710eb548ac4df80af55ec3d7aec83 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_perlman, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325)
Feb  2 04:39:02 np0005604790 systemd[1]: libpod-conmon-15839c00cdd80381d2ca28cd03e19d3816e710eb548ac4df80af55ec3d7aec83.scope: Deactivated successfully.
Feb  2 04:39:02 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb  2 04:39:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Feb  2 04:39:02 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Feb  2 04:39:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 04:39:02 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 04:39:02 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-0
Feb  2 04:39:02 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-0
Feb  2 04:39:03 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Feb  2 04:39:03 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Feb  2 04:39:03 np0005604790 podman[82210]: 2026-02-02 09:39:03.519056656 +0000 UTC m=+0.062286291 container create deee4fee7a3634ad9da21ba203bd037330d28750188955610c8f5dbca59d2393 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_franklin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Feb  2 04:39:03 np0005604790 systemd[1]: Started libpod-conmon-deee4fee7a3634ad9da21ba203bd037330d28750188955610c8f5dbca59d2393.scope.
Feb  2 04:39:03 np0005604790 podman[82210]: 2026-02-02 09:39:03.481918487 +0000 UTC m=+0.025148192 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:39:03 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:39:03 np0005604790 podman[82210]: 2026-02-02 09:39:03.615062585 +0000 UTC m=+0.158292250 container init deee4fee7a3634ad9da21ba203bd037330d28750188955610c8f5dbca59d2393 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_franklin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb  2 04:39:03 np0005604790 podman[82210]: 2026-02-02 09:39:03.623009907 +0000 UTC m=+0.166239522 container start deee4fee7a3634ad9da21ba203bd037330d28750188955610c8f5dbca59d2393 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_franklin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb  2 04:39:03 np0005604790 podman[82210]: 2026-02-02 09:39:03.627549348 +0000 UTC m=+0.170779213 container attach deee4fee7a3634ad9da21ba203bd037330d28750188955610c8f5dbca59d2393 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_franklin, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 04:39:03 np0005604790 lucid_franklin[82226]: 167 167
Feb  2 04:39:03 np0005604790 systemd[1]: libpod-deee4fee7a3634ad9da21ba203bd037330d28750188955610c8f5dbca59d2393.scope: Deactivated successfully.
Feb  2 04:39:03 np0005604790 conmon[82226]: conmon deee4fee7a3634ad9da2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-deee4fee7a3634ad9da21ba203bd037330d28750188955610c8f5dbca59d2393.scope/container/memory.events
Feb  2 04:39:03 np0005604790 podman[82210]: 2026-02-02 09:39:03.631864443 +0000 UTC m=+0.175094058 container died deee4fee7a3634ad9da21ba203bd037330d28750188955610c8f5dbca59d2393 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_franklin, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 04:39:03 np0005604790 systemd[1]: var-lib-containers-storage-overlay-9ab0c3daa239810b10ea254700249f0aceded43194bc28f19b5b89cb4392483f-merged.mount: Deactivated successfully.
Feb  2 04:39:03 np0005604790 podman[82210]: 2026-02-02 09:39:03.672542447 +0000 UTC m=+0.215772102 container remove deee4fee7a3634ad9da21ba203bd037330d28750188955610c8f5dbca59d2393 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_franklin, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb  2 04:39:03 np0005604790 systemd[1]: libpod-conmon-deee4fee7a3634ad9da21ba203bd037330d28750188955610c8f5dbca59d2393.scope: Deactivated successfully.
Feb  2 04:39:03 np0005604790 podman[82256]: 2026-02-02 09:39:03.932850576 +0000 UTC m=+0.063069452 container create 82fe43282f2a466ffc225e7b555cdd9b59ea4b83ae7a7a03088460dce0c42605 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-osd-1-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 04:39:03 np0005604790 systemd[1]: Started libpod-conmon-82fe43282f2a466ffc225e7b555cdd9b59ea4b83ae7a7a03088460dce0c42605.scope.
Feb  2 04:39:03 np0005604790 podman[82256]: 2026-02-02 09:39:03.905397524 +0000 UTC m=+0.035616480 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:39:04 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:39:04 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99ca17d33ca0f6eb25ee547862d6e300dc4f8b091c2484174c2b46d0364d707c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 04:39:04 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99ca17d33ca0f6eb25ee547862d6e300dc4f8b091c2484174c2b46d0364d707c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:39:04 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99ca17d33ca0f6eb25ee547862d6e300dc4f8b091c2484174c2b46d0364d707c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:39:04 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99ca17d33ca0f6eb25ee547862d6e300dc4f8b091c2484174c2b46d0364d707c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 04:39:04 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99ca17d33ca0f6eb25ee547862d6e300dc4f8b091c2484174c2b46d0364d707c/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Feb  2 04:39:04 np0005604790 ceph-mon[74489]: Deploying daemon osd.0 on compute-1
Feb  2 04:39:04 np0005604790 ceph-mon[74489]: Deploying daemon osd.1 on compute-0
Feb  2 04:39:04 np0005604790 podman[82256]: 2026-02-02 09:39:04.049833333 +0000 UTC m=+0.180052239 container init 82fe43282f2a466ffc225e7b555cdd9b59ea4b83ae7a7a03088460dce0c42605 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-osd-1-activate-test, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:39:04 np0005604790 podman[82256]: 2026-02-02 09:39:04.061861943 +0000 UTC m=+0.192080859 container start 82fe43282f2a466ffc225e7b555cdd9b59ea4b83ae7a7a03088460dce0c42605 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-osd-1-activate-test, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Feb  2 04:39:04 np0005604790 podman[82256]: 2026-02-02 09:39:04.065408528 +0000 UTC m=+0.195627474 container attach 82fe43282f2a466ffc225e7b555cdd9b59ea4b83ae7a7a03088460dce0c42605 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-osd-1-activate-test, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb  2 04:39:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-osd-1-activate-test[82272]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Feb  2 04:39:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-osd-1-activate-test[82272]:                            [--no-systemd] [--no-tmpfs]
Feb  2 04:39:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-osd-1-activate-test[82272]: ceph-volume activate: error: unrecognized arguments: --bad-option
Feb  2 04:39:04 np0005604790 systemd[1]: libpod-82fe43282f2a466ffc225e7b555cdd9b59ea4b83ae7a7a03088460dce0c42605.scope: Deactivated successfully.
Feb  2 04:39:04 np0005604790 podman[82256]: 2026-02-02 09:39:04.284474817 +0000 UTC m=+0.414693733 container died 82fe43282f2a466ffc225e7b555cdd9b59ea4b83ae7a7a03088460dce0c42605 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-osd-1-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Feb  2 04:39:04 np0005604790 systemd[1]: var-lib-containers-storage-overlay-99ca17d33ca0f6eb25ee547862d6e300dc4f8b091c2484174c2b46d0364d707c-merged.mount: Deactivated successfully.
Feb  2 04:39:04 np0005604790 podman[82256]: 2026-02-02 09:39:04.406579872 +0000 UTC m=+0.536798778 container remove 82fe43282f2a466ffc225e7b555cdd9b59ea4b83ae7a7a03088460dce0c42605 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-osd-1-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 04:39:04 np0005604790 systemd[1]: libpod-conmon-82fe43282f2a466ffc225e7b555cdd9b59ea4b83ae7a7a03088460dce0c42605.scope: Deactivated successfully.
Feb  2 04:39:04 np0005604790 systemd[1]: Reloading.
Feb  2 04:39:04 np0005604790 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 04:39:04 np0005604790 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 04:39:04 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb  2 04:39:05 np0005604790 systemd[1]: Reloading.
Feb  2 04:39:05 np0005604790 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 04:39:05 np0005604790 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 04:39:05 np0005604790 systemd[1]: Starting Ceph osd.1 for d241d473-9fcb-5f74-b163-f1ca4454e7f1...
Feb  2 04:39:05 np0005604790 podman[82436]: 2026-02-02 09:39:05.606382911 +0000 UTC m=+0.065892927 container create 4e4428c585ecd24fb75a6365cc82387f6d550920c1f6278dd50c35a5d008c8fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-osd-1-activate, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:39:05 np0005604790 podman[82436]: 2026-02-02 09:39:05.575772875 +0000 UTC m=+0.035282911 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:39:05 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:39:05 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f33058c3ac67972ccb27a6d12b36dfa94e62171d2f865eafdef19af34f616eb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 04:39:05 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f33058c3ac67972ccb27a6d12b36dfa94e62171d2f865eafdef19af34f616eb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:39:05 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f33058c3ac67972ccb27a6d12b36dfa94e62171d2f865eafdef19af34f616eb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:39:05 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f33058c3ac67972ccb27a6d12b36dfa94e62171d2f865eafdef19af34f616eb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 04:39:05 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f33058c3ac67972ccb27a6d12b36dfa94e62171d2f865eafdef19af34f616eb/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Feb  2 04:39:05 np0005604790 podman[82436]: 2026-02-02 09:39:05.731459975 +0000 UTC m=+0.190970021 container init 4e4428c585ecd24fb75a6365cc82387f6d550920c1f6278dd50c35a5d008c8fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-osd-1-activate, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Feb  2 04:39:05 np0005604790 podman[82436]: 2026-02-02 09:39:05.741846832 +0000 UTC m=+0.201356818 container start 4e4428c585ecd24fb75a6365cc82387f6d550920c1f6278dd50c35a5d008c8fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-osd-1-activate, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 04:39:05 np0005604790 podman[82436]: 2026-02-02 09:39:05.751813937 +0000 UTC m=+0.211323993 container attach 4e4428c585ecd24fb75a6365cc82387f6d550920c1f6278dd50c35a5d008c8fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-osd-1-activate, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Feb  2 04:39:05 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 04:39:05 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-osd-1-activate[82451]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  2 04:39:05 np0005604790 bash[82436]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  2 04:39:05 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-osd-1-activate[82451]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  2 04:39:05 np0005604790 bash[82436]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  2 04:39:06 np0005604790 lvm[82532]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 04:39:06 np0005604790 lvm[82532]: VG ceph_vg0 finished
Feb  2 04:39:06 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Feb  2 04:39:06 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:06 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Feb  2 04:39:06 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-osd-1-activate[82451]: --> Failed to activate via raw: did not find any matching OSD to activate
Feb  2 04:39:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-osd-1-activate[82451]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  2 04:39:06 np0005604790 bash[82436]: --> Failed to activate via raw: did not find any matching OSD to activate
Feb  2 04:39:06 np0005604790 bash[82436]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  2 04:39:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-osd-1-activate[82451]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  2 04:39:06 np0005604790 bash[82436]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb  2 04:39:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-osd-1-activate[82451]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Feb  2 04:39:06 np0005604790 bash[82436]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Feb  2 04:39:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-osd-1-activate[82451]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Feb  2 04:39:06 np0005604790 bash[82436]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Feb  2 04:39:06 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb  2 04:39:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-osd-1-activate[82451]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Feb  2 04:39:06 np0005604790 bash[82436]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Feb  2 04:39:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-osd-1-activate[82451]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Feb  2 04:39:06 np0005604790 bash[82436]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Feb  2 04:39:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-osd-1-activate[82451]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Feb  2 04:39:06 np0005604790 bash[82436]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Feb  2 04:39:07 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-osd-1-activate[82451]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Feb  2 04:39:07 np0005604790 bash[82436]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Feb  2 04:39:07 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-osd-1-activate[82451]: --> ceph-volume lvm activate successful for osd ID: 1
Feb  2 04:39:07 np0005604790 bash[82436]: --> ceph-volume lvm activate successful for osd ID: 1
Feb  2 04:39:07 np0005604790 systemd[1]: libpod-4e4428c585ecd24fb75a6365cc82387f6d550920c1f6278dd50c35a5d008c8fc.scope: Deactivated successfully.
Feb  2 04:39:07 np0005604790 systemd[1]: libpod-4e4428c585ecd24fb75a6365cc82387f6d550920c1f6278dd50c35a5d008c8fc.scope: Consumed 1.415s CPU time.
Feb  2 04:39:07 np0005604790 podman[82436]: 2026-02-02 09:39:07.048739826 +0000 UTC m=+1.508249842 container died 4e4428c585ecd24fb75a6365cc82387f6d550920c1f6278dd50c35a5d008c8fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-osd-1-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Feb  2 04:39:07 np0005604790 systemd[1]: var-lib-containers-storage-overlay-3f33058c3ac67972ccb27a6d12b36dfa94e62171d2f865eafdef19af34f616eb-merged.mount: Deactivated successfully.
Feb  2 04:39:07 np0005604790 podman[82436]: 2026-02-02 09:39:07.115173496 +0000 UTC m=+1.574683462 container remove 4e4428c585ecd24fb75a6365cc82387f6d550920c1f6278dd50c35a5d008c8fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-osd-1-activate, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 04:39:07 np0005604790 podman[82686]: 2026-02-02 09:39:07.33704384 +0000 UTC m=+0.053144388 container create 4642ed65ea9037166532825913a80a3f5fa996c66d25a8d6ec32643bd7f52763 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-osd-1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Feb  2 04:39:07 np0005604790 podman[82686]: 2026-02-02 09:39:07.310895863 +0000 UTC m=+0.026996481 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:39:07 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c95a6d4b51c4ce08cde680f8c6be7628c5fd815f25ceff02daecc48343cc8d1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 04:39:07 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c95a6d4b51c4ce08cde680f8c6be7628c5fd815f25ceff02daecc48343cc8d1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:39:07 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c95a6d4b51c4ce08cde680f8c6be7628c5fd815f25ceff02daecc48343cc8d1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:39:07 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c95a6d4b51c4ce08cde680f8c6be7628c5fd815f25ceff02daecc48343cc8d1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 04:39:07 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c95a6d4b51c4ce08cde680f8c6be7628c5fd815f25ceff02daecc48343cc8d1/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Feb  2 04:39:07 np0005604790 podman[82686]: 2026-02-02 09:39:07.443895608 +0000 UTC m=+0.159996236 container init 4642ed65ea9037166532825913a80a3f5fa996c66d25a8d6ec32643bd7f52763 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-osd-1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:39:07 np0005604790 podman[82686]: 2026-02-02 09:39:07.458422285 +0000 UTC m=+0.174522863 container start 4642ed65ea9037166532825913a80a3f5fa996c66d25a8d6ec32643bd7f52763 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-osd-1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb  2 04:39:07 np0005604790 bash[82686]: 4642ed65ea9037166532825913a80a3f5fa996c66d25a8d6ec32643bd7f52763
Feb  2 04:39:07 np0005604790 systemd[1]: Started Ceph osd.1 for d241d473-9fcb-5f74-b163-f1ca4454e7f1.
Feb  2 04:39:07 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:07 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:07 np0005604790 ceph-osd[82705]: set uid:gid to 167:167 (ceph:ceph)
Feb  2 04:39:07 np0005604790 ceph-osd[82705]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-osd, pid 2
Feb  2 04:39:07 np0005604790 ceph-osd[82705]: pidfile_write: ignore empty --pid-file
Feb  2 04:39:07 np0005604790 ceph-osd[82705]: bdev(0x564f130cf800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb  2 04:39:07 np0005604790 ceph-osd[82705]: bdev(0x564f130cf800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb  2 04:39:07 np0005604790 ceph-osd[82705]: bdev(0x564f130cf800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 04:39:07 np0005604790 ceph-osd[82705]: bdev(0x564f130cf800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 04:39:07 np0005604790 ceph-osd[82705]: bdev(0x564f130cf800 /var/lib/ceph/osd/ceph-1/block) close
Feb  2 04:39:07 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 04:39:07 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:07 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 04:39:07 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:07 np0005604790 ceph-osd[82705]: bdev(0x564f130cf800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb  2 04:39:07 np0005604790 ceph-osd[82705]: bdev(0x564f130cf800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb  2 04:39:07 np0005604790 ceph-osd[82705]: bdev(0x564f130cf800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 04:39:07 np0005604790 ceph-osd[82705]: bdev(0x564f130cf800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 04:39:07 np0005604790 ceph-osd[82705]: bdev(0x564f130cf800 /var/lib/ceph/osd/ceph-1/block) close
Feb  2 04:39:07 np0005604790 ceph-osd[82705]: bdev(0x564f130cf800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb  2 04:39:07 np0005604790 ceph-osd[82705]: bdev(0x564f130cf800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb  2 04:39:07 np0005604790 ceph-osd[82705]: bdev(0x564f130cf800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 04:39:07 np0005604790 ceph-osd[82705]: bdev(0x564f130cf800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 04:39:07 np0005604790 ceph-osd[82705]: bdev(0x564f130cf800 /var/lib/ceph/osd/ceph-1/block) close
Feb  2 04:39:08 np0005604790 ceph-osd[82705]: bdev(0x564f130cf800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb  2 04:39:08 np0005604790 ceph-osd[82705]: bdev(0x564f130cf800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb  2 04:39:08 np0005604790 ceph-osd[82705]: bdev(0x564f130cf800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 04:39:08 np0005604790 ceph-osd[82705]: bdev(0x564f130cf800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 04:39:08 np0005604790 ceph-osd[82705]: bdev(0x564f130cf800 /var/lib/ceph/osd/ceph-1/block) close
Feb  2 04:39:08 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Feb  2 04:39:08 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:08 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Feb  2 04:39:08 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:08 np0005604790 podman[82812]: 2026-02-02 09:39:08.201556632 +0000 UTC m=+0.053887788 container create 8a14c60cccf9ebeff1fd766c23c9160d6e446dfab45a3f3868764e6b339e3226 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_allen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 04:39:08 np0005604790 systemd[1]: Started libpod-conmon-8a14c60cccf9ebeff1fd766c23c9160d6e446dfab45a3f3868764e6b339e3226.scope.
Feb  2 04:39:08 np0005604790 podman[82812]: 2026-02-02 09:39:08.179019401 +0000 UTC m=+0.031350597 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:39:08 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:39:08 np0005604790 podman[82812]: 2026-02-02 09:39:08.299696787 +0000 UTC m=+0.152027953 container init 8a14c60cccf9ebeff1fd766c23c9160d6e446dfab45a3f3868764e6b339e3226 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:39:08 np0005604790 podman[82812]: 2026-02-02 09:39:08.30655011 +0000 UTC m=+0.158881256 container start 8a14c60cccf9ebeff1fd766c23c9160d6e446dfab45a3f3868764e6b339e3226 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_allen, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Feb  2 04:39:08 np0005604790 podman[82812]: 2026-02-02 09:39:08.311168673 +0000 UTC m=+0.163499839 container attach 8a14c60cccf9ebeff1fd766c23c9160d6e446dfab45a3f3868764e6b339e3226 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_allen, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Feb  2 04:39:08 np0005604790 serene_allen[82828]: 167 167
Feb  2 04:39:08 np0005604790 systemd[1]: libpod-8a14c60cccf9ebeff1fd766c23c9160d6e446dfab45a3f3868764e6b339e3226.scope: Deactivated successfully.
Feb  2 04:39:08 np0005604790 podman[82812]: 2026-02-02 09:39:08.313733412 +0000 UTC m=+0.166064558 container died 8a14c60cccf9ebeff1fd766c23c9160d6e446dfab45a3f3868764e6b339e3226 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_allen, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1)
Feb  2 04:39:08 np0005604790 systemd[1]: var-lib-containers-storage-overlay-0f9df155c4a1a0c583fd994bad45b9038c239cbf4af8129bf892c157e4e5c17c-merged.mount: Deactivated successfully.
Feb  2 04:39:08 np0005604790 podman[82812]: 2026-02-02 09:39:08.347582544 +0000 UTC m=+0.199913730 container remove 8a14c60cccf9ebeff1fd766c23c9160d6e446dfab45a3f3868764e6b339e3226 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_allen, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Feb  2 04:39:08 np0005604790 ceph-osd[82705]: bdev(0x564f130cf800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb  2 04:39:08 np0005604790 systemd[1]: libpod-conmon-8a14c60cccf9ebeff1fd766c23c9160d6e446dfab45a3f3868764e6b339e3226.scope: Deactivated successfully.
Feb  2 04:39:08 np0005604790 ceph-osd[82705]: bdev(0x564f130cf800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb  2 04:39:08 np0005604790 ceph-osd[82705]: bdev(0x564f130cf800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 04:39:08 np0005604790 ceph-osd[82705]: bdev(0x564f130cf800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 04:39:08 np0005604790 ceph-osd[82705]: bdev(0x564f130cf800 /var/lib/ceph/osd/ceph-1/block) close
Feb  2 04:39:08 np0005604790 podman[82853]: 2026-02-02 09:39:08.491213552 +0000 UTC m=+0.042379601 container create a0b8bcd65d26180ee2c7a443bd6f647ba3108dcb9fb439957b770b625e439f8b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_mcclintock, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Feb  2 04:39:08 np0005604790 systemd[1]: Started libpod-conmon-a0b8bcd65d26180ee2c7a443bd6f647ba3108dcb9fb439957b770b625e439f8b.scope.
Feb  2 04:39:08 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:08 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:08 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:08 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:08 np0005604790 podman[82853]: 2026-02-02 09:39:08.469548155 +0000 UTC m=+0.020714224 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:39:08 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:39:08 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/512d7396be1c446096d65134d3cb0d0ae938bd9ab7847eb36d09b8ad55f8ec37/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 04:39:08 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/512d7396be1c446096d65134d3cb0d0ae938bd9ab7847eb36d09b8ad55f8ec37/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:39:08 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/512d7396be1c446096d65134d3cb0d0ae938bd9ab7847eb36d09b8ad55f8ec37/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:39:08 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/512d7396be1c446096d65134d3cb0d0ae938bd9ab7847eb36d09b8ad55f8ec37/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 04:39:08 np0005604790 podman[82853]: 2026-02-02 09:39:08.598256485 +0000 UTC m=+0.149422634 container init a0b8bcd65d26180ee2c7a443bd6f647ba3108dcb9fb439957b770b625e439f8b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:39:08 np0005604790 podman[82853]: 2026-02-02 09:39:08.61196349 +0000 UTC m=+0.163129539 container start a0b8bcd65d26180ee2c7a443bd6f647ba3108dcb9fb439957b770b625e439f8b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_mcclintock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Feb  2 04:39:08 np0005604790 podman[82853]: 2026-02-02 09:39:08.615564967 +0000 UTC m=+0.166731046 container attach a0b8bcd65d26180ee2c7a443bd6f647ba3108dcb9fb439957b770b625e439f8b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_mcclintock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 04:39:08 np0005604790 ceph-osd[82705]: bdev(0x564f130cf800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb  2 04:39:08 np0005604790 ceph-osd[82705]: bdev(0x564f130cf800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb  2 04:39:08 np0005604790 ceph-osd[82705]: bdev(0x564f130cf800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 04:39:08 np0005604790 ceph-osd[82705]: bdev(0x564f130cf800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 04:39:08 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Feb  2 04:39:08 np0005604790 ceph-osd[82705]: bdev(0x564f130cfc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb  2 04:39:08 np0005604790 ceph-osd[82705]: bdev(0x564f130cfc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb  2 04:39:08 np0005604790 ceph-osd[82705]: bdev(0x564f130cfc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 04:39:08 np0005604790 ceph-osd[82705]: bdev(0x564f130cfc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 04:39:08 np0005604790 ceph-osd[82705]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Feb  2 04:39:08 np0005604790 ceph-osd[82705]: bdev(0x564f130cfc00 /var/lib/ceph/osd/ceph-1/block) close
Feb  2 04:39:08 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0)
Feb  2 04:39:08 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.101:6800/3783871040,v1:192.168.122.101:6801/3783871040]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Feb  2 04:39:08 np0005604790 ceph-osd[82705]: bdev(0x564f130cf800 /var/lib/ceph/osd/ceph-1/block) close
Feb  2 04:39:08 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb  2 04:39:09 np0005604790 ceph-osd[82705]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Feb  2 04:39:09 np0005604790 ceph-osd[82705]: load: jerasure load: lrc 
Feb  2 04:39:09 np0005604790 ceph-osd[82705]: bdev(0x564f13f74c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb  2 04:39:09 np0005604790 ceph-osd[82705]: bdev(0x564f13f74c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb  2 04:39:09 np0005604790 ceph-osd[82705]: bdev(0x564f13f74c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 04:39:09 np0005604790 ceph-osd[82705]: bdev(0x564f13f74c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 04:39:09 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Feb  2 04:39:09 np0005604790 ceph-osd[82705]: bdev(0x564f13f74c00 /var/lib/ceph/osd/ceph-1/block) close
Feb  2 04:39:09 np0005604790 lvm[82954]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 04:39:09 np0005604790 lvm[82954]: VG ceph_vg0 finished
Feb  2 04:39:09 np0005604790 adoring_mcclintock[82870]: {}
Feb  2 04:39:09 np0005604790 systemd[1]: libpod-a0b8bcd65d26180ee2c7a443bd6f647ba3108dcb9fb439957b770b625e439f8b.scope: Deactivated successfully.
Feb  2 04:39:09 np0005604790 systemd[1]: libpod-a0b8bcd65d26180ee2c7a443bd6f647ba3108dcb9fb439957b770b625e439f8b.scope: Consumed 1.126s CPU time.
Feb  2 04:39:09 np0005604790 conmon[82870]: conmon a0b8bcd65d26180ee2c7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a0b8bcd65d26180ee2c7a443bd6f647ba3108dcb9fb439957b770b625e439f8b.scope/container/memory.events
Feb  2 04:39:09 np0005604790 podman[82853]: 2026-02-02 09:39:09.328955521 +0000 UTC m=+0.880121610 container died a0b8bcd65d26180ee2c7a443bd6f647ba3108dcb9fb439957b770b625e439f8b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_mcclintock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:39:09 np0005604790 systemd[1]: var-lib-containers-storage-overlay-512d7396be1c446096d65134d3cb0d0ae938bd9ab7847eb36d09b8ad55f8ec37-merged.mount: Deactivated successfully.
Feb  2 04:39:09 np0005604790 podman[82853]: 2026-02-02 09:39:09.381727338 +0000 UTC m=+0.932893417 container remove a0b8bcd65d26180ee2c7a443bd6f647ba3108dcb9fb439957b770b625e439f8b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_mcclintock, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:39:09 np0005604790 systemd[1]: libpod-conmon-a0b8bcd65d26180ee2c7a443bd6f647ba3108dcb9fb439957b770b625e439f8b.scope: Deactivated successfully.
Feb  2 04:39:09 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 04:39:09 np0005604790 ceph-osd[82705]: bdev(0x564f13f74c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb  2 04:39:09 np0005604790 ceph-osd[82705]: bdev(0x564f13f74c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb  2 04:39:09 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:09 np0005604790 ceph-osd[82705]: bdev(0x564f13f74c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 04:39:09 np0005604790 ceph-osd[82705]: bdev(0x564f13f74c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 04:39:09 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Feb  2 04:39:09 np0005604790 ceph-osd[82705]: bdev(0x564f13f74c00 /var/lib/ceph/osd/ceph-1/block) close
Feb  2 04:39:09 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 04:39:09 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:09 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Feb  2 04:39:09 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Feb  2 04:39:09 np0005604790 ceph-mon[74489]: from='osd.0 [v2:192.168.122.101:6800/3783871040,v1:192.168.122.101:6801/3783871040]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Feb  2 04:39:09 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:09 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:09 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.101:6800/3783871040,v1:192.168.122.101:6801/3783871040]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Feb  2 04:39:09 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e6 e6: 2 total, 0 up, 2 in
Feb  2 04:39:09 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e6: 2 total, 0 up, 2 in
Feb  2 04:39:09 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]} v 0)
Feb  2 04:39:09 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.101:6800/3783871040,v1:192.168.122.101:6801/3783871040]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]: dispatch
Feb  2 04:39:09 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e6 create-or-move crush item name 'osd.0' initial_weight 0.0195 at location {host=compute-1,root=default}
Feb  2 04:39:09 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb  2 04:39:09 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Feb  2 04:39:09 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb  2 04:39:09 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Feb  2 04:39:09 np0005604790 ceph-mgr[74785]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Feb  2 04:39:09 np0005604790 ceph-mgr[74785]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Feb  2 04:39:09 np0005604790 ceph-osd[82705]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Feb  2 04:39:09 np0005604790 ceph-osd[82705]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Feb  2 04:39:09 np0005604790 ceph-osd[82705]: bdev(0x564f13f74c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb  2 04:39:09 np0005604790 ceph-osd[82705]: bdev(0x564f13f74c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb  2 04:39:09 np0005604790 ceph-osd[82705]: bdev(0x564f13f74c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 04:39:09 np0005604790 ceph-osd[82705]: bdev(0x564f13f74c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 04:39:09 np0005604790 ceph-osd[82705]: bdev(0x564f13f74c00 /var/lib/ceph/osd/ceph-1/block) close
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: bdev(0x564f13f74c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: bdev(0x564f13f74c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: bdev(0x564f13f74c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: bdev(0x564f13f74c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: bdev(0x564f13f74c00 /var/lib/ceph/osd/ceph-1/block) close
Feb  2 04:39:10 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Feb  2 04:39:10 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: bdev(0x564f13f74c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: bdev(0x564f13f74c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: bdev(0x564f13f74c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: bdev(0x564f13f74c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: bdev(0x564f13f74c00 /var/lib/ceph/osd/ceph-1/block) close
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: bdev(0x564f13f74c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: bdev(0x564f13f74c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: bdev(0x564f13f74c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: bdev(0x564f13f74c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: bdev(0x564f13f75000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: bdev(0x564f13f75000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: bdev(0x564f13f75000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: bdev(0x564f13f75000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: bluefs mount
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: bluefs mount shared_bdev_used = 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: RocksDB version: 7.9.2
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Git sha 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Compile date 2025-07-17 03:12:14
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: DB SUMMARY
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: DB Session ID:  59F5OJHYUC5OL4JHWNW4
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: CURRENT file:  CURRENT
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: IDENTITY file:  IDENTITY
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                         Options.error_if_exists: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                       Options.create_if_missing: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                         Options.paranoid_checks: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:             Options.flush_verify_memtable_count: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                                     Options.env: 0x564f13f45dc0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                                      Options.fs: LegacyFileSystem
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                                Options.info_log: 0x564f13f497a0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.max_file_opening_threads: 16
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                              Options.statistics: (nil)
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                               Options.use_fsync: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                       Options.max_log_file_size: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                   Options.log_file_time_to_roll: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                       Options.keep_log_file_num: 1000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                    Options.recycle_log_file_num: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                         Options.allow_fallocate: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                        Options.allow_mmap_reads: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                       Options.allow_mmap_writes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                        Options.use_direct_reads: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.create_missing_column_families: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                              Options.db_log_dir: 
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                                 Options.wal_dir: db.wal
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.table_cache_numshardbits: 6
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                         Options.WAL_ttl_seconds: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                       Options.WAL_size_limit_MB: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:             Options.manifest_preallocation_size: 4194304
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                     Options.is_fd_close_on_exec: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                   Options.advise_random_on_open: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                    Options.db_write_buffer_size: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                    Options.write_buffer_manager: 0x564f1403ea00
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.access_hint_on_compaction_start: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                      Options.use_adaptive_mutex: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                            Options.rate_limiter: (nil)
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                       Options.wal_recovery_mode: 2
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.enable_thread_tracking: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.enable_pipelined_write: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.unordered_write: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:             Options.write_thread_max_yield_usec: 100
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                               Options.row_cache: None
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                              Options.wal_filter: None
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:             Options.avoid_flush_during_recovery: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:             Options.allow_ingest_behind: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:             Options.two_write_queues: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:             Options.manual_wal_flush: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:             Options.wal_compression: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:             Options.atomic_flush: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                 Options.persist_stats_to_disk: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                 Options.write_dbid_to_manifest: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                 Options.log_readahead_size: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                 Options.best_efforts_recovery: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:             Options.allow_data_in_errors: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:             Options.db_host_id: __hostname__
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:             Options.enforce_single_del_contracts: true
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:             Options.max_background_jobs: 4
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:             Options.max_background_compactions: -1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:             Options.max_subcompactions: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:           Options.writable_file_max_buffer_size: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:             Options.delayed_write_rate : 16777216
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:             Options.max_total_wal_size: 1073741824
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                   Options.stats_dump_period_sec: 600
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                 Options.stats_persist_period_sec: 600
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                          Options.max_open_files: -1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                          Options.bytes_per_sync: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                      Options.wal_bytes_per_sync: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                   Options.strict_bytes_per_sync: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:       Options.compaction_readahead_size: 2097152
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.max_background_flushes: -1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Compression algorithms supported:
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: 	kZSTD supported: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: 	kXpressCompression supported: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: 	kBZip2Compression supported: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: 	kLZ4Compression supported: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: 	kZlibCompression supported: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: 	kLZ4HCCompression supported: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: 	kSnappyCompression supported: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Fast CRC32 supported: Supported on x86
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: DMutex implementation: pthread_mutex_t
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:        Options.compaction_filter: None
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564f13f49b60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x564f13165350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.compression: LZ4
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:             Options.num_levels: 7
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                           Options.bloom_locality: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                               Options.ttl: 2592000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                       Options.enable_blob_files: false
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                           Options.min_blob_size: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:           Options.merge_operator: None
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:        Options.compaction_filter: None
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564f13f49b60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x564f13165350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.compression: LZ4
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:             Options.num_levels: 7
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                           Options.bloom_locality: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                               Options.ttl: 2592000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                       Options.enable_blob_files: false
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                           Options.min_blob_size: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:           Options.merge_operator: None
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:        Options.compaction_filter: None
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564f13f49b60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x564f13165350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.compression: LZ4
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:             Options.num_levels: 7
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                           Options.bloom_locality: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                               Options.ttl: 2592000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                       Options.enable_blob_files: false
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                           Options.min_blob_size: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:           Options.merge_operator: None
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:        Options.compaction_filter: None
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564f13f49b60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x564f13165350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.compression: LZ4
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:             Options.num_levels: 7
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                           Options.bloom_locality: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                               Options.ttl: 2592000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                       Options.enable_blob_files: false
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                           Options.min_blob_size: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:           Options.merge_operator: None
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:        Options.compaction_filter: None
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564f13f49b60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x564f13165350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.compression: LZ4
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:             Options.num_levels: 7
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                           Options.bloom_locality: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                               Options.ttl: 2592000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                       Options.enable_blob_files: false
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                           Options.min_blob_size: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:           Options.merge_operator: None
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:        Options.compaction_filter: None
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564f13f49b60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x564f13165350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.compression: LZ4
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:             Options.num_levels: 7
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                           Options.bloom_locality: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                               Options.ttl: 2592000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                       Options.enable_blob_files: false
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                           Options.min_blob_size: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:           Options.merge_operator: None
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:        Options.compaction_filter: None
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564f13f49b60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x564f13165350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.compression: LZ4
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:             Options.num_levels: 7
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                           Options.bloom_locality: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                               Options.ttl: 2592000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                       Options.enable_blob_files: false
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                           Options.min_blob_size: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:           Options.merge_operator: None
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:        Options.compaction_filter: None
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564f13f49b80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x564f131649b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.compression: LZ4
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:             Options.num_levels: 7
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                           Options.bloom_locality: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                               Options.ttl: 2592000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                       Options.enable_blob_files: false
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                           Options.min_blob_size: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:           Options.merge_operator: None
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:        Options.compaction_filter: None
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564f13f49b80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x564f131649b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.compression: LZ4
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:             Options.num_levels: 7
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                           Options.bloom_locality: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                               Options.ttl: 2592000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                       Options.enable_blob_files: false
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                           Options.min_blob_size: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:           Options.merge_operator: None
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:        Options.compaction_filter: None
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564f13f49b80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x564f131649b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.compression: LZ4
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:             Options.num_levels: 7
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                           Options.bloom_locality: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                               Options.ttl: 2592000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                       Options.enable_blob_files: false
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                           Options.min_blob_size: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: [db/column_family.cc:635]     (skipping printing options)
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: [db/column_family.cc:635]     (skipping printing options)
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file: db/MANIFEST-000032 succeeded, manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5, prev_log_number is 0, max_column_family is 11, min_log_number_to_keep is 5
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: d797f890-90b1-4b40-b8a0-573f24c2c56f
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770025150334271, "job": 1, "event": "recovery_started", "wal_files": [31]}
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770025150334576, "job": 1, "event": "recovery_finished"}
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
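[The option string in the _open_db line is the flat key=value list BlueStore hands to RocksDB; each entry matches an Options.* value in the dump above (e.g. write_buffer_size=16777216 is the 16 MiB memtable size). A minimal parsing sketch, assuming only simple comma-separated pairs as printed here:]

    opts_str = ("compression=kLZ4Compression,max_write_buffer_number=64,"
                "min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,"
                "write_buffer_size=16777216,max_background_jobs=4,"
                "level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,"
                "max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,"
                "max_total_wal_size=1073741824,writable_file_max_buffer_size=0")
    opts = dict(kv.split("=", 1) for kv in opts_str.split(","))
    assert opts["compression"] == "kLZ4Compression"   # matches Options.compression: LZ4 above
    print(opts["write_buffer_size"])                  # 16777216 -> Options.write_buffer_size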
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: freelist init
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: freelist _read_cfg
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
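[The hex figures in the _init_alloc line cross-check directly; a small worked check in plain Python, with the constants copied from the line above:]

    capacity = 0x4ffc00000             # 21470642176 bytes, the size the bdev open below also reports
    free     = 0x4ffbfd000
    block    = 0x1000                  # 4 KiB, matching min_alloc_size 0x1000 above
    print(capacity / 2**30)            # 19.996... -> rounds to the logged "20 GiB"
    print((capacity - free) // block)  # 3 -> only three 4 KiB blocks allocated so far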
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: bluefs umount
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: bdev(0x564f13f75000 /var/lib/ceph/osd/ceph-1/block) close
Feb  2 04:39:10 np0005604790 podman[83135]: 2026-02-02 09:39:10.405209607 +0000 UTC m=+0.077833545 container exec 79ef7165b184aa21ab9e464efe33891b6304e1ba848414549f299d1b301d6783 (image=quay.io/ceph/ceph:v19, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1)
Feb  2 04:39:10 np0005604790 podman[83135]: 2026-02-02 09:39:10.528118623 +0000 UTC m=+0.200742561 container exec_died 79ef7165b184aa21ab9e464efe33891b6304e1ba848414549f299d1b301d6783 (image=quay.io/ceph/ceph:v19, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:39:10 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Feb  2 04:39:10 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Feb  2 04:39:10 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Feb  2 04:39:10 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:10 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Feb  2 04:39:10 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.101:6800/3783871040,v1:192.168.122.101:6801/3783871040]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]': finished
Feb  2 04:39:10 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e7 e7: 2 total, 0 up, 2 in
Feb  2 04:39:10 np0005604790 ceph-mon[74489]: from='osd.0 [v2:192.168.122.101:6800/3783871040,v1:192.168.122.101:6801/3783871040]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Feb  2 04:39:10 np0005604790 ceph-mon[74489]: from='osd.0 [v2:192.168.122.101:6800/3783871040,v1:192.168.122.101:6801/3783871040]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]: dispatch
Feb  2 04:39:10 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: bdev(0x564f13f75000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: bdev(0x564f13f75000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: bdev(0x564f13f75000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: bdev(0x564f13f75000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: bluefs mount
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: bluefs mount shared_bdev_used = 4718592
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Feb  2 04:39:10 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e7: 2 total, 0 up, 2 in
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: RocksDB version: 7.9.2
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Git sha 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Compile date 2025-07-17 03:12:14
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: DB SUMMARY
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: DB Session ID:  59F5OJHYUC5OL4JHWNW5
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: CURRENT file:  CURRENT
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: IDENTITY file:  IDENTITY
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                         Options.error_if_exists: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                       Options.create_if_missing: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                         Options.paranoid_checks: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:             Options.flush_verify_memtable_count: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                                     Options.env: 0x564f140e2310
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                                      Options.fs: LegacyFileSystem
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                                Options.info_log: 0x564f13f49920
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.max_file_opening_threads: 16
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                              Options.statistics: (nil)
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                               Options.use_fsync: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                       Options.max_log_file_size: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                   Options.log_file_time_to_roll: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                       Options.keep_log_file_num: 1000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                    Options.recycle_log_file_num: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                         Options.allow_fallocate: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                        Options.allow_mmap_reads: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                       Options.allow_mmap_writes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                        Options.use_direct_reads: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.create_missing_column_families: 0
Feb  2 04:39:10 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb  2 04:39:10 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                              Options.db_log_dir: 
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                                 Options.wal_dir: db.wal
Feb  2 04:39:10 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb  2 04:39:10 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.table_cache_numshardbits: 6
Feb  2 04:39:10 np0005604790 ceph-mgr[74785]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Feb  2 04:39:10 np0005604790 ceph-mgr[74785]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                         Options.WAL_ttl_seconds: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                       Options.WAL_size_limit_MB: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:             Options.manifest_preallocation_size: 4194304
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                     Options.is_fd_close_on_exec: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                   Options.advise_random_on_open: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                    Options.db_write_buffer_size: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                    Options.write_buffer_manager: 0x564f1403ea00
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.access_hint_on_compaction_start: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                      Options.use_adaptive_mutex: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                            Options.rate_limiter: (nil)
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                       Options.wal_recovery_mode: 2
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.enable_thread_tracking: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.enable_pipelined_write: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.unordered_write: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:             Options.write_thread_max_yield_usec: 100
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                               Options.row_cache: None
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                              Options.wal_filter: None
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:             Options.avoid_flush_during_recovery: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:             Options.allow_ingest_behind: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:             Options.two_write_queues: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:             Options.manual_wal_flush: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:             Options.wal_compression: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:             Options.atomic_flush: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                 Options.persist_stats_to_disk: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                 Options.write_dbid_to_manifest: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                 Options.log_readahead_size: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                 Options.best_efforts_recovery: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:             Options.allow_data_in_errors: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:             Options.db_host_id: __hostname__
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:             Options.enforce_single_del_contracts: true
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:             Options.max_background_jobs: 4
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:             Options.max_background_compactions: -1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:             Options.max_subcompactions: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:           Options.writable_file_max_buffer_size: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:             Options.delayed_write_rate : 16777216
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:             Options.max_total_wal_size: 1073741824
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                   Options.stats_dump_period_sec: 600
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                 Options.stats_persist_period_sec: 600
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                          Options.max_open_files: -1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                          Options.bytes_per_sync: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                      Options.wal_bytes_per_sync: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                   Options.strict_bytes_per_sync: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:       Options.compaction_readahead_size: 2097152
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.max_background_flushes: -1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Compression algorithms supported:
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:     kZSTD supported: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:     kXpressCompression supported: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:     kBZip2Compression supported: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:     kZSTDNotFinalCompression supported: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:     kLZ4Compression supported: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:     kZlibCompression supported: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:     kLZ4HCCompression supported: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:     kSnappyCompression supported: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Fast CRC32 supported: Supported on x86
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: DMutex implementation: pthread_mutex_t
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:        Options.compaction_filter: None
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564f13f49680)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x564f13165350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.compression: LZ4
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:             Options.num_levels: 7
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                           Options.bloom_locality: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                               Options.ttl: 2592000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                       Options.enable_blob_files: false
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                           Options.min_blob_size: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:           Options.merge_operator: None
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:        Options.compaction_filter: None
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564f13f49680)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x564f13165350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.compression: LZ4
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:             Options.num_levels: 7
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                           Options.bloom_locality: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                               Options.ttl: 2592000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                       Options.enable_blob_files: false
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                           Options.min_blob_size: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:           Options.merge_operator: None
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:        Options.compaction_filter: None
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564f13f49680)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x564f13165350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.compression: LZ4
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:             Options.num_levels: 7
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                           Options.bloom_locality: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                               Options.ttl: 2592000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                       Options.enable_blob_files: false
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                           Options.min_blob_size: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
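
The [m-1] dump above is identical, value for value, to the [m-2] dump that follows (and to the other shards in this section): BlueStore opens every shard of a column-family group with the same options, and the m-*/p-*/O-* names are consistent with its sharded-RocksDB layout (bluestore_rocksdb_cfs). Rather than eyeballing ~95 lines per family, the dumps can be parsed and diffed; a minimal Python sketch, stdlib only (parse_cf_options is an illustrative name, not a Ceph or RocksDB API):

    import re

    def parse_cf_options(lines):
        # Collect "Options.<key>: <value>" pairs from one column-family dump;
        # the uneven whitespace padding in the log is irrelevant after parsing.
        opts = {}
        for line in lines:
            m = re.search(r'Options\.([^:\s]+):\s+(.*)$', line)
            if m:
                opts[m.group(1)] = m.group(2).strip()
        return opts

    # cf_m1, cf_m2 = the syslog line slices for [m-1] and [m-2]
    # parse_cf_options(cf_m1) == parse_cf_options(cf_m2)   # -> True here

Diffing parsed dicts rather than raw lines also sidesteps the ragged column alignment RocksDB uses in this dump.
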
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:           Options.merge_operator: None
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:        Options.compaction_filter: None
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564f13f49680)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x564f13165350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.compression: LZ4
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:             Options.num_levels: 7
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 04:39:10 np0005604790 ceph-mgr[74785]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/3783871040; not ready for session (expect reconnect)
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                           Options.bloom_locality: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                               Options.ttl: 2592000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                       Options.enable_blob_files: false
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                           Options.min_blob_size: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
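
Two of the repeated values are worth converting to bytes. write_buffer_size = 16777216 with min_write_buffer_number_to_merge = 6 means a flush waits until six 16 MiB memtables can be merged into one ~96 MiB SST, and max_write_buffer_number = 64 caps the unflushed backlog at 1 GiB per column family. A back-of-the-envelope check (values copied from the dump above):

    write_buffer_size = 16_777_216   # Options.write_buffer_size (16 MiB)
    min_merge = 6                    # Options.min_write_buffer_number_to_merge
    max_buffers = 64                 # Options.max_write_buffer_number

    print(write_buffer_size * min_merge // 2**20, "MiB per flush")     # 96
    print(write_buffer_size * max_buffers // 2**30, "GiB worst case")  # 1
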
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:           Options.merge_operator: None
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:        Options.compaction_filter: None
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564f13f49680)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x564f13165350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.compression: LZ4
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:             Options.num_levels: 7
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                           Options.bloom_locality: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                               Options.ttl: 2592000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                       Options.enable_blob_files: false
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                           Options.min_blob_size: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
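
The LSM geometry is likewise fully determined by the dump: level_compaction_dynamic_level_bytes is 0, so level targets grow upward from max_bytes_for_level_base = 1 GiB by max_bytes_for_level_multiplier = 8 (all addtl[] factors are 1), while L0 is driven by file count alone — compaction at 8 files, write slowdown at 20, hard stop at 36. Computing the per-level targets from those numbers:

    base = 1_073_741_824   # Options.max_bytes_for_level_base (1 GiB)
    mult = 8.0             # Options.max_bytes_for_level_multiplier
    num_levels = 7         # Options.num_levels

    for level in range(1, num_levels):
        print(f"L{level}: {base * mult ** (level - 1) / 2**30:g} GiB")
    # L1: 1  L2: 8  L3: 64  L4: 512  L5: 4096  L6: 32768
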
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:           Options.merge_operator: None
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:        Options.compaction_filter: None
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564f13f49680)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x564f13165350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.compression: LZ4
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:             Options.num_levels: 7
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                           Options.bloom_locality: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                               Options.ttl: 2592000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                       Options.enable_blob_files: false
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                           Options.min_blob_size: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
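
The m-* and p-* families all share one BinnedLRUCache (block_cache: 0x564f13165350) of 483183820 bytes — 460.8 MiB, almost exactly 45% of 1 GiB, which looks like BlueStore's cache sizing carving a kv share out of a fixed budget (an inference from the numbers; the log itself does not say so). With num_shard_bits = 4 that capacity is split across 16 shards, and since cache_index_and_filter_blocks is 1 with the high-priority flag 0, index and filter blocks compete with data blocks in the same pool. (The [O-0] family opened further below gets a separate, larger cache: 536870912 bytes = 512 MiB at 0x564f131649b0.)

    capacity = 483_183_820   # block_cache_options: capacity (bytes)
    shard_bits = 4           # block_cache_options: num_shard_bits

    print(f"{capacity / 2**20:.1f} MiB total")                  # 460.8
    print(f"{capacity / 2**shard_bits / 2**20:.1f} MiB/shard")  # 28.8
    print(f"{capacity / 2**30:.2%} of 1 GiB")                   # 45.00%
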
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:           Options.merge_operator: None
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:        Options.compaction_filter: None
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564f13f49680)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x564f13165350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.compression: LZ4
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:             Options.num_levels: 7
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                           Options.bloom_locality: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 04:39:10 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:10 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb  2 04:39:10 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Feb  2 04:39:10 np0005604790 ceph-mgr[74785]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                               Options.ttl: 2592000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                       Options.enable_blob_files: false
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                           Options.min_blob_size: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:           Options.merge_operator: None
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:        Options.compaction_filter: None
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564f13f49ac0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x564f131649b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.compression: LZ4
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:             Options.num_levels: 7
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                           Options.bloom_locality: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                               Options.ttl: 2592000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                       Options.enable_blob_files: false
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                           Options.min_blob_size: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:           Options.merge_operator: None
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:        Options.compaction_filter: None
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564f13f49ac0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x564f131649b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.compression: LZ4
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:             Options.num_levels: 7
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                           Options.bloom_locality: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                               Options.ttl: 2592000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                       Options.enable_blob_files: false
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                           Options.min_blob_size: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:           Options.merge_operator: None
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:        Options.compaction_filter: None
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:        Options.compaction_filter_factory: None
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:  Options.sst_partitioner_factory: None
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:            Options.table_factory: BlockBasedTable
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564f13f49ac0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x564f131649b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:        Options.write_buffer_size: 16777216
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:  Options.max_write_buffer_number: 64
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.compression: LZ4
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.bottommost_compression: Disabled
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:       Options.prefix_extractor: nullptr
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:             Options.num_levels: 7
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:            Options.compression_opts.window_bits: -14
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.compression_opts.level: 32767
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.compression_opts.strategy: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                  Options.compression_opts.enabled: false
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                   Options.target_file_size_base: 67108864
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:             Options.target_file_size_multiplier: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                        Options.arena_block_size: 1048576
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.disable_auto_compactions: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                   Options.inplace_update_support: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:   Options.memtable_huge_page_size: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                           Options.bloom_locality: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                    Options.max_successive_merges: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.paranoid_file_checks: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.force_consistency_checks: 1
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.report_bg_io_stats: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                               Options.ttl: 2592000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                       Options.enable_blob_files: false
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                           Options.min_blob_size: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                          Options.blob_file_size: 268435456
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb:                Options.blob_file_starting_level: 0
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
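The multi-line RocksDB dumps above reach syslog with control characters escaped as '#' plus three octal digits, so '#012' stands for a newline and '#011' for a tab. A minimal sketch for restoring the original line breaks when reading such a log (pure stdlib; it assumes only this rsyslog-style escaping and is naive in that a literal '#' followed by three octal digits would also be rewritten):

    import re
    import sys

    def unescape_rsyslog(text: str) -> str:
        # rsyslog replaces control characters with '#' + 3-digit octal:
        # '#012' is newline, '#011' is tab.
        return re.sub(r"#([0-7]{3})", lambda m: chr(int(m.group(1), 8)), text)

    for line in sys.stdin:
        sys.stdout.write(unescape_rsyslog(line))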
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: d797f890-90b1-4b40-b8a0-573f24c2c56f
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770025150618278, "job": 1, "event": "recovery_started", "wal_files": [31]}
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770025150623961, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770025150, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d797f890-90b1-4b40-b8a0-573f24c2c56f", "db_session_id": "59F5OJHYUC5OL4JHWNW5", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Feb  2 04:39:10 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770025150634248, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770025150, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d797f890-90b1-4b40-b8a0-573f24c2c56f", "db_session_id": "59F5OJHYUC5OL4JHWNW5", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770025150640635, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770025150, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d797f890-90b1-4b40-b8a0-573f24c2c56f", "db_session_id": "59F5OJHYUC5OL4JHWNW5", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770025150642506, "job": 1, "event": "recovery_finished"}
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Feb  2 04:39:10 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x564f14146000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: DB pointer 0x564f140f0000
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
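The _open_db line records the option string BlueStore handed to RocksDB: a flat comma-separated key=value list whose values match the per-column-family dumps earlier (write_buffer_size=16777216, max_write_buffer_number=64, and so on). A minimal sketch of splitting it into a dict for comparison; the string literal is copied from the line above:

    opts = ("compression=kLZ4Compression,max_write_buffer_number=64,"
            "min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,"
            "write_buffer_size=16777216,max_background_jobs=4,"
            "level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,"
            "max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,"
            "max_total_wal_size=1073741824,writable_file_max_buffer_size=0")

    # Each entry is key=value; split on the first '=' only.
    parsed = dict(kv.split("=", 1) for kv in opts.split(","))
    print(parsed["write_buffer_size"])  # 16777216, matching Options.write_buffer_size above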
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x564f13165350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 0.000206 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x564f13165350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 0.000206 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x564f13165350#2 capacity: 460.80 MB us
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/19.2.3/rpm/el9/BUILD/ceph-19.2.3/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/19.2.3/rpm/el9/BUILD/ceph-19.2.3/src/cls/hello/cls_hello.cc:316: loading cls_hello
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: _get_class not permitted to load lua
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: _get_class not permitted to load sdk
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: osd.1 0 load_pgs
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: osd.1 0 load_pgs opened 0 pgs
Feb  2 04:39:10 np0005604790 ceph-osd[82705]: osd.1 0 log_to_monitors true
Feb  2 04:39:10 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-osd-1[82701]: 2026-02-02T09:39:10.705+0000 7f75a44f9740 -1 osd.1 0 log_to_monitors true
Feb  2 04:39:10 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0)
Feb  2 04:39:10 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6802/3795740271,v1:192.168.122.100:6803/3795740271]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Feb  2 04:39:10 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e7 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 04:39:10 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 04:39:10 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:10 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 04:39:10 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:10 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb  2 04:39:10 np0005604790 python3[83638]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d241d473-9fcb-5f74-b163-f1ca4454e7f1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
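The ad-hoc ansible task above shells out to podman and pipes 'ceph status --format json' through jq to read .osdmap.num_up_osds. A minimal sketch of the same check without jq, parsing the JSON in Python instead; the image, fsid, and paths are the ones shown in the log line, not general defaults:

    import json
    import subprocess

    # Same container invocation as the logged task, minus the shell pipeline.
    cmd = [
        "podman", "run", "--rm", "--net=host", "--ipc=host",
        "--volume", "/etc/ceph:/etc/ceph:z",
        "--entrypoint", "ceph", "quay.io/ceph/ceph:v19",
        "--fsid", "d241d473-9fcb-5f74-b163-f1ca4454e7f1",
        "-c", "/etc/ceph/ceph.conf",
        "-k", "/etc/ceph/ceph.client.admin.keyring",
        "status", "--format", "json",
    ]
    status = json.loads(subprocess.check_output(cmd))
    print(status["osdmap"]["num_up_osds"])  # what '| jq .osdmap.num_up_osds' extracts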
Feb  2 04:39:11 np0005604790 podman[83656]: 2026-02-02 09:39:11.003603927 +0000 UTC m=+0.049259934 container create 3e710515889940de2a98772f90964dde3b6f313a4ffac6b5c360b216b6d55470 (image=quay.io/ceph/ceph:v19, name=brave_taussig, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True)
Feb  2 04:39:11 np0005604790 systemd[1]: Started libpod-conmon-3e710515889940de2a98772f90964dde3b6f313a4ffac6b5c360b216b6d55470.scope.
Feb  2 04:39:11 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:39:11 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b5e92fa4f92565202a705cbcf24e8d7b8a5b036549e6c00e4cae16c7fb61f03/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:39:11 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b5e92fa4f92565202a705cbcf24e8d7b8a5b036549e6c00e4cae16c7fb61f03/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb  2 04:39:11 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b5e92fa4f92565202a705cbcf24e8d7b8a5b036549e6c00e4cae16c7fb61f03/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:39:11 np0005604790 podman[83656]: 2026-02-02 09:39:11.068296811 +0000 UTC m=+0.113952788 container init 3e710515889940de2a98772f90964dde3b6f313a4ffac6b5c360b216b6d55470 (image=quay.io/ceph/ceph:v19, name=brave_taussig, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True)
Feb  2 04:39:11 np0005604790 podman[83656]: 2026-02-02 09:39:10.977297236 +0000 UTC m=+0.022953253 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:39:11 np0005604790 podman[83656]: 2026-02-02 09:39:11.075860423 +0000 UTC m=+0.121516410 container start 3e710515889940de2a98772f90964dde3b6f313a4ffac6b5c360b216b6d55470 (image=quay.io/ceph/ceph:v19, name=brave_taussig, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Feb  2 04:39:11 np0005604790 podman[83656]: 2026-02-02 09:39:11.0791491 +0000 UTC m=+0.124805097 container attach 3e710515889940de2a98772f90964dde3b6f313a4ffac6b5c360b216b6d55470 (image=quay.io/ceph/ceph:v19, name=brave_taussig, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Feb  2 04:39:11 np0005604790 podman[83772]: 2026-02-02 09:39:11.452548022 +0000 UTC m=+0.069144424 container create 097b24bb17b53fec345fe62c621106cdecf5087775ee4150df26aba835aadf25 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_galileo, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 04:39:11 np0005604790 systemd[1]: Started libpod-conmon-097b24bb17b53fec345fe62c621106cdecf5087775ee4150df26aba835aadf25.scope.
Feb  2 04:39:11 np0005604790 podman[83772]: 2026-02-02 09:39:11.425961533 +0000 UTC m=+0.042558005 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:39:11 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:39:11 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Feb  2 04:39:11 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3299804201' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Feb  2 04:39:11 np0005604790 podman[83772]: 2026-02-02 09:39:11.527427498 +0000 UTC m=+0.144023900 container init 097b24bb17b53fec345fe62c621106cdecf5087775ee4150df26aba835aadf25 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_galileo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS)
Feb  2 04:39:11 np0005604790 brave_taussig[83708]: 
Feb  2 04:39:11 np0005604790 brave_taussig[83708]: {"fsid":"d241d473-9fcb-5f74-b163-f1ca4454e7f1","health":{"status":"HEALTH_WARN","checks":{"CEPHADM_APPLY_SPEC_FAIL":{"severity":"HEALTH_WARN","summary":{"message":"Failed to apply 2 service(s): mon,mgr","count":2},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":85,"monmap":{"epoch":1,"min_mon_release_name":"squid","num_mons":1},"osdmap":{"epoch":7,"num_osds":2,"num_up_osds":0,"osd_up_since":0,"num_in_osds":2,"osd_in_since":1770025136,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2026-02-02T09:37:43:907997+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2026-02-02T09:39:04.935631+0000","services":{}},"progress_events":{}}
Feb  2 04:39:11 np0005604790 podman[83772]: 2026-02-02 09:39:11.53239269 +0000 UTC m=+0.148989082 container start 097b24bb17b53fec345fe62c621106cdecf5087775ee4150df26aba835aadf25 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_galileo, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 04:39:11 np0005604790 youthful_galileo[83790]: 167 167
Feb  2 04:39:11 np0005604790 systemd[1]: libpod-097b24bb17b53fec345fe62c621106cdecf5087775ee4150df26aba835aadf25.scope: Deactivated successfully.
Feb  2 04:39:11 np0005604790 podman[83772]: 2026-02-02 09:39:11.545682724 +0000 UTC m=+0.162279106 container attach 097b24bb17b53fec345fe62c621106cdecf5087775ee4150df26aba835aadf25 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_galileo, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 04:39:11 np0005604790 podman[83772]: 2026-02-02 09:39:11.545966952 +0000 UTC m=+0.162563334 container died 097b24bb17b53fec345fe62c621106cdecf5087775ee4150df26aba835aadf25 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_galileo, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:39:11 np0005604790 systemd[1]: libpod-3e710515889940de2a98772f90964dde3b6f313a4ffac6b5c360b216b6d55470.scope: Deactivated successfully.
Feb  2 04:39:11 np0005604790 podman[83656]: 2026-02-02 09:39:11.56652343 +0000 UTC m=+0.612179417 container died 3e710515889940de2a98772f90964dde3b6f313a4ffac6b5c360b216b6d55470 (image=quay.io/ceph/ceph:v19, name=brave_taussig, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 04:39:11 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:11 np0005604790 ceph-mon[74489]: from='osd.0 [v2:192.168.122.101:6800/3783871040,v1:192.168.122.101:6801/3783871040]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]': finished
Feb  2 04:39:11 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:11 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:11 np0005604790 ceph-mon[74489]: from='osd.1 [v2:192.168.122.100:6802/3795740271,v1:192.168.122.100:6803/3795740271]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Feb  2 04:39:11 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:11 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:11 np0005604790 ceph-mgr[74785]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/3783871040; not ready for session (expect reconnect)
Feb  2 04:39:11 np0005604790 systemd[1]: var-lib-containers-storage-overlay-725a2fe177c604afa2d86b99a76b499300088401b038c1db22ba402aacda62a2-merged.mount: Deactivated successfully.
Feb  2 04:39:11 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb  2 04:39:11 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Feb  2 04:39:11 np0005604790 ceph-mgr[74785]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Feb  2 04:39:11 np0005604790 podman[83772]: 2026-02-02 09:39:11.62619417 +0000 UTC m=+0.242790552 container remove 097b24bb17b53fec345fe62c621106cdecf5087775ee4150df26aba835aadf25 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_galileo, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Feb  2 04:39:11 np0005604790 systemd[1]: libpod-conmon-097b24bb17b53fec345fe62c621106cdecf5087775ee4150df26aba835aadf25.scope: Deactivated successfully.
Feb  2 04:39:11 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Feb  2 04:39:11 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Feb  2 04:39:11 np0005604790 systemd[1]: var-lib-containers-storage-overlay-7b5e92fa4f92565202a705cbcf24e8d7b8a5b036549e6c00e4cae16c7fb61f03-merged.mount: Deactivated successfully.
Feb  2 04:39:11 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6802/3795740271,v1:192.168.122.100:6803/3795740271]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Feb  2 04:39:11 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e8 e8: 2 total, 0 up, 2 in
Feb  2 04:39:11 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e8: 2 total, 0 up, 2 in
Feb  2 04:39:11 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Feb  2 04:39:11 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6802/3795740271,v1:192.168.122.100:6803/3795740271]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Feb  2 04:39:11 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e8 create-or-move crush item name 'osd.1' initial_weight 0.0195 at location {host=compute-0,root=default}
Feb  2 04:39:11 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb  2 04:39:11 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Feb  2 04:39:11 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb  2 04:39:11 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Feb  2 04:39:11 np0005604790 ceph-mgr[74785]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Feb  2 04:39:11 np0005604790 ceph-mgr[74785]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Feb  2 04:39:11 np0005604790 podman[83656]: 2026-02-02 09:39:11.690747331 +0000 UTC m=+0.736403338 container remove 3e710515889940de2a98772f90964dde3b6f313a4ffac6b5c360b216b6d55470 (image=quay.io/ceph/ceph:v19, name=brave_taussig, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 04:39:11 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Feb  2 04:39:11 np0005604790 systemd[1]: libpod-conmon-3e710515889940de2a98772f90964dde3b6f313a4ffac6b5c360b216b6d55470.scope: Deactivated successfully.
Feb  2 04:39:11 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:11 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Feb  2 04:39:11 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Feb  2 04:39:11 np0005604790 podman[83828]: 2026-02-02 09:39:11.806767283 +0000 UTC m=+0.058571962 container create d3cf503a6fbe0ad16ca8903c8f29a28666973263ced26a33431e27b5d00c634f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_cohen, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb  2 04:39:11 np0005604790 systemd[1]: Started libpod-conmon-d3cf503a6fbe0ad16ca8903c8f29a28666973263ced26a33431e27b5d00c634f.scope.
Feb  2 04:39:11 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:39:11 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ef80abafe3f612f3042a1ad932b874bf0daa8478324d10acbe0fb1e7c98e6d2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 04:39:11 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ef80abafe3f612f3042a1ad932b874bf0daa8478324d10acbe0fb1e7c98e6d2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:39:11 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ef80abafe3f612f3042a1ad932b874bf0daa8478324d10acbe0fb1e7c98e6d2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:39:11 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ef80abafe3f612f3042a1ad932b874bf0daa8478324d10acbe0fb1e7c98e6d2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 04:39:11 np0005604790 podman[83828]: 2026-02-02 09:39:11.787279224 +0000 UTC m=+0.039083933 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:39:11 np0005604790 podman[83828]: 2026-02-02 09:39:11.897672086 +0000 UTC m=+0.149476785 container init d3cf503a6fbe0ad16ca8903c8f29a28666973263ced26a33431e27b5d00c634f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_cohen, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:39:11 np0005604790 podman[83828]: 2026-02-02 09:39:11.907618891 +0000 UTC m=+0.159423590 container start d3cf503a6fbe0ad16ca8903c8f29a28666973263ced26a33431e27b5d00c634f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_cohen, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Feb  2 04:39:11 np0005604790 podman[83828]: 2026-02-02 09:39:11.915890092 +0000 UTC m=+0.167694801 container attach d3cf503a6fbe0ad16ca8903c8f29a28666973263ced26a33431e27b5d00c634f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_cohen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb  2 04:39:12 np0005604790 charming_cohen[83844]: [
Feb  2 04:39:12 np0005604790 charming_cohen[83844]:    {
Feb  2 04:39:12 np0005604790 charming_cohen[83844]:        "available": false,
Feb  2 04:39:12 np0005604790 charming_cohen[83844]:        "being_replaced": false,
Feb  2 04:39:12 np0005604790 charming_cohen[83844]:        "ceph_device_lvm": false,
Feb  2 04:39:12 np0005604790 charming_cohen[83844]:        "device_id": "QEMU_DVD-ROM_QM00001",
Feb  2 04:39:12 np0005604790 charming_cohen[83844]:        "lsm_data": {},
Feb  2 04:39:12 np0005604790 charming_cohen[83844]:        "lvs": [],
Feb  2 04:39:12 np0005604790 charming_cohen[83844]:        "path": "/dev/sr0",
Feb  2 04:39:12 np0005604790 charming_cohen[83844]:        "rejected_reasons": [
Feb  2 04:39:12 np0005604790 charming_cohen[83844]:            "Has a FileSystem",
Feb  2 04:39:12 np0005604790 charming_cohen[83844]:            "Insufficient space (<5GB)"
Feb  2 04:39:12 np0005604790 charming_cohen[83844]:        ],
Feb  2 04:39:12 np0005604790 charming_cohen[83844]:        "sys_api": {
Feb  2 04:39:12 np0005604790 charming_cohen[83844]:            "actuators": null,
Feb  2 04:39:12 np0005604790 charming_cohen[83844]:            "device_nodes": [
Feb  2 04:39:12 np0005604790 charming_cohen[83844]:                "sr0"
Feb  2 04:39:12 np0005604790 charming_cohen[83844]:            ],
Feb  2 04:39:12 np0005604790 charming_cohen[83844]:            "devname": "sr0",
Feb  2 04:39:12 np0005604790 charming_cohen[83844]:            "human_readable_size": "482.00 KB",
Feb  2 04:39:12 np0005604790 charming_cohen[83844]:            "id_bus": "ata",
Feb  2 04:39:12 np0005604790 charming_cohen[83844]:            "model": "QEMU DVD-ROM",
Feb  2 04:39:12 np0005604790 charming_cohen[83844]:            "nr_requests": "2",
Feb  2 04:39:12 np0005604790 charming_cohen[83844]:            "parent": "/dev/sr0",
Feb  2 04:39:12 np0005604790 charming_cohen[83844]:            "partitions": {},
Feb  2 04:39:12 np0005604790 charming_cohen[83844]:            "path": "/dev/sr0",
Feb  2 04:39:12 np0005604790 charming_cohen[83844]:            "removable": "1",
Feb  2 04:39:12 np0005604790 charming_cohen[83844]:            "rev": "2.5+",
Feb  2 04:39:12 np0005604790 charming_cohen[83844]:            "ro": "0",
Feb  2 04:39:12 np0005604790 charming_cohen[83844]:            "rotational": "1",
Feb  2 04:39:12 np0005604790 charming_cohen[83844]:            "sas_address": "",
Feb  2 04:39:12 np0005604790 charming_cohen[83844]:            "sas_device_handle": "",
Feb  2 04:39:12 np0005604790 charming_cohen[83844]:            "scheduler_mode": "mq-deadline",
Feb  2 04:39:12 np0005604790 charming_cohen[83844]:            "sectors": 0,
Feb  2 04:39:12 np0005604790 charming_cohen[83844]:            "sectorsize": "2048",
Feb  2 04:39:12 np0005604790 charming_cohen[83844]:            "size": 493568.0,
Feb  2 04:39:12 np0005604790 charming_cohen[83844]:            "support_discard": "2048",
Feb  2 04:39:12 np0005604790 charming_cohen[83844]:            "type": "disk",
Feb  2 04:39:12 np0005604790 charming_cohen[83844]:            "vendor": "QEMU"
Feb  2 04:39:12 np0005604790 charming_cohen[83844]:        }
Feb  2 04:39:12 np0005604790 charming_cohen[83844]:    }
Feb  2 04:39:12 np0005604790 charming_cohen[83844]: ]
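[The inventory report above rejects /dev/sr0 ("Has a FileSystem", "Insufficient space (<5GB)"). Filtering such a ceph-volume inventory down to deployable disks is a short pass over the fields shown; a hedged sketch, with the helper name hypothetical and the field names taken verbatim from the JSON above:]

    import json

    def usable_devices(inventory_report: str) -> list:
        # ceph-volume inventory emits a JSON list of device records; keep only
        # those marked available with no rejected_reasons.
        return [d["path"] for d in json.loads(inventory_report)
                if d.get("available") and not d.get("rejected_reasons")]

    # On the report above this returns []: /dev/sr0 is rejected on both counts.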
Feb  2 04:39:12 np0005604790 systemd[1]: libpod-d3cf503a6fbe0ad16ca8903c8f29a28666973263ced26a33431e27b5d00c634f.scope: Deactivated successfully.
Feb  2 04:39:12 np0005604790 podman[83828]: 2026-02-02 09:39:12.598062874 +0000 UTC m=+0.849867553 container died d3cf503a6fbe0ad16ca8903c8f29a28666973263ced26a33431e27b5d00c634f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_cohen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 04:39:12 np0005604790 ceph-mgr[74785]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/3783871040; not ready for session (expect reconnect)
Feb  2 04:39:12 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb  2 04:39:12 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Feb  2 04:39:12 np0005604790 ceph-mgr[74785]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Feb  2 04:39:12 np0005604790 ceph-mon[74489]: from='osd.1 [v2:192.168.122.100:6802/3795740271,v1:192.168.122.100:6803/3795740271]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Feb  2 04:39:12 np0005604790 ceph-mon[74489]: from='osd.1 [v2:192.168.122.100:6802/3795740271,v1:192.168.122.100:6803/3795740271]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Feb  2 04:39:12 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:12 np0005604790 systemd[1]: var-lib-containers-storage-overlay-8ef80abafe3f612f3042a1ad932b874bf0daa8478324d10acbe0fb1e7c98e6d2-merged.mount: Deactivated successfully.
Feb  2 04:39:12 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Feb  2 04:39:12 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Feb  2 04:39:12 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6802/3795740271,v1:192.168.122.100:6803/3795740271]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Feb  2 04:39:12 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e9 e9: 2 total, 0 up, 2 in
Feb  2 04:39:12 np0005604790 ceph-osd[82705]: osd.1 0 done with init, starting boot process
Feb  2 04:39:12 np0005604790 ceph-osd[82705]: osd.1 0 start_boot
Feb  2 04:39:12 np0005604790 ceph-osd[82705]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Feb  2 04:39:12 np0005604790 ceph-osd[82705]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Feb  2 04:39:12 np0005604790 ceph-osd[82705]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Feb  2 04:39:12 np0005604790 ceph-osd[82705]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Feb  2 04:39:12 np0005604790 ceph-osd[82705]: osd.1 0  bench count 12288000 bsize 4 KiB
Feb  2 04:39:12 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e9: 2 total, 0 up, 2 in
Feb  2 04:39:12 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb  2 04:39:12 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Feb  2 04:39:12 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb  2 04:39:12 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Feb  2 04:39:12 np0005604790 ceph-mgr[74785]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Feb  2 04:39:12 np0005604790 ceph-mgr[74785]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Feb  2 04:39:12 np0005604790 podman[83828]: 2026-02-02 09:39:12.680601264 +0000 UTC m=+0.932405953 container remove d3cf503a6fbe0ad16ca8903c8f29a28666973263ced26a33431e27b5d00c634f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_cohen, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:39:12 np0005604790 ceph-mgr[74785]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/3795740271; not ready for session (expect reconnect)
Feb  2 04:39:12 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb  2 04:39:12 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Feb  2 04:39:12 np0005604790 ceph-mgr[74785]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Feb  2 04:39:12 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Feb  2 04:39:12 np0005604790 systemd[1]: libpod-conmon-d3cf503a6fbe0ad16ca8903c8f29a28666973263ced26a33431e27b5d00c634f.scope: Deactivated successfully.
Feb  2 04:39:12 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:12 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Feb  2 04:39:12 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 04:39:12 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:12 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Feb  2 04:39:12 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:12 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 04:39:12 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:12 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Feb  2 04:39:12 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:12 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 04:39:12 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:12 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Feb  2 04:39:12 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Feb  2 04:39:12 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:12 np0005604790 ceph-mgr[74785]: [cephadm INFO root] Adjusting osd_memory_target on compute-1 to  5247M
Feb  2 04:39:12 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-1 to  5247M
Feb  2 04:39:12 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 04:39:12 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Feb  2 04:39:12 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb  2 04:39:12 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:12 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Feb  2 04:39:12 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Feb  2 04:39:12 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:12 np0005604790 ceph-mgr[74785]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 127.9M
Feb  2 04:39:12 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 127.9M
Feb  2 04:39:12 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Feb  2 04:39:12 np0005604790 ceph-mgr[74785]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 134197657: error parsing value: Value '134197657' is below minimum 939524096
Feb  2 04:39:12 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 134197657: error parsing value: Value '134197657' is below minimum 939524096
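[The two "Adjusting osd_memory_target" lines are cephadm's memory autotuner dividing each host's memory budget across its OSDs; on compute-0 the computed share (134197657 bytes, about 128 MiB) falls below the option's hard minimum of 939524096 bytes (896 MiB), so the monitor rejects the set and cephadm logs the warning. A simplified sketch of that clamp, using only the numbers in the log; the budget arithmetic is an assumption, since cephadm's real calculation first subtracts other daemons' footprints:]

    OSD_MEMORY_TARGET_MIN = 939_524_096   # 896 MiB, the minimum cited in the warning

    def autotuned_share(host_budget: int, num_osds: int):
        """Per-OSD share of a host's memory budget, or None when the monitor
        would reject it for being below osd_memory_target's minimum."""
        share = host_budget // num_osds
        return share if share >= OSD_MEMORY_TARGET_MIN else None

    # compute-0 above: autotuned_share(134_197_657, 1) -> None, matching the WRN.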
Feb  2 04:39:13 np0005604790 ceph-mgr[74785]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/3783871040; not ready for session (expect reconnect)
Feb  2 04:39:13 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb  2 04:39:13 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Feb  2 04:39:13 np0005604790 ceph-mgr[74785]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Feb  2 04:39:13 np0005604790 ceph-mon[74489]: from='osd.1 [v2:192.168.122.100:6802/3795740271,v1:192.168.122.100:6803/3795740271]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Feb  2 04:39:13 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:13 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:13 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:13 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:13 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:13 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:13 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Feb  2 04:39:13 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:13 np0005604790 ceph-mon[74489]: Adjusting osd_memory_target on compute-1 to  5247M
Feb  2 04:39:13 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:13 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Feb  2 04:39:13 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:13 np0005604790 ceph-mon[74489]: Adjusting osd_memory_target on compute-0 to 127.9M
Feb  2 04:39:13 np0005604790 ceph-mon[74489]: Unable to set osd_memory_target on compute-0 to 134197657: error parsing value: Value '134197657' is below minimum 939524096
Feb  2 04:39:13 np0005604790 ceph-mgr[74785]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/3795740271; not ready for session (expect reconnect)
Feb  2 04:39:13 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb  2 04:39:13 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Feb  2 04:39:13 np0005604790 ceph-mgr[74785]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Feb  2 04:39:14 np0005604790 ceph-mgr[74785]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/3783871040; not ready for session (expect reconnect)
Feb  2 04:39:14 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb  2 04:39:14 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Feb  2 04:39:14 np0005604790 ceph-mgr[74785]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Feb  2 04:39:14 np0005604790 ceph-mgr[74785]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/3795740271; not ready for session (expect reconnect)
Feb  2 04:39:14 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb  2 04:39:14 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Feb  2 04:39:14 np0005604790 ceph-mgr[74785]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Feb  2 04:39:14 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v38: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb  2 04:39:15 np0005604790 ceph-mgr[74785]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/3783871040; not ready for session (expect reconnect)
Feb  2 04:39:15 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb  2 04:39:15 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Feb  2 04:39:15 np0005604790 ceph-mgr[74785]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Feb  2 04:39:15 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Feb  2 04:39:15 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Feb  2 04:39:15 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e10 e10: 2 total, 1 up, 2 in
Feb  2 04:39:15 np0005604790 ceph-mgr[74785]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/3795740271; not ready for session (expect reconnect)
Feb  2 04:39:15 np0005604790 ceph-mon[74489]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.101:6800/3783871040,v1:192.168.122.101:6801/3783871040] boot
Feb  2 04:39:15 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e10: 2 total, 1 up, 2 in
Feb  2 04:39:15 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb  2 04:39:15 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Feb  2 04:39:15 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb  2 04:39:15 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Feb  2 04:39:15 np0005604790 ceph-mgr[74785]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Feb  2 04:39:15 np0005604790 ceph-mon[74489]: OSD bench result of 4998.717013 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Feb  2 04:39:15 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e10 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 04:39:15 np0005604790 ceph-osd[82705]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 30.065 iops: 7696.745 elapsed_sec: 0.390
Feb  2 04:39:15 np0005604790 ceph-osd[82705]: log_channel(cluster) log [WRN] : OSD bench result of 7696.745182 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
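[Each OSD runs a short write bench at startup (here 12288000 bytes in 4 KiB blocks) to calibrate the mClock scheduler, but it adopts the measured IOPS only when the result falls inside a plausibility window; 7696 IOPS from a cached virtio disk fails that test, so the hdd default of 315 IOPS stands. A sketch of the acceptance rule, with the thresholds and default exactly as the warning states:]

    # mClock capacity sanity check, as described by the OSD's warning above.
    THRESHOLD_MIN_IOPS = 50.0
    THRESHOLD_MAX_IOPS = 500.0
    DEFAULT_HDD_IOPS = 315.0   # osd_mclock_max_capacity_iops_hdd default

    def effective_iops(measured: float) -> float:
        # Out-of-range bench results are discarded rather than trusted.
        if THRESHOLD_MIN_IOPS <= measured <= THRESHOLD_MAX_IOPS:
            return measured
        return DEFAULT_HDD_IOPS

    # effective_iops(7696.745182) -> 315.0, i.e. "IOPS capacity is unchanged".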
Feb  2 04:39:15 np0005604790 ceph-osd[82705]: osd.1 0 waiting for initial osdmap
Feb  2 04:39:15 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-osd-1[82701]: 2026-02-02T09:39:15.959+0000 7f75a0c8f640 -1 osd.1 0 waiting for initial osdmap
Feb  2 04:39:15 np0005604790 ceph-osd[82705]: osd.1 10 crush map has features 288514050185494528, adjusting msgr requires for clients
Feb  2 04:39:15 np0005604790 ceph-osd[82705]: osd.1 10 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Feb  2 04:39:15 np0005604790 ceph-osd[82705]: osd.1 10 crush map has features 3314932999778484224, adjusting msgr requires for osds
Feb  2 04:39:15 np0005604790 ceph-osd[82705]: osd.1 10 check_osdmap_features require_osd_release unknown -> squid
Feb  2 04:39:15 np0005604790 ceph-osd[82705]: osd.1 10 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Feb  2 04:39:15 np0005604790 ceph-osd[82705]: osd.1 10 set_numa_affinity not setting numa affinity
Feb  2 04:39:15 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-osd-1[82701]: 2026-02-02T09:39:15.980+0000 7f759baa4640 -1 osd.1 10 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Feb  2 04:39:15 np0005604790 ceph-osd[82705]: osd.1 10 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial no unique device path for loop3: no symlink to loop3 in /dev/disk/by-path
Feb  2 04:39:16 np0005604790 ceph-mgr[74785]: [devicehealth INFO root] creating mgr pool
Feb  2 04:39:16 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0)
Feb  2 04:39:16 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
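[Every mon_command in this log is a JSON payload dispatched to the monitor, so the .mgr pool creation above can be replayed from any client. A hedged sketch with the rados Python binding (the binding and its mon_command call are real librados API; the connection parameters are assumptions copied from paths earlier in this log):]

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf",
                          conf={"keyring": "/etc/ceph/ceph.client.admin.keyring"})
    cluster.connect()
    # Identical payload to the command the mgr dispatched above.
    cmd = {"prefix": "osd pool create", "format": "json", "pool": ".mgr",
           "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32,
           "yes_i_really_mean_it": True}
    ret, outbuf, outs = cluster.mon_command(json.dumps(cmd), b"")
    cluster.shutdown()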
Feb  2 04:39:16 np0005604790 ceph-mgr[74785]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/3795740271; not ready for session (expect reconnect)
Feb  2 04:39:16 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb  2 04:39:16 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Feb  2 04:39:16 np0005604790 ceph-mgr[74785]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Feb  2 04:39:16 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Feb  2 04:39:16 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e10 encode_pending skipping prime_pg_temp; mapping job did not start
Feb  2 04:39:16 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Feb  2 04:39:16 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e11 e11: 2 total, 2 up, 2 in
Feb  2 04:39:16 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e11 crush map has features 3314933000852226048, adjusting msgr requires
Feb  2 04:39:16 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Feb  2 04:39:16 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Feb  2 04:39:16 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Feb  2 04:39:16 np0005604790 ceph-mon[74489]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.100:6802/3795740271,v1:192.168.122.100:6803/3795740271] boot
Feb  2 04:39:16 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e11: 2 total, 2 up, 2 in
Feb  2 04:39:16 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb  2 04:39:16 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Feb  2 04:39:16 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0)
Feb  2 04:39:16 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Feb  2 04:39:16 np0005604790 ceph-mon[74489]: osd.0 [v2:192.168.122.101:6800/3783871040,v1:192.168.122.101:6801/3783871040] boot
Feb  2 04:39:16 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Feb  2 04:39:16 np0005604790 ceph-osd[82705]: osd.1 11 state: booting -> active
Feb  2 04:39:16 np0005604790 ceph-osd[82705]: osd.1 11 crush map has features 288514051259236352, adjusting msgr requires for clients
Feb  2 04:39:16 np0005604790 ceph-osd[82705]: osd.1 11 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Feb  2 04:39:16 np0005604790 ceph-osd[82705]: osd.1 11 crush map has features 3314933000852226048, adjusting msgr requires for osds
Feb  2 04:39:16 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 11 pg[1.0( empty local-lis/les=0/0 n=0 ec=11/11 lis/c=0/0 les/c/f=0/0/0 sis=11) [1] r=0 lpr=11 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:39:16 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v41: 1 pgs: 1 unknown; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
Feb  2 04:39:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Feb  2 04:39:17 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Feb  2 04:39:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e12 e12: 2 total, 2 up, 2 in
Feb  2 04:39:17 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e12: 2 total, 2 up, 2 in
Feb  2 04:39:17 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 12 pg[1.0( empty local-lis/les=11/12 n=0 ec=11/11 lis/c=0/0 les/c/f=0/0/0 sis=11) [1] r=0 lpr=11 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:39:17 np0005604790 ceph-mon[74489]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Feb  2 04:39:17 np0005604790 ceph-mon[74489]: OSD bench result of 7696.745182 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Feb  2 04:39:17 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Feb  2 04:39:17 np0005604790 ceph-mon[74489]: osd.1 [v2:192.168.122.100:6802/3795740271,v1:192.168.122.100:6803/3795740271] boot
Feb  2 04:39:17 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Feb  2 04:39:17 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Feb  2 04:39:17 np0005604790 ceph-mgr[74785]: [devicehealth INFO root] creating main.db for devicehealth
Feb  2 04:39:18 np0005604790 ceph-mgr[74785]: [devicehealth INFO root] Check health
Feb  2 04:39:18 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Feb  2 04:39:18 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Feb  2 04:39:18 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Feb  2 04:39:18 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Feb  2 04:39:18 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Feb  2 04:39:18 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e13 e13: 2 total, 2 up, 2 in
Feb  2 04:39:18 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e13: 2 total, 2 up, 2 in
Feb  2 04:39:18 np0005604790 ceph-mon[74489]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Feb  2 04:39:18 np0005604790 ceph-mon[74489]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Feb  2 04:39:18 np0005604790 ceph-mon[74489]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Feb  2 04:39:18 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.djvyfo(active, since 76s)
Feb  2 04:39:18 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v44: 1 pgs: 1 creating+peering; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Feb  2 04:39:19 np0005604790 ceph-mon[74489]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Feb  2 04:39:20 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e13 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 04:39:20 np0005604790 ceph-mon[74489]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Feb  2 04:39:20 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v45: 1 pgs: 1 creating+peering; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Feb  2 04:39:22 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v46: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Feb  2 04:39:24 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v47: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Feb  2 04:39:25 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e13 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 04:39:26 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v48: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Feb  2 04:39:28 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Feb  2 04:39:28 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:28 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Feb  2 04:39:28 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:28 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Feb  2 04:39:28 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:28 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Feb  2 04:39:28 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:28 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Feb  2 04:39:28 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Feb  2 04:39:28 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 04:39:28 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
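config generate-minimal-conf returns just enough configuration for a client to reach the monitors. Its output is not echoed into this log, but from the fsid and monitor address seen elsewhere here, the result at this point (one mon in the map) would look roughly like:

    [global]
        fsid = d241d473-9fcb-5f74-b163-f1ca4454e7f1
        mon_host = [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]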
Feb  2 04:39:28 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 04:39:28 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb  2 04:39:28 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Feb  2 04:39:28 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Feb  2 04:39:28 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v49: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Feb  2 04:39:29 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/d241d473-9fcb-5f74-b163-f1ca4454e7f1/config/ceph.conf
Feb  2 04:39:29 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/d241d473-9fcb-5f74-b163-f1ca4454e7f1/config/ceph.conf
Feb  2 04:39:29 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:29 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:29 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:29 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:29 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Feb  2 04:39:29 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb  2 04:39:29 np0005604790 ceph-mon[74489]: Updating compute-2:/etc/ceph/ceph.conf
Feb  2 04:39:29 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Feb  2 04:39:29 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Feb  2 04:39:30 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/d241d473-9fcb-5f74-b163-f1ca4454e7f1/config/ceph.client.admin.keyring
Feb  2 04:39:30 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/d241d473-9fcb-5f74-b163-f1ca4454e7f1/config/ceph.client.admin.keyring
Feb  2 04:39:30 np0005604790 ceph-mon[74489]: Updating compute-2:/var/lib/ceph/d241d473-9fcb-5f74-b163-f1ca4454e7f1/config/ceph.conf
Feb  2 04:39:30 np0005604790 ceph-mon[74489]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Feb  2 04:39:30 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Feb  2 04:39:30 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:30 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Feb  2 04:39:30 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:30 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 04:39:30 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:30 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e13 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 04:39:30 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v50: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Feb  2 04:39:30 np0005604790 ceph-mgr[74785]: [progress INFO root] update: starting ev 55a5be7e-f68d-4ab4-89ad-161b538ce870 (Updating mon deployment (+2 -> 3))
Feb  2 04:39:30 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Feb  2 04:39:30 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Feb  2 04:39:30 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Feb  2 04:39:30 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Feb  2 04:39:30 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 04:39:30 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 04:39:30 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Deploying daemon mon.compute-2 on compute-2
Feb  2 04:39:30 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Deploying daemon mon.compute-2 on compute-2
Feb  2 04:39:31 np0005604790 ceph-mon[74489]: Updating compute-2:/var/lib/ceph/d241d473-9fcb-5f74-b163-f1ca4454e7f1/config/ceph.client.admin.keyring
Feb  2 04:39:31 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:31 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:31 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:31 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Feb  2 04:39:31 np0005604790 ceph-mon[74489]: Deploying daemon mon.compute-2 on compute-2
Feb  2 04:39:31 np0005604790 ceph-mon[74489]: log_channel(cluster) log [INF] : Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Feb  2 04:39:31 np0005604790 ceph-mon[74489]: log_channel(cluster) log [INF] : Cluster is now healthy
Feb  2 04:39:32 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:39:32 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:39:32 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:39:32 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:39:32 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:39:32 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:39:32 np0005604790 ceph-mon[74489]: Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Feb  2 04:39:32 np0005604790 ceph-mon[74489]: Cluster is now healthy
Feb  2 04:39:32 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v51: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Feb  2 04:39:33 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Feb  2 04:39:33 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Feb  2 04:39:33 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Feb  2 04:39:33 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:33 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Feb  2 04:39:33 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:33 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Feb  2 04:39:33 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:33 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Feb  2 04:39:33 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Feb  2 04:39:33 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Feb  2 04:39:33 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Feb  2 04:39:33 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 04:39:33 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 04:39:33 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Deploying daemon mon.compute-1 on compute-1
Feb  2 04:39:33 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Deploying daemon mon.compute-1 on compute-1
Feb  2 04:39:33 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Feb  2 04:39:33 np0005604790 ceph-mgr[74785]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/2371198897; not ready for session (expect reconnect)
Feb  2 04:39:33 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).monmap v1 adding/updating compute-2 at [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to monitor cluster
Feb  2 04:39:33 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Feb  2 04:39:33 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Feb  2 04:39:33 np0005604790 ceph-mgr[74785]: mgr finish mon failed to return metadata for mon.compute-2: (2) No such file or directory
Feb  2 04:39:33 np0005604790 ceph-mon[74489]: mon.compute-0@0(probing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Feb  2 04:39:33 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Feb  2 04:39:33 np0005604790 ceph-mon[74489]: mon.compute-0@0(probing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Feb  2 04:39:33 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Feb  2 04:39:33 np0005604790 ceph-mon[74489]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Feb  2 04:39:33 np0005604790 ceph-mon[74489]: paxos.0).electionLogic(5) init, last seen epoch 5, mid-election, bumping
Feb  2 04:39:33 np0005604790 ceph-mgr[74785]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Feb  2 04:39:33 np0005604790 ceph-mon[74489]: mon.compute-0@0(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb  2 04:39:34 np0005604790 ceph-mgr[74785]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/2371198897; not ready for session (expect reconnect)
Feb  2 04:39:34 np0005604790 ceph-mon[74489]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Feb  2 04:39:34 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Feb  2 04:39:34 np0005604790 ceph-mgr[74785]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Feb  2 04:39:34 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v52: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Feb  2 04:39:35 np0005604790 ceph-mon[74489]: mon.compute-0@0(electing) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Feb  2 04:39:35 np0005604790 ceph-mon[74489]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Feb  2 04:39:35 np0005604790 ceph-mon[74489]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Feb  2 04:39:35 np0005604790 ceph-mon[74489]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Feb  2 04:39:35 np0005604790 ceph-mgr[74785]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/4086078947; not ready for session (expect reconnect)
Feb  2 04:39:35 np0005604790 ceph-mon[74489]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Feb  2 04:39:35 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Feb  2 04:39:35 np0005604790 ceph-mgr[74785]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Feb  2 04:39:35 np0005604790 ceph-mgr[74785]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/2371198897; not ready for session (expect reconnect)
Feb  2 04:39:35 np0005604790 ceph-mon[74489]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Feb  2 04:39:35 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Feb  2 04:39:35 np0005604790 ceph-mgr[74785]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Feb  2 04:39:36 np0005604790 ceph-mgr[74785]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/4086078947; not ready for session (expect reconnect)
Feb  2 04:39:36 np0005604790 ceph-mon[74489]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Feb  2 04:39:36 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Feb  2 04:39:36 np0005604790 ceph-mgr[74785]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Feb  2 04:39:36 np0005604790 ceph-mgr[74785]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/2371198897; not ready for session (expect reconnect)
Feb  2 04:39:36 np0005604790 ceph-mon[74489]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Feb  2 04:39:36 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Feb  2 04:39:36 np0005604790 ceph-mgr[74785]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Feb  2 04:39:36 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Feb  2 04:39:37 np0005604790 ceph-mon[74489]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Feb  2 04:39:37 np0005604790 ceph-mgr[74785]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/4086078947; not ready for session (expect reconnect)
Feb  2 04:39:37 np0005604790 ceph-mon[74489]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Feb  2 04:39:37 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Feb  2 04:39:37 np0005604790 ceph-mgr[74785]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Feb  2 04:39:37 np0005604790 ceph-mgr[74785]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/2371198897; not ready for session (expect reconnect)
Feb  2 04:39:37 np0005604790 ceph-mon[74489]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Feb  2 04:39:37 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Feb  2 04:39:37 np0005604790 ceph-mgr[74785]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Feb  2 04:39:38 np0005604790 ceph-mgr[74785]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/4086078947; not ready for session (expect reconnect)
Feb  2 04:39:38 np0005604790 ceph-mon[74489]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Feb  2 04:39:38 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Feb  2 04:39:38 np0005604790 ceph-mgr[74785]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Feb  2 04:39:38 np0005604790 ceph-mgr[74785]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/2371198897; not ready for session (expect reconnect)
Feb  2 04:39:38 np0005604790 ceph-mon[74489]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Feb  2 04:39:38 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Feb  2 04:39:38 np0005604790 ceph-mgr[74785]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Feb  2 04:39:38 np0005604790 ceph-mon[74489]: paxos.0).electionLogic(7) init, last seen epoch 7, mid-election, bumping
Feb  2 04:39:38 np0005604790 ceph-mon[74489]: mon.compute-0@0(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb  2 04:39:38 np0005604790 ceph-mon[74489]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Feb  2 04:39:38 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : monmap epoch 2
Feb  2 04:39:38 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : fsid d241d473-9fcb-5f74-b163-f1ca4454e7f1
Feb  2 04:39:38 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : last_changed 2026-02-02T09:39:33.475774+0000
Feb  2 04:39:38 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : created 2026-02-02T09:37:41.899871+0000
Feb  2 04:39:38 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Feb  2 04:39:38 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : election_strategy: 1
Feb  2 04:39:38 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Feb  2 04:39:38 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Feb  2 04:39:38 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb  2 04:39:38 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : fsmap 
Feb  2 04:39:38 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e13: 2 total, 2 up, 2 in
Feb  2 04:39:38 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.djvyfo(active, since 96s)
Feb  2 04:39:38 np0005604790 ceph-mon[74489]: log_channel(cluster) log [INF] : overall HEALTH_OK
Feb  2 04:39:38 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:38 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Feb  2 04:39:38 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:38 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Feb  2 04:39:38 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:38 np0005604790 ceph-mgr[74785]: [progress INFO root] complete: finished ev 55a5be7e-f68d-4ab4-89ad-161b538ce870 (Updating mon deployment (+2 -> 3))
Feb  2 04:39:38 np0005604790 ceph-mgr[74785]: [progress INFO root] Completed event 55a5be7e-f68d-4ab4-89ad-161b538ce870 (Updating mon deployment (+2 -> 3)) in 8 seconds
Feb  2 04:39:38 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Feb  2 04:39:38 np0005604790 ceph-mon[74489]: Deploying daemon mon.compute-1 on compute-1
Feb  2 04:39:38 np0005604790 ceph-mon[74489]: mon.compute-0 calling monitor election
Feb  2 04:39:38 np0005604790 ceph-mon[74489]: mon.compute-2 calling monitor election
Feb  2 04:39:38 np0005604790 ceph-mon[74489]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Feb  2 04:39:38 np0005604790 ceph-mon[74489]: overall HEALTH_OK
Feb  2 04:39:38 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:38 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:38 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:38 np0005604790 ceph-mgr[74785]: [progress INFO root] update: starting ev eb405306-abb7-4025-ae1b-73c58d017a15 (Updating mgr deployment (+2 -> 3))
Feb  2 04:39:38 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-2.gzlyac", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Feb  2 04:39:38 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.gzlyac", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Feb  2 04:39:38 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.gzlyac", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Feb  2 04:39:38 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mgr services"} v 0)
Feb  2 04:39:38 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "mgr services"}]: dispatch
Feb  2 04:39:38 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 04:39:38 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 04:39:38 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-2.gzlyac on compute-2
Feb  2 04:39:38 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-2.gzlyac on compute-2
Feb  2 04:39:38 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Feb  2 04:39:39 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Feb  2 04:39:39 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).monmap v2 adding/updating compute-1 at [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to monitor cluster
Feb  2 04:39:39 np0005604790 ceph-mgr[74785]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/4086078947; not ready for session (expect reconnect)
Feb  2 04:39:39 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Feb  2 04:39:39 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Feb  2 04:39:39 np0005604790 ceph-mgr[74785]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Feb  2 04:39:39 np0005604790 ceph-mon[74489]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Feb  2 04:39:39 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Feb  2 04:39:39 np0005604790 ceph-mon[74489]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Feb  2 04:39:39 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Feb  2 04:39:39 np0005604790 ceph-mon[74489]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Feb  2 04:39:39 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Feb  2 04:39:39 np0005604790 ceph-mgr[74785]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Feb  2 04:39:39 np0005604790 ceph-mon[74489]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Feb  2 04:39:39 np0005604790 ceph-mon[74489]: paxos.0).electionLogic(10) init, last seen epoch 10
Feb  2 04:39:39 np0005604790 ceph-mon[74489]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb  2 04:39:39 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:39:39.478+0000 7f092c4d4640 -1 mgr.server handle_report got status from non-daemon mon.compute-2
Feb  2 04:39:39 np0005604790 ceph-mgr[74785]: mgr.server handle_report got status from non-daemon mon.compute-2
Feb  2 04:39:40 np0005604790 ceph-mgr[74785]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/4086078947; not ready for session (expect reconnect)
Feb  2 04:39:40 np0005604790 ceph-mon[74489]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Feb  2 04:39:40 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Feb  2 04:39:40 np0005604790 ceph-mgr[74785]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Feb  2 04:39:40 np0005604790 ceph-mon[74489]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Feb  2 04:39:40 np0005604790 ceph-mon[74489]: mon.compute-0@0(electing) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Feb  2 04:39:40 np0005604790 ceph-mon[74489]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Feb  2 04:39:40 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Feb  2 04:39:41 np0005604790 ceph-mon[74489]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Feb  2 04:39:41 np0005604790 ceph-mgr[74785]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/4086078947; not ready for session (expect reconnect)
Feb  2 04:39:41 np0005604790 ceph-mon[74489]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Feb  2 04:39:41 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Feb  2 04:39:41 np0005604790 ceph-mgr[74785]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Feb  2 04:39:41 np0005604790 ceph-mon[74489]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Feb  2 04:39:42 np0005604790 python3[84942]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d241d473-9fcb-5f74-b163-f1ca4454e7f1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
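The same wait task, reflowed for readability (identical to the _raw_params logged above): it runs ceph status inside a throwaway quay.io/ceph/ceph:v19 container and pipes the JSON through jq to count the up OSDs.

    podman run --rm --net=host --ipc=host \
        --volume /etc/ceph:/etc/ceph:z \
        --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
        --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z \
        --entrypoint ceph quay.io/ceph/ceph:v19 \
        --fsid d241d473-9fcb-5f74-b163-f1ca4454e7f1 \
        -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        status --format json | jq .osdmap.num_up_osds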
Feb  2 04:39:42 np0005604790 podman[84944]: 2026-02-02 09:39:42.098619042 +0000 UTC m=+0.054401522 container create e9d60fe71e2081f07b1ef72667485a6aabf3b40d96903b41e062694cafc314ef (image=quay.io/ceph/ceph:v19, name=gallant_mendeleev, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Feb  2 04:39:42 np0005604790 systemd[1]: Started libpod-conmon-e9d60fe71e2081f07b1ef72667485a6aabf3b40d96903b41e062694cafc314ef.scope.
Feb  2 04:39:42 np0005604790 podman[84944]: 2026-02-02 09:39:42.071434447 +0000 UTC m=+0.027216987 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:39:42 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:39:42 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e519ccadd2c57b49b04f873f58486f8245894afd68dd097f503aefb9f52e4ea/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:39:42 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e519ccadd2c57b49b04f873f58486f8245894afd68dd097f503aefb9f52e4ea/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb  2 04:39:42 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e519ccadd2c57b49b04f873f58486f8245894afd68dd097f503aefb9f52e4ea/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:39:42 np0005604790 podman[84944]: 2026-02-02 09:39:42.200600162 +0000 UTC m=+0.156382652 container init e9d60fe71e2081f07b1ef72667485a6aabf3b40d96903b41e062694cafc314ef (image=quay.io/ceph/ceph:v19, name=gallant_mendeleev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325)
Feb  2 04:39:42 np0005604790 podman[84944]: 2026-02-02 09:39:42.20992021 +0000 UTC m=+0.165702680 container start e9d60fe71e2081f07b1ef72667485a6aabf3b40d96903b41e062694cafc314ef (image=quay.io/ceph/ceph:v19, name=gallant_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:39:42 np0005604790 podman[84944]: 2026-02-02 09:39:42.214538553 +0000 UTC m=+0.170321153 container attach e9d60fe71e2081f07b1ef72667485a6aabf3b40d96903b41e062694cafc314ef (image=quay.io/ceph/ceph:v19, name=gallant_mendeleev, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1)
Feb  2 04:39:42 np0005604790 ceph-mgr[74785]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/4086078947; not ready for session (expect reconnect)
Feb  2 04:39:42 np0005604790 ceph-mon[74489]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Feb  2 04:39:42 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Feb  2 04:39:42 np0005604790 ceph-mgr[74785]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Feb  2 04:39:42 np0005604790 ceph-mon[74489]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Feb  2 04:39:42 np0005604790 ceph-mon[74489]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Feb  2 04:39:42 np0005604790 ceph-mgr[74785]: [progress INFO root] Writing back 3 completed events
Feb  2 04:39:42 np0005604790 ceph-mon[74489]: mon.compute-0@0(electing) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Feb  2 04:39:42 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Feb  2 04:39:42 np0005604790 ceph-mon[74489]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Feb  2 04:39:43 np0005604790 ceph-mgr[74785]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/4086078947; not ready for session (expect reconnect)
Feb  2 04:39:43 np0005604790 ceph-mon[74489]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Feb  2 04:39:43 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Feb  2 04:39:43 np0005604790 ceph-mgr[74785]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Feb  2 04:39:43 np0005604790 ceph-mon[74489]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Feb  2 04:39:43 np0005604790 ceph-mon[74489]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Feb  2 04:39:43 np0005604790 ceph-mon[74489]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Feb  2 04:39:44 np0005604790 ceph-mon[74489]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Feb  2 04:39:44 np0005604790 ceph-mgr[74785]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/4086078947; not ready for session (expect reconnect)
Feb  2 04:39:44 np0005604790 ceph-mon[74489]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Feb  2 04:39:44 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Feb  2 04:39:44 np0005604790 ceph-mgr[74785]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Feb  2 04:39:44 np0005604790 ceph-mon[74489]: paxos.0).electionLogic(11) init, last seen epoch 11, mid-election, bumping
Feb  2 04:39:44 np0005604790 ceph-mon[74489]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb  2 04:39:44 np0005604790 ceph-mon[74489]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Feb  2 04:39:44 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : monmap epoch 3
Feb  2 04:39:44 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : fsid d241d473-9fcb-5f74-b163-f1ca4454e7f1
Feb  2 04:39:44 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : last_changed 2026-02-02T09:39:39.266649+0000
Feb  2 04:39:44 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : created 2026-02-02T09:37:41.899871+0000
Feb  2 04:39:44 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Feb  2 04:39:44 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : election_strategy: 1
Feb  2 04:39:44 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Feb  2 04:39:44 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Feb  2 04:39:44 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : 2: [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] mon.compute-1
Feb  2 04:39:44 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb  2 04:39:44 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : fsmap 
Feb  2 04:39:44 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e13: 2 total, 2 up, 2 in
Feb  2 04:39:44 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.djvyfo(active, since 101s)
Feb  2 04:39:44 np0005604790 ceph-mon[74489]: log_channel(cluster) log [INF] : overall HEALTH_OK
Feb  2 04:39:44 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:44 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Feb  2 04:39:44 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:44 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:44 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Feb  2 04:39:44 np0005604790 ceph-mon[74489]: mon.compute-0 calling monitor election
Feb  2 04:39:44 np0005604790 ceph-mon[74489]: mon.compute-2 calling monitor election
Feb  2 04:39:44 np0005604790 ceph-mon[74489]: mon.compute-1 calling monitor election
Feb  2 04:39:44 np0005604790 ceph-mon[74489]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Feb  2 04:39:44 np0005604790 ceph-mon[74489]: overall HEALTH_OK
Feb  2 04:39:44 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:44 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:44 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:44 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:44 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-1.teascl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Feb  2 04:39:44 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.teascl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Feb  2 04:39:44 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.teascl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Feb  2 04:39:44 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Feb  2 04:39:44 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "mgr services"}]: dispatch
Feb  2 04:39:44 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 04:39:44 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 04:39:44 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-1.teascl on compute-1
Feb  2 04:39:44 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-1.teascl on compute-1
Feb  2 04:39:44 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Feb  2 04:39:45 np0005604790 ceph-mgr[74785]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/4086078947; not ready for session (expect reconnect)
Feb  2 04:39:45 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Feb  2 04:39:45 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Feb  2 04:39:45 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:45 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.teascl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Feb  2 04:39:45 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.teascl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Feb  2 04:39:45 np0005604790 ceph-mon[74489]: Deploying daemon mgr.compute-1.teascl on compute-1
Feb  2 04:39:45 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Feb  2 04:39:45 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/8543329' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Feb  2 04:39:45 np0005604790 gallant_mendeleev[84960]: 
Feb  2 04:39:45 np0005604790 gallant_mendeleev[84960]: {"fsid":"d241d473-9fcb-5f74-b163-f1ca4454e7f1","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":14,"quorum":[0,1,2],"quorum_names":["compute-0","compute-2","compute-1"],"quorum_age":1,"monmap":{"epoch":3,"min_mon_release_name":"squid","num_mons":3},"osdmap":{"epoch":13,"num_osds":2,"num_up_osds":2,"osd_up_since":1770025156,"num_in_osds":2,"osd_in_since":1770025136,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":1}],"num_pgs":1,"num_pools":1,"num_objects":2,"data_bytes":459280,"bytes_used":55795712,"bytes_avail":42885488640,"bytes_total":42941284352},"fsmap":{"epoch":1,"btime":"2026-02-02T09:37:43:907997+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2026-02-02T09:39:04.935631+0000","services":{}},"progress_events":{"eb405306-abb7-4025-ae1b-73c58d017a15":{"message":"Updating mgr deployment (+2 -> 3) (5s)\n      [==============..............] (remaining: 5s)","progress":0.5,"add_to_ceph_s":true}}}
Feb  2 04:39:45 np0005604790 systemd[1]: libpod-e9d60fe71e2081f07b1ef72667485a6aabf3b40d96903b41e062694cafc314ef.scope: Deactivated successfully.
Feb  2 04:39:45 np0005604790 podman[84944]: 2026-02-02 09:39:45.712830761 +0000 UTC m=+3.668613231 container died e9d60fe71e2081f07b1ef72667485a6aabf3b40d96903b41e062694cafc314ef (image=quay.io/ceph/ceph:v19, name=gallant_mendeleev, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:39:45 np0005604790 systemd[1]: var-lib-containers-storage-overlay-2e519ccadd2c57b49b04f873f58486f8245894afd68dd097f503aefb9f52e4ea-merged.mount: Deactivated successfully.
Feb  2 04:39:45 np0005604790 podman[84944]: 2026-02-02 09:39:45.753394713 +0000 UTC m=+3.709177183 container remove e9d60fe71e2081f07b1ef72667485a6aabf3b40d96903b41e062694cafc314ef (image=quay.io/ceph/ceph:v19, name=gallant_mendeleev, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:39:45 np0005604790 systemd[1]: libpod-conmon-e9d60fe71e2081f07b1ef72667485a6aabf3b40d96903b41e062694cafc314ef.scope: Deactivated successfully.
Feb  2 04:39:45 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e13 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 04:39:45 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Feb  2 04:39:45 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:45 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Feb  2 04:39:45 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:46 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Feb  2 04:39:46 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:46 np0005604790 ceph-mgr[74785]: [progress INFO root] complete: finished ev eb405306-abb7-4025-ae1b-73c58d017a15 (Updating mgr deployment (+2 -> 3))
Feb  2 04:39:46 np0005604790 ceph-mgr[74785]: [progress INFO root] Completed event eb405306-abb7-4025-ae1b-73c58d017a15 (Updating mgr deployment (+2 -> 3)) in 7 seconds
Feb  2 04:39:46 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Feb  2 04:39:46 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:46 np0005604790 ceph-mgr[74785]: [progress INFO root] update: starting ev 1c62b8c5-3166-49a0-ba21-a5f1f01c18e5 (Updating crash deployment (+1 -> 3))
Feb  2 04:39:46 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Feb  2 04:39:46 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Feb  2 04:39:46 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Feb  2 04:39:46 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 04:39:46 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 04:39:46 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-2 on compute-2
Feb  2 04:39:46 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-2 on compute-2
Feb  2 04:39:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:39:46.269+0000 7f092c4d4640 -1 mgr.server handle_report got status from non-daemon mon.compute-1
Feb  2 04:39:46 np0005604790 ceph-mgr[74785]: mgr.server handle_report got status from non-daemon mon.compute-1
Feb  2 04:39:46 np0005604790 python3[85022]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d241d473-9fcb-5f74-b163-f1ca4454e7f1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:39:46 np0005604790 podman[85023]: 2026-02-02 09:39:46.383796784 +0000 UTC m=+0.057103784 container create afa637c0245369d5b98217bf334bd97a334a933ca9771687dbb13e3a23f440ea (image=quay.io/ceph/ceph:v19, name=adoring_perlman, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid)
Feb  2 04:39:46 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:46 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:46 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:46 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:46 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Feb  2 04:39:46 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Feb  2 04:39:46 np0005604790 ceph-mon[74489]: Deploying daemon crash.compute-2 on compute-2
Feb  2 04:39:46 np0005604790 systemd[1]: Started libpod-conmon-afa637c0245369d5b98217bf334bd97a334a933ca9771687dbb13e3a23f440ea.scope.
Feb  2 04:39:46 np0005604790 podman[85023]: 2026-02-02 09:39:46.358324455 +0000 UTC m=+0.031631515 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:39:46 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:39:46 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d2fe1cc9d259a1b324b25c2b30f3f19233ef555a49c13099869aa019871b8a7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:39:46 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d2fe1cc9d259a1b324b25c2b30f3f19233ef555a49c13099869aa019871b8a7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:39:46 np0005604790 podman[85023]: 2026-02-02 09:39:46.488408723 +0000 UTC m=+0.161715703 container init afa637c0245369d5b98217bf334bd97a334a933ca9771687dbb13e3a23f440ea (image=quay.io/ceph/ceph:v19, name=adoring_perlman, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Feb  2 04:39:46 np0005604790 podman[85023]: 2026-02-02 09:39:46.495626676 +0000 UTC m=+0.168933646 container start afa637c0245369d5b98217bf334bd97a334a933ca9771687dbb13e3a23f440ea (image=quay.io/ceph/ceph:v19, name=adoring_perlman, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Feb  2 04:39:46 np0005604790 podman[85023]: 2026-02-02 09:39:46.499626503 +0000 UTC m=+0.172933473 container attach afa637c0245369d5b98217bf334bd97a334a933ca9771687dbb13e3a23f440ea (image=quay.io/ceph/ceph:v19, name=adoring_perlman, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:39:46 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v58: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Feb  2 04:39:46 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Feb  2 04:39:46 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2428528003' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Feb  2 04:39:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Feb  2 04:39:47 np0005604790 ceph-mon[74489]: from='client.? 192.168.122.100:0/2428528003' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Feb  2 04:39:47 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2428528003' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb  2 04:39:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e14 e14: 2 total, 2 up, 2 in
Feb  2 04:39:47 np0005604790 adoring_perlman[85039]: pool 'vms' created
Feb  2 04:39:47 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e14: 2 total, 2 up, 2 in
Feb  2 04:39:47 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 14 pg[2.0( empty local-lis/les=0/0 n=0 ec=14/14 lis/c=0/0 les/c/f=0/0/0 sis=14) [1] r=0 lpr=14 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:39:47 np0005604790 systemd[1]: libpod-afa637c0245369d5b98217bf334bd97a334a933ca9771687dbb13e3a23f440ea.scope: Deactivated successfully.
Feb  2 04:39:47 np0005604790 podman[85023]: 2026-02-02 09:39:47.484736391 +0000 UTC m=+1.158043401 container died afa637c0245369d5b98217bf334bd97a334a933ca9771687dbb13e3a23f440ea (image=quay.io/ceph/ceph:v19, name=adoring_perlman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid)
Feb  2 04:39:47 np0005604790 systemd[1]: var-lib-containers-storage-overlay-6d2fe1cc9d259a1b324b25c2b30f3f19233ef555a49c13099869aa019871b8a7-merged.mount: Deactivated successfully.
Feb  2 04:39:47 np0005604790 podman[85023]: 2026-02-02 09:39:47.535917486 +0000 UTC m=+1.209224496 container remove afa637c0245369d5b98217bf334bd97a334a933ca9771687dbb13e3a23f440ea (image=quay.io/ceph/ceph:v19, name=adoring_perlman, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Feb  2 04:39:47 np0005604790 systemd[1]: libpod-conmon-afa637c0245369d5b98217bf334bd97a334a933ca9771687dbb13e3a23f440ea.scope: Deactivated successfully.
Feb  2 04:39:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Feb  2 04:39:47 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Feb  2 04:39:47 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Feb  2 04:39:47 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:47 np0005604790 ceph-mgr[74785]: [progress INFO root] complete: finished ev 1c62b8c5-3166-49a0-ba21-a5f1f01c18e5 (Updating crash deployment (+1 -> 3))
Feb  2 04:39:47 np0005604790 ceph-mgr[74785]: [progress INFO root] Completed event 1c62b8c5-3166-49a0-ba21-a5f1f01c18e5 (Updating crash deployment (+1 -> 3)) in 2 seconds
Feb  2 04:39:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Feb  2 04:39:47 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 04:39:47 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb  2 04:39:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 04:39:47 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb  2 04:39:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 04:39:47 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 04:39:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 04:39:47 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb  2 04:39:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 04:39:47 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 04:39:47 np0005604790 python3[85104]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d241d473-9fcb-5f74-b163-f1ca4454e7f1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:39:47 np0005604790 podman[85140]: 2026-02-02 09:39:47.928251409 +0000 UTC m=+0.053569120 container create 1a9ffe65a981a07e835d5d834cdc41f597eeef4089f071d750fe3acfd70029ab (image=quay.io/ceph/ceph:v19, name=distracted_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb  2 04:39:47 np0005604790 systemd[1]: Started libpod-conmon-1a9ffe65a981a07e835d5d834cdc41f597eeef4089f071d750fe3acfd70029ab.scope.
Feb  2 04:39:47 np0005604790 podman[85140]: 2026-02-02 09:39:47.903202101 +0000 UTC m=+0.028519882 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:39:48 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:39:48 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/307d9e0b496077e49a45df886127b0a72e9540700a7210162edb9703ec3ec774/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:39:48 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/307d9e0b496077e49a45df886127b0a72e9540700a7210162edb9703ec3ec774/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:39:48 np0005604790 podman[85140]: 2026-02-02 09:39:48.036721851 +0000 UTC m=+0.162039522 container init 1a9ffe65a981a07e835d5d834cdc41f597eeef4089f071d750fe3acfd70029ab (image=quay.io/ceph/ceph:v19, name=distracted_thompson, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 04:39:48 np0005604790 podman[85140]: 2026-02-02 09:39:48.044298433 +0000 UTC m=+0.169616104 container start 1a9ffe65a981a07e835d5d834cdc41f597eeef4089f071d750fe3acfd70029ab (image=quay.io/ceph/ceph:v19, name=distracted_thompson, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1)
Feb  2 04:39:48 np0005604790 podman[85140]: 2026-02-02 09:39:48.047858578 +0000 UTC m=+0.173176289 container attach 1a9ffe65a981a07e835d5d834cdc41f597eeef4089f071d750fe3acfd70029ab (image=quay.io/ceph/ceph:v19, name=distracted_thompson, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 04:39:48 np0005604790 podman[85229]: 2026-02-02 09:39:48.302055297 +0000 UTC m=+0.051931846 container create 399d288d688000394569d7374b75a46a44443b68494e314070568e5d4b2d4380 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_dewdney, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Feb  2 04:39:48 np0005604790 systemd[1]: Started libpod-conmon-399d288d688000394569d7374b75a46a44443b68494e314070568e5d4b2d4380.scope.
Feb  2 04:39:48 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:39:48 np0005604790 podman[85229]: 2026-02-02 09:39:48.283116572 +0000 UTC m=+0.032993181 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:39:48 np0005604790 podman[85229]: 2026-02-02 09:39:48.377269503 +0000 UTC m=+0.127146072 container init 399d288d688000394569d7374b75a46a44443b68494e314070568e5d4b2d4380 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_dewdney, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 04:39:48 np0005604790 podman[85229]: 2026-02-02 09:39:48.383668293 +0000 UTC m=+0.133544832 container start 399d288d688000394569d7374b75a46a44443b68494e314070568e5d4b2d4380 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_dewdney, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Feb  2 04:39:48 np0005604790 podman[85229]: 2026-02-02 09:39:48.386675003 +0000 UTC m=+0.136551542 container attach 399d288d688000394569d7374b75a46a44443b68494e314070568e5d4b2d4380 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_dewdney, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Feb  2 04:39:48 np0005604790 awesome_dewdney[85246]: 167 167
Feb  2 04:39:48 np0005604790 systemd[1]: libpod-399d288d688000394569d7374b75a46a44443b68494e314070568e5d4b2d4380.scope: Deactivated successfully.
Feb  2 04:39:48 np0005604790 conmon[85246]: conmon 399d288d688000394569 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-399d288d688000394569d7374b75a46a44443b68494e314070568e5d4b2d4380.scope/container/memory.events
Feb  2 04:39:48 np0005604790 podman[85229]: 2026-02-02 09:39:48.391612635 +0000 UTC m=+0.141489174 container died 399d288d688000394569d7374b75a46a44443b68494e314070568e5d4b2d4380 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_dewdney, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb  2 04:39:48 np0005604790 systemd[1]: var-lib-containers-storage-overlay-6ee67db939ff0c000c0edf102dcd36bd10b14dad7ca3194b1af6d497473a1b76-merged.mount: Deactivated successfully.
Feb  2 04:39:48 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Feb  2 04:39:48 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4145763547' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Feb  2 04:39:48 np0005604790 podman[85229]: 2026-02-02 09:39:48.441836234 +0000 UTC m=+0.191712773 container remove 399d288d688000394569d7374b75a46a44443b68494e314070568e5d4b2d4380 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_dewdney, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:39:48 np0005604790 systemd[1]: libpod-conmon-399d288d688000394569d7374b75a46a44443b68494e314070568e5d4b2d4380.scope: Deactivated successfully.
Feb  2 04:39:48 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Feb  2 04:39:48 np0005604790 ceph-mon[74489]: from='client.? 192.168.122.100:0/2428528003' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb  2 04:39:48 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:48 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:48 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:48 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:48 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb  2 04:39:48 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb  2 04:39:48 np0005604790 ceph-mon[74489]: from='client.? 192.168.122.100:0/4145763547' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Feb  2 04:39:48 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4145763547' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb  2 04:39:48 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e15 e15: 2 total, 2 up, 2 in
Feb  2 04:39:48 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e15: 2 total, 2 up, 2 in
Feb  2 04:39:48 np0005604790 distracted_thompson[85169]: pool 'volumes' created
Feb  2 04:39:48 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 15 pg[2.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=0/0 les/c/f=0/0/0 sis=14) [1] r=0 lpr=14 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:39:48 np0005604790 systemd[1]: libpod-1a9ffe65a981a07e835d5d834cdc41f597eeef4089f071d750fe3acfd70029ab.scope: Deactivated successfully.
Feb  2 04:39:48 np0005604790 podman[85140]: 2026-02-02 09:39:48.502943944 +0000 UTC m=+0.628261635 container died 1a9ffe65a981a07e835d5d834cdc41f597eeef4089f071d750fe3acfd70029ab (image=quay.io/ceph/ceph:v19, name=distracted_thompson, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:39:48 np0005604790 systemd[1]: var-lib-containers-storage-overlay-307d9e0b496077e49a45df886127b0a72e9540700a7210162edb9703ec3ec774-merged.mount: Deactivated successfully.
Feb  2 04:39:48 np0005604790 podman[85140]: 2026-02-02 09:39:48.537439774 +0000 UTC m=+0.662757445 container remove 1a9ffe65a981a07e835d5d834cdc41f597eeef4089f071d750fe3acfd70029ab (image=quay.io/ceph/ceph:v19, name=distracted_thompson, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb  2 04:39:48 np0005604790 systemd[1]: libpod-conmon-1a9ffe65a981a07e835d5d834cdc41f597eeef4089f071d750fe3acfd70029ab.scope: Deactivated successfully.
Feb  2 04:39:48 np0005604790 podman[85284]: 2026-02-02 09:39:48.578438817 +0000 UTC m=+0.041756854 container create f45350c704243b107fa3c1cd54a1b5de288beebfa30b2d099fe53bcee8677db6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_williams, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Feb  2 04:39:48 np0005604790 systemd[1]: Started libpod-conmon-f45350c704243b107fa3c1cd54a1b5de288beebfa30b2d099fe53bcee8677db6.scope.
Feb  2 04:39:48 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:39:48 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8151bce1dc3990d59564baf4788aca595e2732c36c96a9c959e11952e15b0d17/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 04:39:48 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8151bce1dc3990d59564baf4788aca595e2732c36c96a9c959e11952e15b0d17/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:39:48 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8151bce1dc3990d59564baf4788aca595e2732c36c96a9c959e11952e15b0d17/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:39:48 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8151bce1dc3990d59564baf4788aca595e2732c36c96a9c959e11952e15b0d17/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 04:39:48 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8151bce1dc3990d59564baf4788aca595e2732c36c96a9c959e11952e15b0d17/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 04:39:48 np0005604790 podman[85284]: 2026-02-02 09:39:48.649388829 +0000 UTC m=+0.112706906 container init f45350c704243b107fa3c1cd54a1b5de288beebfa30b2d099fe53bcee8677db6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_williams, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Feb  2 04:39:48 np0005604790 podman[85284]: 2026-02-02 09:39:48.559720118 +0000 UTC m=+0.023038195 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:39:48 np0005604790 podman[85284]: 2026-02-02 09:39:48.656285663 +0000 UTC m=+0.119603700 container start f45350c704243b107fa3c1cd54a1b5de288beebfa30b2d099fe53bcee8677db6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_williams, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:39:48 np0005604790 podman[85284]: 2026-02-02 09:39:48.660372042 +0000 UTC m=+0.123690119 container attach f45350c704243b107fa3c1cd54a1b5de288beebfa30b2d099fe53bcee8677db6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_williams, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0)
Feb  2 04:39:48 np0005604790 python3[85334]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d241d473-9fcb-5f74-b163-f1ca4454e7f1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:39:48 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v61: 3 pgs: 1 unknown, 1 creating+peering, 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Feb  2 04:39:48 np0005604790 podman[85335]: 2026-02-02 09:39:48.851650793 +0000 UTC m=+0.041193780 container create e40316bd916114137704946f0bc6cfc24f2dd203657d8945253f36d50d12a1f1 (image=quay.io/ceph/ceph:v19, name=stoic_babbage, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 04:39:48 np0005604790 systemd[1]: Started libpod-conmon-e40316bd916114137704946f0bc6cfc24f2dd203657d8945253f36d50d12a1f1.scope.
Feb  2 04:39:48 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:39:48 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0346338c7cdbec4e9a30de04b777213c4f5927f13ff025cb94b5c01f9403263d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:39:48 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0346338c7cdbec4e9a30de04b777213c4f5927f13ff025cb94b5c01f9403263d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:39:48 np0005604790 podman[85335]: 2026-02-02 09:39:48.928843201 +0000 UTC m=+0.118386178 container init e40316bd916114137704946f0bc6cfc24f2dd203657d8945253f36d50d12a1f1 (image=quay.io/ceph/ceph:v19, name=stoic_babbage, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb  2 04:39:48 np0005604790 podman[85335]: 2026-02-02 09:39:48.835126042 +0000 UTC m=+0.024669049 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:39:48 np0005604790 podman[85335]: 2026-02-02 09:39:48.935225111 +0000 UTC m=+0.124768088 container start e40316bd916114137704946f0bc6cfc24f2dd203657d8945253f36d50d12a1f1 (image=quay.io/ceph/ceph:v19, name=stoic_babbage, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid)
Feb  2 04:39:48 np0005604790 podman[85335]: 2026-02-02 09:39:48.938638643 +0000 UTC m=+0.128181650 container attach e40316bd916114137704946f0bc6cfc24f2dd203657d8945253f36d50d12a1f1 (image=quay.io/ceph/ceph:v19, name=stoic_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 04:39:48 np0005604790 hardcore_williams[85305]: --> passed data devices: 0 physical, 1 LVM
Feb  2 04:39:48 np0005604790 hardcore_williams[85305]: --> All data devices are unavailable
Feb  2 04:39:48 np0005604790 systemd[1]: libpod-f45350c704243b107fa3c1cd54a1b5de288beebfa30b2d099fe53bcee8677db6.scope: Deactivated successfully.
Feb  2 04:39:48 np0005604790 podman[85284]: 2026-02-02 09:39:48.973897333 +0000 UTC m=+0.437215360 container died f45350c704243b107fa3c1cd54a1b5de288beebfa30b2d099fe53bcee8677db6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_williams, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default)
Feb  2 04:39:48 np0005604790 systemd[1]: var-lib-containers-storage-overlay-8151bce1dc3990d59564baf4788aca595e2732c36c96a9c959e11952e15b0d17-merged.mount: Deactivated successfully.
Feb  2 04:39:49 np0005604790 podman[85284]: 2026-02-02 09:39:49.009592695 +0000 UTC m=+0.472910722 container remove f45350c704243b107fa3c1cd54a1b5de288beebfa30b2d099fe53bcee8677db6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_williams, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:39:49 np0005604790 systemd[1]: libpod-conmon-f45350c704243b107fa3c1cd54a1b5de288beebfa30b2d099fe53bcee8677db6.scope: Deactivated successfully.
Feb  2 04:39:49 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Feb  2 04:39:49 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2619661592' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Feb  2 04:39:49 np0005604790 ceph-mgr[74785]: [progress INFO root] Writing back 5 completed events
Feb  2 04:39:49 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Feb  2 04:39:49 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:49 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd new", "uuid": "d6a8c5e6-c7a4-4174-b954-0533ecfedcd2"} v 0)
Feb  2 04:39:49 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d6a8c5e6-c7a4-4174-b954-0533ecfedcd2"}]: dispatch
Feb  2 04:39:49 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Feb  2 04:39:49 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2619661592' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb  2 04:39:49 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "d6a8c5e6-c7a4-4174-b954-0533ecfedcd2"}]': finished
Feb  2 04:39:49 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e16 e16: 3 total, 2 up, 3 in
Feb  2 04:39:49 np0005604790 stoic_babbage[85354]: pool 'backups' created
Feb  2 04:39:49 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e16: 3 total, 2 up, 3 in
Feb  2 04:39:49 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb  2 04:39:49 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Feb  2 04:39:49 np0005604790 ceph-mgr[74785]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb  2 04:39:49 np0005604790 podman[85335]: 2026-02-02 09:39:49.44180477 +0000 UTC m=+0.631347747 container died e40316bd916114137704946f0bc6cfc24f2dd203657d8945253f36d50d12a1f1 (image=quay.io/ceph/ceph:v19, name=stoic_babbage, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 04:39:49 np0005604790 systemd[1]: libpod-e40316bd916114137704946f0bc6cfc24f2dd203657d8945253f36d50d12a1f1.scope: Deactivated successfully.
Feb  2 04:39:49 np0005604790 ceph-mon[74489]: log_channel(cluster) log [WRN] : Health check failed: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Feb  2 04:39:49 np0005604790 systemd[1]: var-lib-containers-storage-overlay-0346338c7cdbec4e9a30de04b777213c4f5927f13ff025cb94b5c01f9403263d-merged.mount: Deactivated successfully.
Feb  2 04:39:49 np0005604790 ceph-mon[74489]: from='client.? 192.168.122.100:0/4145763547' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb  2 04:39:49 np0005604790 ceph-mon[74489]: from='client.? 192.168.122.100:0/2619661592' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Feb  2 04:39:49 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:49 np0005604790 ceph-mon[74489]: from='client.? 192.168.122.102:0/1508149425' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d6a8c5e6-c7a4-4174-b954-0533ecfedcd2"}]: dispatch
Feb  2 04:39:49 np0005604790 ceph-mon[74489]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d6a8c5e6-c7a4-4174-b954-0533ecfedcd2"}]: dispatch
Feb  2 04:39:49 np0005604790 ceph-mon[74489]: from='client.? 192.168.122.100:0/2619661592' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb  2 04:39:49 np0005604790 ceph-mon[74489]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "d6a8c5e6-c7a4-4174-b954-0533ecfedcd2"}]': finished
Feb  2 04:39:49 np0005604790 podman[85335]: 2026-02-02 09:39:49.499216261 +0000 UTC m=+0.688759238 container remove e40316bd916114137704946f0bc6cfc24f2dd203657d8945253f36d50d12a1f1 (image=quay.io/ceph/ceph:v19, name=stoic_babbage, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb  2 04:39:49 np0005604790 systemd[1]: libpod-conmon-e40316bd916114137704946f0bc6cfc24f2dd203657d8945253f36d50d12a1f1.scope: Deactivated successfully.
Feb  2 04:39:49 np0005604790 podman[85501]: 2026-02-02 09:39:49.613423407 +0000 UTC m=+0.057472184 container create 46f4956dfc1787f16e33165fc21ff56315092e21202c307935115ba39226ec90 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_hoover, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Feb  2 04:39:49 np0005604790 systemd[1]: Started libpod-conmon-46f4956dfc1787f16e33165fc21ff56315092e21202c307935115ba39226ec90.scope.
Feb  2 04:39:49 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:39:49 np0005604790 podman[85501]: 2026-02-02 09:39:49.593758002 +0000 UTC m=+0.037806789 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:39:49 np0005604790 podman[85501]: 2026-02-02 09:39:49.693767329 +0000 UTC m=+0.137816106 container init 46f4956dfc1787f16e33165fc21ff56315092e21202c307935115ba39226ec90 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_hoover, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb  2 04:39:49 np0005604790 podman[85501]: 2026-02-02 09:39:49.698871216 +0000 UTC m=+0.142919953 container start 46f4956dfc1787f16e33165fc21ff56315092e21202c307935115ba39226ec90 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_hoover, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:39:49 np0005604790 podman[85501]: 2026-02-02 09:39:49.702101812 +0000 UTC m=+0.146150649 container attach 46f4956dfc1787f16e33165fc21ff56315092e21202c307935115ba39226ec90 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_hoover, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 04:39:49 np0005604790 priceless_hoover[85542]: 167 167
Feb  2 04:39:49 np0005604790 systemd[1]: libpod-46f4956dfc1787f16e33165fc21ff56315092e21202c307935115ba39226ec90.scope: Deactivated successfully.
Feb  2 04:39:49 np0005604790 podman[85501]: 2026-02-02 09:39:49.705228425 +0000 UTC m=+0.149277192 container died 46f4956dfc1787f16e33165fc21ff56315092e21202c307935115ba39226ec90 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_hoover, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 04:39:49 np0005604790 systemd[1]: var-lib-containers-storage-overlay-ccc7abbc4501e946c2236bff578ba5a3d5cf096b99d6f688a475271b5d42f6d8-merged.mount: Deactivated successfully.
Feb  2 04:39:49 np0005604790 podman[85501]: 2026-02-02 09:39:49.742780036 +0000 UTC m=+0.186828783 container remove 46f4956dfc1787f16e33165fc21ff56315092e21202c307935115ba39226ec90 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_hoover, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Feb  2 04:39:49 np0005604790 systemd[1]: libpod-conmon-46f4956dfc1787f16e33165fc21ff56315092e21202c307935115ba39226ec90.scope: Deactivated successfully.
Feb  2 04:39:49 np0005604790 python3[85544]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d241d473-9fcb-5f74-b163-f1ca4454e7f1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:39:49 np0005604790 podman[85561]: 2026-02-02 09:39:49.853587521 +0000 UTC m=+0.040282135 container create b8a48d1334937d7e3043fed3ae891b0e4fab46d923cfda22679ce5fb671a6e82 (image=quay.io/ceph/ceph:v19, name=amazing_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 04:39:49 np0005604790 podman[85577]: 2026-02-02 09:39:49.874715465 +0000 UTC m=+0.043310086 container create 44c838c934c0ba5e3edec0b837ee84185d02e4ee65404953ffce0acb1f569ce5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_goldwasser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Feb  2 04:39:49 np0005604790 systemd[1]: Started libpod-conmon-b8a48d1334937d7e3043fed3ae891b0e4fab46d923cfda22679ce5fb671a6e82.scope.
Feb  2 04:39:49 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:39:49 np0005604790 systemd[1]: Started libpod-conmon-44c838c934c0ba5e3edec0b837ee84185d02e4ee65404953ffce0acb1f569ce5.scope.
Feb  2 04:39:49 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afb0352ff3dd1a9013782d64bd455eac4307b4a25e0d5f6edd980ae2a4cc6646/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:39:49 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afb0352ff3dd1a9013782d64bd455eac4307b4a25e0d5f6edd980ae2a4cc6646/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:39:49 np0005604790 podman[85561]: 2026-02-02 09:39:49.834842181 +0000 UTC m=+0.021536825 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:39:49 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:39:49 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b75d2a88d6cc16af43dbb6a24a90601500c2028c0402e594da30d36aab711e2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 04:39:49 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b75d2a88d6cc16af43dbb6a24a90601500c2028c0402e594da30d36aab711e2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:39:49 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b75d2a88d6cc16af43dbb6a24a90601500c2028c0402e594da30d36aab711e2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:39:49 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b75d2a88d6cc16af43dbb6a24a90601500c2028c0402e594da30d36aab711e2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 04:39:49 np0005604790 podman[85577]: 2026-02-02 09:39:49.852194034 +0000 UTC m=+0.020788635 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:39:49 np0005604790 podman[85561]: 2026-02-02 09:39:49.956082445 +0000 UTC m=+0.142777079 container init b8a48d1334937d7e3043fed3ae891b0e4fab46d923cfda22679ce5fb671a6e82 (image=quay.io/ceph/ceph:v19, name=amazing_bhaskara, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Feb  2 04:39:49 np0005604790 podman[85577]: 2026-02-02 09:39:49.965073644 +0000 UTC m=+0.133668305 container init 44c838c934c0ba5e3edec0b837ee84185d02e4ee65404953ffce0acb1f569ce5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_goldwasser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb  2 04:39:49 np0005604790 podman[85561]: 2026-02-02 09:39:49.970302374 +0000 UTC m=+0.156997008 container start b8a48d1334937d7e3043fed3ae891b0e4fab46d923cfda22679ce5fb671a6e82 (image=quay.io/ceph/ceph:v19, name=amazing_bhaskara, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 04:39:49 np0005604790 podman[85577]: 2026-02-02 09:39:49.973354975 +0000 UTC m=+0.141949566 container start 44c838c934c0ba5e3edec0b837ee84185d02e4ee65404953ffce0acb1f569ce5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_goldwasser, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Feb  2 04:39:49 np0005604790 podman[85561]: 2026-02-02 09:39:49.973691214 +0000 UTC m=+0.160385988 container attach b8a48d1334937d7e3043fed3ae891b0e4fab46d923cfda22679ce5fb671a6e82 (image=quay.io/ceph/ceph:v19, name=amazing_bhaskara, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1)
Feb  2 04:39:49 np0005604790 podman[85577]: 2026-02-02 09:39:49.976325904 +0000 UTC m=+0.144920545 container attach 44c838c934c0ba5e3edec0b837ee84185d02e4ee65404953ffce0acb1f569ce5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_goldwasser, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:39:50 np0005604790 funny_goldwasser[85601]: {
Feb  2 04:39:50 np0005604790 funny_goldwasser[85601]:    "1": [
Feb  2 04:39:50 np0005604790 funny_goldwasser[85601]:        {
Feb  2 04:39:50 np0005604790 funny_goldwasser[85601]:            "devices": [
Feb  2 04:39:50 np0005604790 funny_goldwasser[85601]:                "/dev/loop3"
Feb  2 04:39:50 np0005604790 funny_goldwasser[85601]:            ],
Feb  2 04:39:50 np0005604790 funny_goldwasser[85601]:            "lv_name": "ceph_lv0",
Feb  2 04:39:50 np0005604790 funny_goldwasser[85601]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 04:39:50 np0005604790 funny_goldwasser[85601]:            "lv_size": "21470642176",
Feb  2 04:39:50 np0005604790 funny_goldwasser[85601]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d241d473-9fcb-5f74-b163-f1ca4454e7f1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fabfc705-a3af-416c-81a4-3fd4d777fb5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 04:39:50 np0005604790 funny_goldwasser[85601]:            "lv_uuid": "lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a",
Feb  2 04:39:50 np0005604790 funny_goldwasser[85601]:            "name": "ceph_lv0",
Feb  2 04:39:50 np0005604790 funny_goldwasser[85601]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 04:39:50 np0005604790 funny_goldwasser[85601]:            "tags": {
Feb  2 04:39:50 np0005604790 funny_goldwasser[85601]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 04:39:50 np0005604790 funny_goldwasser[85601]:                "ceph.block_uuid": "lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a",
Feb  2 04:39:50 np0005604790 funny_goldwasser[85601]:                "ceph.cephx_lockbox_secret": "",
Feb  2 04:39:50 np0005604790 funny_goldwasser[85601]:                "ceph.cluster_fsid": "d241d473-9fcb-5f74-b163-f1ca4454e7f1",
Feb  2 04:39:50 np0005604790 funny_goldwasser[85601]:                "ceph.cluster_name": "ceph",
Feb  2 04:39:50 np0005604790 funny_goldwasser[85601]:                "ceph.crush_device_class": "",
Feb  2 04:39:50 np0005604790 funny_goldwasser[85601]:                "ceph.encrypted": "0",
Feb  2 04:39:50 np0005604790 funny_goldwasser[85601]:                "ceph.osd_fsid": "fabfc705-a3af-416c-81a4-3fd4d777fb5f",
Feb  2 04:39:50 np0005604790 funny_goldwasser[85601]:                "ceph.osd_id": "1",
Feb  2 04:39:50 np0005604790 funny_goldwasser[85601]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 04:39:50 np0005604790 funny_goldwasser[85601]:                "ceph.type": "block",
Feb  2 04:39:50 np0005604790 funny_goldwasser[85601]:                "ceph.vdo": "0",
Feb  2 04:39:50 np0005604790 funny_goldwasser[85601]:                "ceph.with_tpm": "0"
Feb  2 04:39:50 np0005604790 funny_goldwasser[85601]:            },
Feb  2 04:39:50 np0005604790 funny_goldwasser[85601]:            "type": "block",
Feb  2 04:39:50 np0005604790 funny_goldwasser[85601]:            "vg_name": "ceph_vg0"
Feb  2 04:39:50 np0005604790 funny_goldwasser[85601]:        }
Feb  2 04:39:50 np0005604790 funny_goldwasser[85601]:    ]
Feb  2 04:39:50 np0005604790 funny_goldwasser[85601]: }
Feb  2 04:39:50 np0005604790 systemd[1]: libpod-44c838c934c0ba5e3edec0b837ee84185d02e4ee65404953ffce0acb1f569ce5.scope: Deactivated successfully.
Feb  2 04:39:50 np0005604790 podman[85630]: 2026-02-02 09:39:50.334524376 +0000 UTC m=+0.031320946 container died 44c838c934c0ba5e3edec0b837ee84185d02e4ee65404953ffce0acb1f569ce5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_goldwasser, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:39:50 np0005604790 systemd[1]: var-lib-containers-storage-overlay-8b75d2a88d6cc16af43dbb6a24a90601500c2028c0402e594da30d36aab711e2-merged.mount: Deactivated successfully.
Feb  2 04:39:50 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Feb  2 04:39:50 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1316336526' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Feb  2 04:39:50 np0005604790 podman[85630]: 2026-02-02 09:39:50.381753816 +0000 UTC m=+0.078550376 container remove 44c838c934c0ba5e3edec0b837ee84185d02e4ee65404953ffce0acb1f569ce5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_goldwasser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Feb  2 04:39:50 np0005604790 systemd[1]: libpod-conmon-44c838c934c0ba5e3edec0b837ee84185d02e4ee65404953ffce0acb1f569ce5.scope: Deactivated successfully.
Feb  2 04:39:50 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Feb  2 04:39:50 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1316336526' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb  2 04:39:50 np0005604790 amazing_bhaskara[85596]: pool 'images' created
Feb  2 04:39:50 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e17 e17: 3 total, 2 up, 3 in
Feb  2 04:39:50 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e17: 3 total, 2 up, 3 in
Feb  2 04:39:50 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb  2 04:39:50 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Feb  2 04:39:50 np0005604790 ceph-mgr[74785]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb  2 04:39:50 np0005604790 systemd[1]: libpod-b8a48d1334937d7e3043fed3ae891b0e4fab46d923cfda22679ce5fb671a6e82.scope: Deactivated successfully.
Feb  2 04:39:50 np0005604790 podman[85650]: 2026-02-02 09:39:50.49517844 +0000 UTC m=+0.029182319 container died b8a48d1334937d7e3043fed3ae891b0e4fab46d923cfda22679ce5fb671a6e82 (image=quay.io/ceph/ceph:v19, name=amazing_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Feb  2 04:39:50 np0005604790 ceph-mon[74489]: Health check failed: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Feb  2 04:39:50 np0005604790 ceph-mon[74489]: from='client.? 192.168.122.100:0/1316336526' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Feb  2 04:39:50 np0005604790 ceph-mon[74489]: from='client.? 192.168.122.100:0/1316336526' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb  2 04:39:50 np0005604790 podman[85650]: 2026-02-02 09:39:50.553921396 +0000 UTC m=+0.087925255 container remove b8a48d1334937d7e3043fed3ae891b0e4fab46d923cfda22679ce5fb671a6e82 (image=quay.io/ceph/ceph:v19, name=amazing_bhaskara, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:39:50 np0005604790 systemd[1]: libpod-conmon-b8a48d1334937d7e3043fed3ae891b0e4fab46d923cfda22679ce5fb671a6e82.scope: Deactivated successfully.
Feb  2 04:39:50 np0005604790 systemd[1]: var-lib-containers-storage-overlay-afb0352ff3dd1a9013782d64bd455eac4307b4a25e0d5f6edd980ae2a4cc6646-merged.mount: Deactivated successfully.
Feb  2 04:39:50 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e17 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 04:39:50 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v64: 5 pgs: 3 unknown, 1 creating+peering, 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Feb  2 04:39:50 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.gzlyac started
Feb  2 04:39:50 np0005604790 ceph-mgr[74785]: mgr.server handle_open ignoring open from mgr.compute-2.gzlyac 192.168.122.102:0/2795197473; not ready for session (expect reconnect)
Feb  2 04:39:50 np0005604790 python3[85740]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d241d473-9fcb-5f74-b163-f1ca4454e7f1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:39:50 np0005604790 podman[85760]: 2026-02-02 09:39:50.930722084 +0000 UTC m=+0.050935879 container create 9e772f431e0e21d35ea9a8501f316c8fe27156a4957751504bc0c927236260bf (image=quay.io/ceph/ceph:v19, name=sweet_heyrovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Feb  2 04:39:50 np0005604790 systemd[1]: Started libpod-conmon-9e772f431e0e21d35ea9a8501f316c8fe27156a4957751504bc0c927236260bf.scope.
Feb  2 04:39:51 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:39:51 np0005604790 podman[85760]: 2026-02-02 09:39:50.910624638 +0000 UTC m=+0.030838463 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:39:51 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2cf8014ed5564f6dfa425a8f31398665798eec37f55d21f0ae5c1ed76672c1b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:39:51 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2cf8014ed5564f6dfa425a8f31398665798eec37f55d21f0ae5c1ed76672c1b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:39:51 np0005604790 podman[85760]: 2026-02-02 09:39:51.030600618 +0000 UTC m=+0.150814413 container init 9e772f431e0e21d35ea9a8501f316c8fe27156a4957751504bc0c927236260bf (image=quay.io/ceph/ceph:v19, name=sweet_heyrovsky, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 04:39:51 np0005604790 podman[85760]: 2026-02-02 09:39:51.038629792 +0000 UTC m=+0.158843627 container start 9e772f431e0e21d35ea9a8501f316c8fe27156a4957751504bc0c927236260bf (image=quay.io/ceph/ceph:v19, name=sweet_heyrovsky, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Feb  2 04:39:51 np0005604790 podman[85760]: 2026-02-02 09:39:51.042951357 +0000 UTC m=+0.163165152 container attach 9e772f431e0e21d35ea9a8501f316c8fe27156a4957751504bc0c927236260bf (image=quay.io/ceph/ceph:v19, name=sweet_heyrovsky, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 04:39:51 np0005604790 podman[85798]: 2026-02-02 09:39:51.133693727 +0000 UTC m=+0.060847944 container create 723f788b55a008ccb70a90f1e2f13e031934253810cf60844f9ed7f39ea84351 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Feb  2 04:39:51 np0005604790 systemd[1]: Started libpod-conmon-723f788b55a008ccb70a90f1e2f13e031934253810cf60844f9ed7f39ea84351.scope.
Feb  2 04:39:51 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:39:51 np0005604790 podman[85798]: 2026-02-02 09:39:51.191440187 +0000 UTC m=+0.118594474 container init 723f788b55a008ccb70a90f1e2f13e031934253810cf60844f9ed7f39ea84351 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_bhaskara, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 04:39:51 np0005604790 podman[85798]: 2026-02-02 09:39:51.199574134 +0000 UTC m=+0.126728351 container start 723f788b55a008ccb70a90f1e2f13e031934253810cf60844f9ed7f39ea84351 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_bhaskara, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:39:51 np0005604790 podman[85798]: 2026-02-02 09:39:51.108145536 +0000 UTC m=+0.035299803 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:39:51 np0005604790 tender_bhaskara[85833]: 167 167
Feb  2 04:39:51 np0005604790 podman[85798]: 2026-02-02 09:39:51.203527059 +0000 UTC m=+0.130681346 container attach 723f788b55a008ccb70a90f1e2f13e031934253810cf60844f9ed7f39ea84351 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_bhaskara, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid)
Feb  2 04:39:51 np0005604790 systemd[1]: libpod-723f788b55a008ccb70a90f1e2f13e031934253810cf60844f9ed7f39ea84351.scope: Deactivated successfully.
Feb  2 04:39:51 np0005604790 podman[85798]: 2026-02-02 09:39:51.20506774 +0000 UTC m=+0.132222017 container died 723f788b55a008ccb70a90f1e2f13e031934253810cf60844f9ed7f39ea84351 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 04:39:51 np0005604790 systemd[1]: var-lib-containers-storage-overlay-6333b8efbecf9eee07025898d11b4733124eb817ba6196af742f0cdd3dc20fd5-merged.mount: Deactivated successfully.
Feb  2 04:39:51 np0005604790 podman[85798]: 2026-02-02 09:39:51.255651819 +0000 UTC m=+0.182806036 container remove 723f788b55a008ccb70a90f1e2f13e031934253810cf60844f9ed7f39ea84351 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid)
Feb  2 04:39:51 np0005604790 systemd[1]: libpod-conmon-723f788b55a008ccb70a90f1e2f13e031934253810cf60844f9ed7f39ea84351.scope: Deactivated successfully.
Feb  2 04:39:51 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.teascl started
Feb  2 04:39:51 np0005604790 ceph-mgr[74785]: mgr.server handle_open ignoring open from mgr.compute-1.teascl 192.168.122.101:0/3311255887; not ready for session (expect reconnect)
Feb  2 04:39:51 np0005604790 podman[85857]: 2026-02-02 09:39:51.379730618 +0000 UTC m=+0.046467740 container create 44b7cf8267b66dce9b912f985f553c5274679f59140772dd6f3d30382b99b2ba (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_goldberg, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 04:39:51 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Feb  2 04:39:51 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2386791214' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Feb  2 04:39:51 np0005604790 systemd[1]: Started libpod-conmon-44b7cf8267b66dce9b912f985f553c5274679f59140772dd6f3d30382b99b2ba.scope.
Feb  2 04:39:51 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Feb  2 04:39:51 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2386791214' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb  2 04:39:51 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e18 e18: 3 total, 2 up, 3 in
Feb  2 04:39:51 np0005604790 sweet_heyrovsky[85794]: pool 'cephfs.cephfs.meta' created
Feb  2 04:39:51 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e18: 3 total, 2 up, 3 in
Feb  2 04:39:51 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb  2 04:39:51 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Feb  2 04:39:51 np0005604790 ceph-mgr[74785]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb  2 04:39:51 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:39:51 np0005604790 podman[85857]: 2026-02-02 09:39:51.357230348 +0000 UTC m=+0.023967540 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:39:51 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe7ef5f5b486a7b75c162aeaf40e6131e93081142153e1e90519ad46372afbff/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 04:39:51 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe7ef5f5b486a7b75c162aeaf40e6131e93081142153e1e90519ad46372afbff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:39:51 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe7ef5f5b486a7b75c162aeaf40e6131e93081142153e1e90519ad46372afbff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:39:51 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe7ef5f5b486a7b75c162aeaf40e6131e93081142153e1e90519ad46372afbff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 04:39:51 np0005604790 podman[85857]: 2026-02-02 09:39:51.469463511 +0000 UTC m=+0.136200623 container init 44b7cf8267b66dce9b912f985f553c5274679f59140772dd6f3d30382b99b2ba (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_goldberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb  2 04:39:51 np0005604790 podman[85857]: 2026-02-02 09:39:51.475112801 +0000 UTC m=+0.141849903 container start 44b7cf8267b66dce9b912f985f553c5274679f59140772dd6f3d30382b99b2ba (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_goldberg, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 04:39:51 np0005604790 podman[85857]: 2026-02-02 09:39:51.478852541 +0000 UTC m=+0.145589643 container attach 44b7cf8267b66dce9b912f985f553c5274679f59140772dd6f3d30382b99b2ba (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_goldberg, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 04:39:51 np0005604790 systemd[1]: libpod-9e772f431e0e21d35ea9a8501f316c8fe27156a4957751504bc0c927236260bf.scope: Deactivated successfully.
Feb  2 04:39:51 np0005604790 podman[85760]: 2026-02-02 09:39:51.483120195 +0000 UTC m=+0.603334030 container died 9e772f431e0e21d35ea9a8501f316c8fe27156a4957751504bc0c927236260bf (image=quay.io/ceph/ceph:v19, name=sweet_heyrovsky, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb  2 04:39:51 np0005604790 ceph-mon[74489]: from='client.? 192.168.122.100:0/2386791214' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Feb  2 04:39:51 np0005604790 ceph-mon[74489]: from='client.? 192.168.122.100:0/2386791214' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb  2 04:39:51 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : mgrmap e10: compute-0.djvyfo(active, since 109s), standbys: compute-2.gzlyac, compute-1.teascl
Feb  2 04:39:51 np0005604790 podman[85760]: 2026-02-02 09:39:51.548236511 +0000 UTC m=+0.668450306 container remove 9e772f431e0e21d35ea9a8501f316c8fe27156a4957751504bc0c927236260bf (image=quay.io/ceph/ceph:v19, name=sweet_heyrovsky, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:39:51 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.gzlyac", "id": "compute-2.gzlyac"} v 0)
Feb  2 04:39:51 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "mgr metadata", "who": "compute-2.gzlyac", "id": "compute-2.gzlyac"}]: dispatch
Feb  2 04:39:51 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.teascl", "id": "compute-1.teascl"} v 0)
Feb  2 04:39:51 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "mgr metadata", "who": "compute-1.teascl", "id": "compute-1.teascl"}]: dispatch
Feb  2 04:39:51 np0005604790 systemd[1]: libpod-conmon-9e772f431e0e21d35ea9a8501f316c8fe27156a4957751504bc0c927236260bf.scope: Deactivated successfully.
Feb  2 04:39:51 np0005604790 systemd[1]: var-lib-containers-storage-overlay-b2cf8014ed5564f6dfa425a8f31398665798eec37f55d21f0ae5c1ed76672c1b-merged.mount: Deactivated successfully.
Feb  2 04:39:51 np0005604790 python3[85922]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d241d473-9fcb-5f74-b163-f1ca4454e7f1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:39:51 np0005604790 podman[85940]: 2026-02-02 09:39:51.900472085 +0000 UTC m=+0.035399805 container create d142242dc1007e621cc0757545978259f28e447036a2694848301910c785d4ef (image=quay.io/ceph/ceph:v19, name=sweet_agnesi, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 04:39:51 np0005604790 systemd[1]: Started libpod-conmon-d142242dc1007e621cc0757545978259f28e447036a2694848301910c785d4ef.scope.
Feb  2 04:39:51 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:39:51 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4852e84a2801ea84086fa836374f4f28eb28f0c69ddb3c097fafaf4bfeee86a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:39:51 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4852e84a2801ea84086fa836374f4f28eb28f0c69ddb3c097fafaf4bfeee86a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:39:51 np0005604790 podman[85940]: 2026-02-02 09:39:51.88418836 +0000 UTC m=+0.019116110 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:39:51 np0005604790 podman[85940]: 2026-02-02 09:39:51.99330098 +0000 UTC m=+0.128228730 container init d142242dc1007e621cc0757545978259f28e447036a2694848301910c785d4ef (image=quay.io/ceph/ceph:v19, name=sweet_agnesi, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:39:52 np0005604790 podman[85940]: 2026-02-02 09:39:52.000935324 +0000 UTC m=+0.135863054 container start d142242dc1007e621cc0757545978259f28e447036a2694848301910c785d4ef (image=quay.io/ceph/ceph:v19, name=sweet_agnesi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb  2 04:39:52 np0005604790 podman[85940]: 2026-02-02 09:39:52.006613095 +0000 UTC m=+0.141541015 container attach d142242dc1007e621cc0757545978259f28e447036a2694848301910c785d4ef (image=quay.io/ceph/ceph:v19, name=sweet_agnesi, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Feb  2 04:39:52 np0005604790 lvm[86030]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 04:39:52 np0005604790 lvm[86030]: VG ceph_vg0 finished
Feb  2 04:39:52 np0005604790 flamboyant_goldberg[85877]: {}
Feb  2 04:39:52 np0005604790 systemd[1]: libpod-44b7cf8267b66dce9b912f985f553c5274679f59140772dd6f3d30382b99b2ba.scope: Deactivated successfully.
Feb  2 04:39:52 np0005604790 systemd[1]: libpod-44b7cf8267b66dce9b912f985f553c5274679f59140772dd6f3d30382b99b2ba.scope: Consumed 1.149s CPU time.
Feb  2 04:39:52 np0005604790 conmon[85877]: conmon 44b7cf8267b66dce9b91 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-44b7cf8267b66dce9b912f985f553c5274679f59140772dd6f3d30382b99b2ba.scope/container/memory.events
Feb  2 04:39:52 np0005604790 podman[85857]: 2026-02-02 09:39:52.287587618 +0000 UTC m=+0.954324750 container died 44b7cf8267b66dce9b912f985f553c5274679f59140772dd6f3d30382b99b2ba (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_goldberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Feb  2 04:39:52 np0005604790 systemd[1]: var-lib-containers-storage-overlay-fe7ef5f5b486a7b75c162aeaf40e6131e93081142153e1e90519ad46372afbff-merged.mount: Deactivated successfully.
Feb  2 04:39:52 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Feb  2 04:39:52 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4009666663' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Feb  2 04:39:52 np0005604790 podman[85857]: 2026-02-02 09:39:52.357506182 +0000 UTC m=+1.024243284 container remove 44b7cf8267b66dce9b912f985f553c5274679f59140772dd6f3d30382b99b2ba (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_goldberg, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb  2 04:39:52 np0005604790 systemd[1]: libpod-conmon-44b7cf8267b66dce9b912f985f553c5274679f59140772dd6f3d30382b99b2ba.scope: Deactivated successfully.
Feb  2 04:39:52 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 04:39:52 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:52 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 04:39:52 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:52 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Feb  2 04:39:52 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4009666663' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb  2 04:39:52 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e19 e19: 3 total, 2 up, 3 in
Feb  2 04:39:52 np0005604790 sweet_agnesi[85971]: pool 'cephfs.cephfs.data' created
Feb  2 04:39:52 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e19: 3 total, 2 up, 3 in
Feb  2 04:39:52 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb  2 04:39:52 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Feb  2 04:39:52 np0005604790 ceph-mgr[74785]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb  2 04:39:52 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 19 pg[7.0( empty local-lis/les=0/0 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [1] r=0 lpr=19 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:39:52 np0005604790 systemd[1]: libpod-d142242dc1007e621cc0757545978259f28e447036a2694848301910c785d4ef.scope: Deactivated successfully.
Feb  2 04:39:52 np0005604790 podman[85940]: 2026-02-02 09:39:52.463602731 +0000 UTC m=+0.598530451 container died d142242dc1007e621cc0757545978259f28e447036a2694848301910c785d4ef (image=quay.io/ceph/ceph:v19, name=sweet_agnesi, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb  2 04:39:52 np0005604790 systemd[1]: var-lib-containers-storage-overlay-b4852e84a2801ea84086fa836374f4f28eb28f0c69ddb3c097fafaf4bfeee86a-merged.mount: Deactivated successfully.
Feb  2 04:39:52 np0005604790 podman[85940]: 2026-02-02 09:39:52.504896093 +0000 UTC m=+0.639823863 container remove d142242dc1007e621cc0757545978259f28e447036a2694848301910c785d4ef (image=quay.io/ceph/ceph:v19, name=sweet_agnesi, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:39:52 np0005604790 systemd[1]: libpod-conmon-d142242dc1007e621cc0757545978259f28e447036a2694848301910c785d4ef.scope: Deactivated successfully.
Feb  2 04:39:52 np0005604790 ceph-mon[74489]: from='client.? 192.168.122.100:0/4009666663' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Feb  2 04:39:52 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:52 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:39:52 np0005604790 ceph-mon[74489]: from='client.? 192.168.122.100:0/4009666663' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb  2 04:39:52 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v67: 7 pgs: 2 creating+peering, 5 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Feb  2 04:39:52 np0005604790 python3[86089]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d241d473-9fcb-5f74-b163-f1ca4454e7f1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
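Unwrapped from the ansible-ansible.legacy.command parameter dump, the task above amounts to the following one-shot client container (reconstructed from the log line itself; only line breaks and spacing adjusted):

    podman run --rm --net=host --ipc=host \
        --volume /etc/ceph:/etc/ceph:z \
        --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
        --entrypoint ceph quay.io/ceph/ceph:v19 \
        --fsid d241d473-9fcb-5f74-b163-f1ca4454e7f1 \
        -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        osd pool application enable vms rbd

The same pattern repeats below for the volumes, backups, images, cephfs.cephfs.meta and cephfs.cephfs.data pools.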
Feb  2 04:39:52 np0005604790 podman[86090]: 2026-02-02 09:39:52.894119412 +0000 UTC m=+0.053716294 container create 9445e75494e1df5433c6575995d6993b9fb375fe0db264aba6781c77102ebe98 (image=quay.io/ceph/ceph:v19, name=flamboyant_einstein, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Feb  2 04:39:52 np0005604790 systemd[1]: Started libpod-conmon-9445e75494e1df5433c6575995d6993b9fb375fe0db264aba6781c77102ebe98.scope.
Feb  2 04:39:52 np0005604790 podman[86090]: 2026-02-02 09:39:52.867838801 +0000 UTC m=+0.027435743 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:39:52 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:39:52 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fde2f532042fbe068c3b5abdc9060cc5758aa21a4f7876c7ce99f540b5f5e9f9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:39:52 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fde2f532042fbe068c3b5abdc9060cc5758aa21a4f7876c7ce99f540b5f5e9f9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
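The two kernel notices above are informational: they fire each time a container bind-mounts paths whose backing XFS filesystem lacks the bigtime feature, so its classic 32-bit timestamps run out in 2038. Whether bigtime is enabled can be checked with, for example (a sketch; the root filesystem backs /var/lib/containers here):

    xfs_info / | grep -o 'bigtime=[01]'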
Feb  2 04:39:52 np0005604790 podman[86090]: 2026-02-02 09:39:52.982045717 +0000 UTC m=+0.141642599 container init 9445e75494e1df5433c6575995d6993b9fb375fe0db264aba6781c77102ebe98 (image=quay.io/ceph/ceph:v19, name=flamboyant_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 04:39:52 np0005604790 podman[86090]: 2026-02-02 09:39:52.990311507 +0000 UTC m=+0.149908349 container start 9445e75494e1df5433c6575995d6993b9fb375fe0db264aba6781c77102ebe98 (image=quay.io/ceph/ceph:v19, name=flamboyant_einstein, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb  2 04:39:52 np0005604790 podman[86090]: 2026-02-02 09:39:52.994027186 +0000 UTC m=+0.153624048 container attach 9445e75494e1df5433c6575995d6993b9fb375fe0db264aba6781c77102ebe98 (image=quay.io/ceph/ceph:v19, name=flamboyant_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb  2 04:39:53 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0)
Feb  2 04:39:53 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1859598156' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Feb  2 04:39:53 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Feb  2 04:39:53 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1859598156' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Feb  2 04:39:53 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e20 e20: 3 total, 2 up, 3 in
Feb  2 04:39:53 np0005604790 flamboyant_einstein[86105]: enabled application 'rbd' on pool 'vms'
Feb  2 04:39:53 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e20: 3 total, 2 up, 3 in
Feb  2 04:39:53 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb  2 04:39:53 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Feb  2 04:39:53 np0005604790 ceph-mgr[74785]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb  2 04:39:53 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 20 pg[7.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [1] r=0 lpr=19 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:39:53 np0005604790 systemd[1]: libpod-9445e75494e1df5433c6575995d6993b9fb375fe0db264aba6781c77102ebe98.scope: Deactivated successfully.
Feb  2 04:39:53 np0005604790 podman[86090]: 2026-02-02 09:39:53.534325954 +0000 UTC m=+0.693922796 container died 9445e75494e1df5433c6575995d6993b9fb375fe0db264aba6781c77102ebe98 (image=quay.io/ceph/ceph:v19, name=flamboyant_einstein, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb  2 04:39:53 np0005604790 systemd[1]: var-lib-containers-storage-overlay-fde2f532042fbe068c3b5abdc9060cc5758aa21a4f7876c7ce99f540b5f5e9f9-merged.mount: Deactivated successfully.
Feb  2 04:39:53 np0005604790 ceph-mon[74489]: from='client.? 192.168.122.100:0/1859598156' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Feb  2 04:39:53 np0005604790 ceph-mon[74489]: from='client.? 192.168.122.100:0/1859598156' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Feb  2 04:39:53 np0005604790 podman[86090]: 2026-02-02 09:39:53.664139056 +0000 UTC m=+0.823735928 container remove 9445e75494e1df5433c6575995d6993b9fb375fe0db264aba6781c77102ebe98 (image=quay.io/ceph/ceph:v19, name=flamboyant_einstein, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 04:39:53 np0005604790 systemd[1]: libpod-conmon-9445e75494e1df5433c6575995d6993b9fb375fe0db264aba6781c77102ebe98.scope: Deactivated successfully.
Feb  2 04:39:54 np0005604790 python3[86167]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d241d473-9fcb-5f74-b163-f1ca4454e7f1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:39:54 np0005604790 podman[86168]: 2026-02-02 09:39:54.074038776 +0000 UTC m=+0.052220093 container create b5ae08cab3b78035c509518f0b80d9725e9061753ec418064d0cf875f7aae612 (image=quay.io/ceph/ceph:v19, name=busy_hypatia, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Feb  2 04:39:54 np0005604790 systemd[1]: Started libpod-conmon-b5ae08cab3b78035c509518f0b80d9725e9061753ec418064d0cf875f7aae612.scope.
Feb  2 04:39:54 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:39:54 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8000f2e3c7220f0666178f973666816d400889acf448aa6ed4cc78ed4ec5ab92/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:39:54 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8000f2e3c7220f0666178f973666816d400889acf448aa6ed4cc78ed4ec5ab92/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:39:54 np0005604790 podman[86168]: 2026-02-02 09:39:54.137300223 +0000 UTC m=+0.115481550 container init b5ae08cab3b78035c509518f0b80d9725e9061753ec418064d0cf875f7aae612 (image=quay.io/ceph/ceph:v19, name=busy_hypatia, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:39:54 np0005604790 podman[86168]: 2026-02-02 09:39:54.045103995 +0000 UTC m=+0.023285372 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:39:54 np0005604790 podman[86168]: 2026-02-02 09:39:54.141155036 +0000 UTC m=+0.119336333 container start b5ae08cab3b78035c509518f0b80d9725e9061753ec418064d0cf875f7aae612 (image=quay.io/ceph/ceph:v19, name=busy_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb  2 04:39:54 np0005604790 podman[86168]: 2026-02-02 09:39:54.14395476 +0000 UTC m=+0.122136047 container attach b5ae08cab3b78035c509518f0b80d9725e9061753ec418064d0cf875f7aae612 (image=quay.io/ceph/ceph:v19, name=busy_hypatia, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 04:39:54 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Feb  2 04:39:54 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e21 e21: 3 total, 2 up, 3 in
Feb  2 04:39:54 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e21: 3 total, 2 up, 3 in
Feb  2 04:39:54 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb  2 04:39:54 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Feb  2 04:39:54 np0005604790 ceph-mgr[74785]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb  2 04:39:54 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0)
Feb  2 04:39:54 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/740467932' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Feb  2 04:39:54 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v70: 7 pgs: 2 creating+peering, 5 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Feb  2 04:39:55 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0)
Feb  2 04:39:55 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Feb  2 04:39:55 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 04:39:55 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 04:39:55 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-2
Feb  2 04:39:55 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-2
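The recurring "mgr finish mon failed to return metadata for osd.2: (2) No such file or directory" lines are the mgr polling an OSD that exists in the map ("3 total, 2 up, 3 in") but whose daemon has not started yet; cephadm only begins deploying osd.2 at this point. Once the daemon registers, the same query should succeed, e.g.:

    # returns host, device and version info once osd.2 is up
    ceph osd metadata 2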
Feb  2 04:39:55 np0005604790 ceph-mon[74489]: log_channel(cluster) log [WRN] : Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Feb  2 04:39:55 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Feb  2 04:39:55 np0005604790 ceph-mon[74489]: from='client.? 192.168.122.100:0/740467932' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Feb  2 04:39:55 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Feb  2 04:39:55 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/740467932' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Feb  2 04:39:55 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e22 e22: 3 total, 2 up, 3 in
Feb  2 04:39:55 np0005604790 busy_hypatia[86184]: enabled application 'rbd' on pool 'volumes'
Feb  2 04:39:55 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e22: 3 total, 2 up, 3 in
Feb  2 04:39:55 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb  2 04:39:55 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Feb  2 04:39:55 np0005604790 ceph-mgr[74785]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb  2 04:39:55 np0005604790 systemd[1]: libpod-b5ae08cab3b78035c509518f0b80d9725e9061753ec418064d0cf875f7aae612.scope: Deactivated successfully.
Feb  2 04:39:55 np0005604790 conmon[86184]: conmon b5ae08cab3b78035c509 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b5ae08cab3b78035c509518f0b80d9725e9061753ec418064d0cf875f7aae612.scope/container/memory.events
Feb  2 04:39:55 np0005604790 podman[86168]: 2026-02-02 09:39:55.565065407 +0000 UTC m=+1.543246704 container died b5ae08cab3b78035c509518f0b80d9725e9061753ec418064d0cf875f7aae612 (image=quay.io/ceph/ceph:v19, name=busy_hypatia, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Feb  2 04:39:55 np0005604790 systemd[1]: var-lib-containers-storage-overlay-8000f2e3c7220f0666178f973666816d400889acf448aa6ed4cc78ed4ec5ab92-merged.mount: Deactivated successfully.
Feb  2 04:39:55 np0005604790 podman[86168]: 2026-02-02 09:39:55.606710887 +0000 UTC m=+1.584892174 container remove b5ae08cab3b78035c509518f0b80d9725e9061753ec418064d0cf875f7aae612 (image=quay.io/ceph/ceph:v19, name=busy_hypatia, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:39:55 np0005604790 systemd[1]: libpod-conmon-b5ae08cab3b78035c509518f0b80d9725e9061753ec418064d0cf875f7aae612.scope: Deactivated successfully.
Feb  2 04:39:55 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e22 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
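_set_new_cache_sizes shows the monitor re-balancing its internal caches (incremental and full osdmap caches plus the rocksdb kv cache) against its memory target. Assuming default autotuning, that target is mon_memory_target, which can be inspected or raised like so (a sketch):

    # current autotune target for the mon caches
    ceph config get mon mon_memory_target
    # example: raise it to 2 GiB
    ceph config set mon mon_memory_target 2147483648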
Feb  2 04:39:55 np0005604790 python3[86246]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d241d473-9fcb-5f74-b163-f1ca4454e7f1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:39:55 np0005604790 podman[86247]: 2026-02-02 09:39:55.992986118 +0000 UTC m=+0.058649335 container create ed68da88c083889fc3c6c5d57b8283a71a449cdd15e5058fc73eae5002da368f (image=quay.io/ceph/ceph:v19, name=exciting_kowalevski, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Feb  2 04:39:56 np0005604790 systemd[1]: Started libpod-conmon-ed68da88c083889fc3c6c5d57b8283a71a449cdd15e5058fc73eae5002da368f.scope.
Feb  2 04:39:56 np0005604790 podman[86247]: 2026-02-02 09:39:55.965781213 +0000 UTC m=+0.031444480 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:39:56 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:39:56 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00f0432da52f8582f95ca34d2272d1329197859aa5f9596a2672edec1d5b4be6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:39:56 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00f0432da52f8582f95ca34d2272d1329197859aa5f9596a2672edec1d5b4be6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:39:56 np0005604790 podman[86247]: 2026-02-02 09:39:56.093464088 +0000 UTC m=+0.159127305 container init ed68da88c083889fc3c6c5d57b8283a71a449cdd15e5058fc73eae5002da368f (image=quay.io/ceph/ceph:v19, name=exciting_kowalevski, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb  2 04:39:56 np0005604790 podman[86247]: 2026-02-02 09:39:56.102134459 +0000 UTC m=+0.167797666 container start ed68da88c083889fc3c6c5d57b8283a71a449cdd15e5058fc73eae5002da368f (image=quay.io/ceph/ceph:v19, name=exciting_kowalevski, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:39:56 np0005604790 podman[86247]: 2026-02-02 09:39:56.106338681 +0000 UTC m=+0.172001898 container attach ed68da88c083889fc3c6c5d57b8283a71a449cdd15e5058fc73eae5002da368f (image=quay.io/ceph/ceph:v19, name=exciting_kowalevski, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 04:39:56 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0)
Feb  2 04:39:56 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1601645049' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Feb  2 04:39:56 np0005604790 ceph-mon[74489]: Deploying daemon osd.2 on compute-2
Feb  2 04:39:56 np0005604790 ceph-mon[74489]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Feb  2 04:39:56 np0005604790 ceph-mon[74489]: from='client.? 192.168.122.100:0/740467932' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Feb  2 04:39:56 np0005604790 ceph-mon[74489]: from='client.? 192.168.122.100:0/1601645049' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Feb  2 04:39:56 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Feb  2 04:39:56 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1601645049' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Feb  2 04:39:56 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e23 e23: 3 total, 2 up, 3 in
Feb  2 04:39:56 np0005604790 exciting_kowalevski[86262]: enabled application 'rbd' on pool 'backups'
Feb  2 04:39:56 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e23: 3 total, 2 up, 3 in
Feb  2 04:39:56 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb  2 04:39:56 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Feb  2 04:39:56 np0005604790 ceph-mgr[74785]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb  2 04:39:56 np0005604790 systemd[1]: libpod-ed68da88c083889fc3c6c5d57b8283a71a449cdd15e5058fc73eae5002da368f.scope: Deactivated successfully.
Feb  2 04:39:56 np0005604790 podman[86247]: 2026-02-02 09:39:56.578707447 +0000 UTC m=+0.644370624 container died ed68da88c083889fc3c6c5d57b8283a71a449cdd15e5058fc73eae5002da368f (image=quay.io/ceph/ceph:v19, name=exciting_kowalevski, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:39:56 np0005604790 systemd[1]: var-lib-containers-storage-overlay-00f0432da52f8582f95ca34d2272d1329197859aa5f9596a2672edec1d5b4be6-merged.mount: Deactivated successfully.
Feb  2 04:39:56 np0005604790 podman[86247]: 2026-02-02 09:39:56.627529759 +0000 UTC m=+0.693192936 container remove ed68da88c083889fc3c6c5d57b8283a71a449cdd15e5058fc73eae5002da368f (image=quay.io/ceph/ceph:v19, name=exciting_kowalevski, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 04:39:56 np0005604790 systemd[1]: libpod-conmon-ed68da88c083889fc3c6c5d57b8283a71a449cdd15e5058fc73eae5002da368f.scope: Deactivated successfully.
Feb  2 04:39:56 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v73: 7 pgs: 1 creating+peering, 6 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Feb  2 04:39:56 np0005604790 python3[86325]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d241d473-9fcb-5f74-b163-f1ca4454e7f1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:39:57 np0005604790 podman[86326]: 2026-02-02 09:39:57.026034036 +0000 UTC m=+0.062431096 container create 572e7d5c128946b130f335bc34eae75a94a6e1d2a3068d8d03b145e0932517f0 (image=quay.io/ceph/ceph:v19, name=youthful_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:39:57 np0005604790 systemd[1]: Started libpod-conmon-572e7d5c128946b130f335bc34eae75a94a6e1d2a3068d8d03b145e0932517f0.scope.
Feb  2 04:39:57 np0005604790 podman[86326]: 2026-02-02 09:39:56.99841813 +0000 UTC m=+0.034815270 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:39:57 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:39:57 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ee6e17078f74192580a332f8844356018b8b1304f01c923bda0af2ee8a40570/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:39:57 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ee6e17078f74192580a332f8844356018b8b1304f01c923bda0af2ee8a40570/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:39:57 np0005604790 podman[86326]: 2026-02-02 09:39:57.125798707 +0000 UTC m=+0.162195807 container init 572e7d5c128946b130f335bc34eae75a94a6e1d2a3068d8d03b145e0932517f0 (image=quay.io/ceph/ceph:v19, name=youthful_banach, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb  2 04:39:57 np0005604790 podman[86326]: 2026-02-02 09:39:57.133291706 +0000 UTC m=+0.169688796 container start 572e7d5c128946b130f335bc34eae75a94a6e1d2a3068d8d03b145e0932517f0 (image=quay.io/ceph/ceph:v19, name=youthful_banach, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Feb  2 04:39:57 np0005604790 podman[86326]: 2026-02-02 09:39:57.137056737 +0000 UTC m=+0.173453827 container attach 572e7d5c128946b130f335bc34eae75a94a6e1d2a3068d8d03b145e0932517f0 (image=quay.io/ceph/ceph:v19, name=youthful_banach, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb  2 04:39:57 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0)
Feb  2 04:39:57 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2116931678' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Feb  2 04:39:57 np0005604790 ceph-mon[74489]: from='client.? 192.168.122.100:0/1601645049' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Feb  2 04:39:58 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Feb  2 04:39:58 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2116931678' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Feb  2 04:39:58 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e24 e24: 3 total, 2 up, 3 in
Feb  2 04:39:58 np0005604790 youthful_banach[86342]: enabled application 'rbd' on pool 'images'
Feb  2 04:39:58 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e24: 3 total, 2 up, 3 in
Feb  2 04:39:58 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb  2 04:39:58 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Feb  2 04:39:58 np0005604790 ceph-mgr[74785]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb  2 04:39:58 np0005604790 ceph-mon[74489]: from='client.? 192.168.122.100:0/2116931678' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Feb  2 04:39:58 np0005604790 ceph-mon[74489]: from='client.? 192.168.122.100:0/2116931678' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Feb  2 04:39:58 np0005604790 systemd[1]: libpod-572e7d5c128946b130f335bc34eae75a94a6e1d2a3068d8d03b145e0932517f0.scope: Deactivated successfully.
Feb  2 04:39:58 np0005604790 podman[86326]: 2026-02-02 09:39:58.598340284 +0000 UTC m=+1.634737354 container died 572e7d5c128946b130f335bc34eae75a94a6e1d2a3068d8d03b145e0932517f0 (image=quay.io/ceph/ceph:v19, name=youthful_banach, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb  2 04:39:58 np0005604790 systemd[1]: var-lib-containers-storage-overlay-2ee6e17078f74192580a332f8844356018b8b1304f01c923bda0af2ee8a40570-merged.mount: Deactivated successfully.
Feb  2 04:39:58 np0005604790 podman[86326]: 2026-02-02 09:39:58.641781492 +0000 UTC m=+1.678178582 container remove 572e7d5c128946b130f335bc34eae75a94a6e1d2a3068d8d03b145e0932517f0 (image=quay.io/ceph/ceph:v19, name=youthful_banach, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 04:39:58 np0005604790 systemd[1]: libpod-conmon-572e7d5c128946b130f335bc34eae75a94a6e1d2a3068d8d03b145e0932517f0.scope: Deactivated successfully.
Feb  2 04:39:58 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v75: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
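With pgmap v75 reporting all 7 PGs active+clean, the pools created during this run have finished peering. A quick confirmation from any admin node (sketch):

    ceph pg stat      # expect: 7 pgs: 7 active+clean
    ceph -s           # overall cluster health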
Feb  2 04:39:58 np0005604790 python3[86404]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d241d473-9fcb-5f74-b163-f1ca4454e7f1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:39:59 np0005604790 podman[86405]: 2026-02-02 09:39:59.069892488 +0000 UTC m=+0.077425275 container create 0a8449ea0332fe82d94bd5a86c2f7155842ff57f5a7dff2c88e4aca2be604d66 (image=quay.io/ceph/ceph:v19, name=sweet_mclaren, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb  2 04:39:59 np0005604790 podman[86405]: 2026-02-02 09:39:59.020691137 +0000 UTC m=+0.028223984 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:39:59 np0005604790 systemd[1]: Started libpod-conmon-0a8449ea0332fe82d94bd5a86c2f7155842ff57f5a7dff2c88e4aca2be604d66.scope.
Feb  2 04:39:59 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:39:59 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b62084220f6909fde962d0b783c5dd60feb4766913d358bdd9d924b6b810a27/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:39:59 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b62084220f6909fde962d0b783c5dd60feb4766913d358bdd9d924b6b810a27/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:39:59 np0005604790 podman[86405]: 2026-02-02 09:39:59.278650976 +0000 UTC m=+0.286183803 container init 0a8449ea0332fe82d94bd5a86c2f7155842ff57f5a7dff2c88e4aca2be604d66 (image=quay.io/ceph/ceph:v19, name=sweet_mclaren, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb  2 04:39:59 np0005604790 podman[86405]: 2026-02-02 09:39:59.284260915 +0000 UTC m=+0.291793702 container start 0a8449ea0332fe82d94bd5a86c2f7155842ff57f5a7dff2c88e4aca2be604d66 (image=quay.io/ceph/ceph:v19, name=sweet_mclaren, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Feb  2 04:39:59 np0005604790 podman[86405]: 2026-02-02 09:39:59.309524779 +0000 UTC m=+0.317057596 container attach 0a8449ea0332fe82d94bd5a86c2f7155842ff57f5a7dff2c88e4aca2be604d66 (image=quay.io/ceph/ceph:v19, name=sweet_mclaren, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:39:59 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Feb  2 04:39:59 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0)
Feb  2 04:39:59 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4028203447' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Feb  2 04:39:59 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Feb  2 04:40:00 np0005604790 ceph-mon[74489]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 3 pool(s) do not have an application enabled
Feb  2 04:40:00 np0005604790 ceph-mon[74489]: log_channel(cluster) log [WRN] : [WRN] POOL_APP_NOT_ENABLED: 3 pool(s) do not have an application enabled
Feb  2 04:40:00 np0005604790 ceph-mon[74489]: log_channel(cluster) log [WRN] :     application not enabled on pool 'images'
Feb  2 04:40:00 np0005604790 ceph-mon[74489]: log_channel(cluster) log [WRN] :     application not enabled on pool 'cephfs.cephfs.meta'
Feb  2 04:40:00 np0005604790 ceph-mon[74489]: log_channel(cluster) log [WRN] :     application not enabled on pool 'cephfs.cephfs.data'
Feb  2 04:40:00 np0005604790 ceph-mon[74489]: log_channel(cluster) log [WRN] :     use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
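This health detail lags slightly: 'images' was already tagged rbd just above, and the two cephfs pools are tagged by the tasks that follow. Done by hand, the commands would be exactly what the hint suggests:

    ceph osd pool application enable images rbd
    ceph osd pool application enable cephfs.cephfs.meta cephfs
    ceph osd pool application enable cephfs.cephfs.data cephfs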
Feb  2 04:40:00 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4028203447' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Feb  2 04:40:00 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e25 e25: 3 total, 2 up, 3 in
Feb  2 04:40:00 np0005604790 sweet_mclaren[86420]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Feb  2 04:40:00 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:00 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e25: 3 total, 2 up, 3 in
Feb  2 04:40:00 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb  2 04:40:00 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Feb  2 04:40:00 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Feb  2 04:40:00 np0005604790 ceph-mgr[74785]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb  2 04:40:00 np0005604790 systemd[1]: libpod-0a8449ea0332fe82d94bd5a86c2f7155842ff57f5a7dff2c88e4aca2be604d66.scope: Deactivated successfully.
Feb  2 04:40:00 np0005604790 podman[86405]: 2026-02-02 09:40:00.292365268 +0000 UTC m=+1.299898075 container died 0a8449ea0332fe82d94bd5a86c2f7155842ff57f5a7dff2c88e4aca2be604d66 (image=quay.io/ceph/ceph:v19, name=sweet_mclaren, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:40:00 np0005604790 systemd[1]: var-lib-containers-storage-overlay-3b62084220f6909fde962d0b783c5dd60feb4766913d358bdd9d924b6b810a27-merged.mount: Deactivated successfully.
Feb  2 04:40:00 np0005604790 podman[86405]: 2026-02-02 09:40:00.362596901 +0000 UTC m=+1.370129698 container remove 0a8449ea0332fe82d94bd5a86c2f7155842ff57f5a7dff2c88e4aca2be604d66 (image=quay.io/ceph/ceph:v19, name=sweet_mclaren, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb  2 04:40:00 np0005604790 systemd[1]: libpod-conmon-0a8449ea0332fe82d94bd5a86c2f7155842ff57f5a7dff2c88e4aca2be604d66.scope: Deactivated successfully.
Feb  2 04:40:00 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:00 np0005604790 python3[86482]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d241d473-9fcb-5f74-b163-f1ca4454e7f1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:40:00 np0005604790 podman[86483]: 2026-02-02 09:40:00.789427874 +0000 UTC m=+0.063495035 container create 24cabeaf9658566e020c79215b43717887ffb8147e38743ab333b6902965b264 (image=quay.io/ceph/ceph:v19, name=inspiring_lalande, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:40:00 np0005604790 ceph-mon[74489]: log_channel(cluster) log [WRN] : Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Feb  2 04:40:00 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e25 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 04:40:00 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v77: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Feb  2 04:40:00 np0005604790 podman[86483]: 2026-02-02 09:40:00.764710244 +0000 UTC m=+0.038777465 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:40:00 np0005604790 systemd[1]: Started libpod-conmon-24cabeaf9658566e020c79215b43717887ffb8147e38743ab333b6902965b264.scope.
Feb  2 04:40:00 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:40:00 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d01f4c0bb75e20131f098bcd12b6e4e25d4cab408652c1cca12698b90f67c35/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:00 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d01f4c0bb75e20131f098bcd12b6e4e25d4cab408652c1cca12698b90f67c35/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:00 np0005604790 podman[86483]: 2026-02-02 09:40:00.923757476 +0000 UTC m=+0.197824727 container init 24cabeaf9658566e020c79215b43717887ffb8147e38743ab333b6902965b264 (image=quay.io/ceph/ceph:v19, name=inspiring_lalande, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:40:00 np0005604790 podman[86483]: 2026-02-02 09:40:00.932617502 +0000 UTC m=+0.206684653 container start 24cabeaf9658566e020c79215b43717887ffb8147e38743ab333b6902965b264 (image=quay.io/ceph/ceph:v19, name=inspiring_lalande, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 04:40:00 np0005604790 podman[86483]: 2026-02-02 09:40:00.972447324 +0000 UTC m=+0.246514555 container attach 24cabeaf9658566e020c79215b43717887ffb8147e38743ab333b6902965b264 (image=quay.io/ceph/ceph:v19, name=inspiring_lalande, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb  2 04:40:01 np0005604790 ceph-mon[74489]: from='client.? 192.168.122.100:0/4028203447' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Feb  2 04:40:01 np0005604790 ceph-mon[74489]: Health detail: HEALTH_WARN 3 pool(s) do not have an application enabled
Feb  2 04:40:01 np0005604790 ceph-mon[74489]: [WRN] POOL_APP_NOT_ENABLED: 3 pool(s) do not have an application enabled
Feb  2 04:40:01 np0005604790 ceph-mon[74489]:    application not enabled on pool 'images'
Feb  2 04:40:01 np0005604790 ceph-mon[74489]:    application not enabled on pool 'cephfs.cephfs.meta'
Feb  2 04:40:01 np0005604790 ceph-mon[74489]:    application not enabled on pool 'cephfs.cephfs.data'
Feb  2 04:40:01 np0005604790 ceph-mon[74489]:    use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
Feb  2 04:40:01 np0005604790 ceph-mon[74489]: from='client.? 192.168.122.100:0/4028203447' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Feb  2 04:40:01 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:01 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:01 np0005604790 ceph-mon[74489]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Feb  2 04:40:01 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0)
Feb  2 04:40:01 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1055840720' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Feb  2 04:40:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Feb  2 04:40:02 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1055840720' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Feb  2 04:40:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e26 e26: 3 total, 2 up, 3 in
Feb  2 04:40:02 np0005604790 inspiring_lalande[86498]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Feb  2 04:40:02 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e26: 3 total, 2 up, 3 in
Feb  2 04:40:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb  2 04:40:02 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Feb  2 04:40:02 np0005604790 ceph-mgr[74785]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb  2 04:40:02 np0005604790 systemd[1]: libpod-24cabeaf9658566e020c79215b43717887ffb8147e38743ab333b6902965b264.scope: Deactivated successfully.
Feb  2 04:40:02 np0005604790 podman[86483]: 2026-02-02 09:40:02.160808373 +0000 UTC m=+1.434875514 container died 24cabeaf9658566e020c79215b43717887ffb8147e38743ab333b6902965b264 (image=quay.io/ceph/ceph:v19, name=inspiring_lalande, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:40:02 np0005604790 systemd[1]: var-lib-containers-storage-overlay-9d01f4c0bb75e20131f098bcd12b6e4e25d4cab408652c1cca12698b90f67c35-merged.mount: Deactivated successfully.
Feb  2 04:40:02 np0005604790 podman[86483]: 2026-02-02 09:40:02.431955224 +0000 UTC m=+1.706022395 container remove 24cabeaf9658566e020c79215b43717887ffb8147e38743ab333b6902965b264 (image=quay.io/ceph/ceph:v19, name=inspiring_lalande, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:40:02 np0005604790 systemd[1]: libpod-conmon-24cabeaf9658566e020c79215b43717887ffb8147e38743ab333b6902965b264.scope: Deactivated successfully.
Feb  2 04:40:02 np0005604790 ceph-mgr[74785]: [balancer INFO root] Optimize plan auto_2026-02-02_09:40:02
Feb  2 04:40:02 np0005604790 ceph-mgr[74785]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 04:40:02 np0005604790 ceph-mgr[74785]: [balancer INFO root] do_upmap
Feb  2 04:40:02 np0005604790 ceph-mgr[74785]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'images', 'backups', 'volumes', '.mgr', 'vms', 'cephfs.cephfs.data']
Feb  2 04:40:02 np0005604790 ceph-mgr[74785]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 04:40:02 np0005604790 ceph-mon[74489]: from='client.? 192.168.122.100:0/1055840720' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Feb  2 04:40:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0)
Feb  2 04:40:02 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Feb  2 04:40:02 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 04:40:02 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Feb  2 04:40:02 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 04:40:02 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Feb  2 04:40:02 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Feb  2 04:40:02 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Feb  2 04:40:02 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Feb  2 04:40:02 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Feb  2 04:40:02 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Feb  2 04:40:02 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Feb  2 04:40:02 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Feb  2 04:40:02 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Feb  2 04:40:02 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Feb  2 04:40:02 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Feb  2 04:40:02 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Feb  2 04:40:02 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 04:40:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0)
Feb  2 04:40:02 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Feb  2 04:40:02 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 04:40:02 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:40:02 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:40:02 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 04:40:02 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 04:40:02 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:40:02 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:40:02 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:40:02 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:40:02 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 04:40:02 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 04:40:02 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 04:40:02 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 04:40:02 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 04:40:02 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 04:40:02 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v79: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Feb  2 04:40:03 np0005604790 python3[86610]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  2 04:40:03 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Feb  2 04:40:03 np0005604790 ceph-mon[74489]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Feb  2 04:40:03 np0005604790 ceph-mon[74489]: log_channel(cluster) log [INF] : Cluster is now healthy
Feb  2 04:40:03 np0005604790 ceph-mon[74489]: from='client.? 192.168.122.100:0/1055840720' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Feb  2 04:40:03 np0005604790 ceph-mon[74489]: from='osd.2 [v2:192.168.122.102:6800/4043786308,v1:192.168.122.102:6801/4043786308]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Feb  2 04:40:03 np0005604790 ceph-mon[74489]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Feb  2 04:40:03 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Feb  2 04:40:03 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Feb  2 04:40:03 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Feb  2 04:40:03 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e27 e27: 3 total, 2 up, 3 in
Feb  2 04:40:03 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e27: 3 total, 2 up, 3 in
Feb  2 04:40:03 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb  2 04:40:03 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Feb  2 04:40:03 np0005604790 ceph-mgr[74785]: [progress INFO root] update: starting ev 8b5b88b9-bf04-40bf-a7cd-0f8ee73c6f3a (PG autoscaler increasing pool 2 PGs from 1 to 32)
Feb  2 04:40:03 np0005604790 ceph-mgr[74785]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb  2 04:40:03 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0)
Feb  2 04:40:03 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Feb  2 04:40:03 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]} v 0)
Feb  2 04:40:03 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Feb  2 04:40:03 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e27 create-or-move crush item name 'osd.2' initial_weight 0.0195 at location {host=compute-2,root=default}
Feb  2 04:40:03 np0005604790 python3[86681]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1770025203.1436846-37266-120777980252482/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=ad866aa1f51f395809dd7ac5cb7a56d43c167b49 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:40:03 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Feb  2 04:40:03 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:03 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Feb  2 04:40:03 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:04 np0005604790 python3[86858]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  2 04:40:04 np0005604790 ceph-mon[74489]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Feb  2 04:40:04 np0005604790 ceph-mon[74489]: Cluster is now healthy
Feb  2 04:40:04 np0005604790 ceph-mon[74489]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Feb  2 04:40:04 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Feb  2 04:40:04 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Feb  2 04:40:04 np0005604790 ceph-mon[74489]: from='osd.2 [v2:192.168.122.102:6800/4043786308,v1:192.168.122.102:6801/4043786308]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Feb  2 04:40:04 np0005604790 ceph-mon[74489]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Feb  2 04:40:04 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:04 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:04 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Feb  2 04:40:04 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Feb  2 04:40:04 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]': finished
Feb  2 04:40:04 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e28 e28: 3 total, 2 up, 3 in
Feb  2 04:40:04 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 2 up, 3 in
Feb  2 04:40:04 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb  2 04:40:04 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Feb  2 04:40:04 np0005604790 ceph-mgr[74785]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb  2 04:40:04 np0005604790 ceph-mgr[74785]: [progress INFO root] update: starting ev aeeae2a1-6915-4f29-b08b-54257bc2b37e (PG autoscaler increasing pool 3 PGs from 1 to 32)
Feb  2 04:40:04 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0)
Feb  2 04:40:04 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Feb  2 04:40:04 np0005604790 ceph-mgr[74785]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/4043786308; not ready for session (expect reconnect)
Feb  2 04:40:04 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb  2 04:40:04 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Feb  2 04:40:04 np0005604790 ceph-mgr[74785]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb  2 04:40:04 np0005604790 python3[86952]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1770025204.0628257-37280-62426078150828/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=e9970784b20cf6c5031ab2181e06e134b39f865b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:40:04 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v82: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Feb  2 04:40:04 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0)
Feb  2 04:40:04 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Feb  2 04:40:04 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0)
Feb  2 04:40:04 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Feb  2 04:40:05 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Feb  2 04:40:05 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:05 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Feb  2 04:40:05 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:05 np0005604790 python3[87014]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d241d473-9fcb-5f74-b163-f1ca4454e7f1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:40:05 np0005604790 podman[87015]: 2026-02-02 09:40:05.166131784 +0000 UTC m=+0.066129084 container create 324cfda33b6ce8444b1effe9b345cd0d285b25bedfd9c5ff52d79a1b5f58c3dc (image=quay.io/ceph/ceph:v19, name=optimistic_raman, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb  2 04:40:05 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Feb  2 04:40:05 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:05 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Feb  2 04:40:05 np0005604790 systemd[1]: Started libpod-conmon-324cfda33b6ce8444b1effe9b345cd0d285b25bedfd9c5ff52d79a1b5f58c3dc.scope.
Feb  2 04:40:05 np0005604790 podman[87015]: 2026-02-02 09:40:05.127953446 +0000 UTC m=+0.027950766 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:40:05 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:40:05 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2df330560c3c6a595847e7d7d75cbac08f453e0e795d0e738b79264a19aa8bf3/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:05 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2df330560c3c6a595847e7d7d75cbac08f453e0e795d0e738b79264a19aa8bf3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:05 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2df330560c3c6a595847e7d7d75cbac08f453e0e795d0e738b79264a19aa8bf3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:05 np0005604790 podman[87015]: 2026-02-02 09:40:05.29011506 +0000 UTC m=+0.190112380 container init 324cfda33b6ce8444b1effe9b345cd0d285b25bedfd9c5ff52d79a1b5f58c3dc (image=quay.io/ceph/ceph:v19, name=optimistic_raman, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 04:40:05 np0005604790 podman[87015]: 2026-02-02 09:40:05.299753737 +0000 UTC m=+0.199751047 container start 324cfda33b6ce8444b1effe9b345cd0d285b25bedfd9c5ff52d79a1b5f58c3dc (image=quay.io/ceph/ceph:v19, name=optimistic_raman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Feb  2 04:40:05 np0005604790 podman[87015]: 2026-02-02 09:40:05.304912425 +0000 UTC m=+0.204909745 container attach 324cfda33b6ce8444b1effe9b345cd0d285b25bedfd9c5ff52d79a1b5f58c3dc (image=quay.io/ceph/ceph:v19, name=optimistic_raman, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb  2 04:40:05 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:05 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Feb  2 04:40:05 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Feb  2 04:40:05 np0005604790 ceph-mon[74489]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]': finished
Feb  2 04:40:05 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Feb  2 04:40:05 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Feb  2 04:40:05 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Feb  2 04:40:05 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:05 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:05 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:05 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:05 np0005604790 ceph-mgr[74785]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/4043786308; not ready for session (expect reconnect)
Feb  2 04:40:05 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb  2 04:40:05 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Feb  2 04:40:05 np0005604790 ceph-mgr[74785]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb  2 04:40:05 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Feb  2 04:40:05 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Feb  2 04:40:05 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Feb  2 04:40:05 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e29 e29: 3 total, 2 up, 3 in
Feb  2 04:40:05 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 2 up, 3 in
Feb  2 04:40:05 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb  2 04:40:05 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Feb  2 04:40:05 np0005604790 ceph-mgr[74785]: [progress INFO root] update: starting ev fb0ca87d-84da-4724-860f-c66d4a745748 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Feb  2 04:40:05 np0005604790 ceph-mgr[74785]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb  2 04:40:05 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0)
Feb  2 04:40:05 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Feb  2 04:40:05 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Feb  2 04:40:05 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2425208278' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Feb  2 04:40:05 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2425208278' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Feb  2 04:40:05 np0005604790 optimistic_raman[87030]: 
Feb  2 04:40:05 np0005604790 optimistic_raman[87030]: [global]
Feb  2 04:40:05 np0005604790 optimistic_raman[87030]: 	fsid = d241d473-9fcb-5f74-b163-f1ca4454e7f1
Feb  2 04:40:05 np0005604790 optimistic_raman[87030]: 	mon_host = 192.168.122.100
Feb  2 04:40:05 np0005604790 systemd[1]: libpod-324cfda33b6ce8444b1effe9b345cd0d285b25bedfd9c5ff52d79a1b5f58c3dc.scope: Deactivated successfully.
Feb  2 04:40:05 np0005604790 conmon[87030]: conmon 324cfda33b6ce8444b1e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-324cfda33b6ce8444b1effe9b345cd0d285b25bedfd9c5ff52d79a1b5f58c3dc.scope/container/memory.events
Feb  2 04:40:05 np0005604790 podman[87015]: 2026-02-02 09:40:05.688014111 +0000 UTC m=+0.588011391 container died 324cfda33b6ce8444b1effe9b345cd0d285b25bedfd9c5ff52d79a1b5f58c3dc (image=quay.io/ceph/ceph:v19, name=optimistic_raman, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:40:05 np0005604790 systemd[1]: var-lib-containers-storage-overlay-2df330560c3c6a595847e7d7d75cbac08f453e0e795d0e738b79264a19aa8bf3-merged.mount: Deactivated successfully.
Feb  2 04:40:05 np0005604790 podman[87015]: 2026-02-02 09:40:05.74833686 +0000 UTC m=+0.648334170 container remove 324cfda33b6ce8444b1effe9b345cd0d285b25bedfd9c5ff52d79a1b5f58c3dc (image=quay.io/ceph/ceph:v19, name=optimistic_raman, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Feb  2 04:40:05 np0005604790 systemd[1]: libpod-conmon-324cfda33b6ce8444b1effe9b345cd0d285b25bedfd9c5ff52d79a1b5f58c3dc.scope: Deactivated successfully.
Feb  2 04:40:05 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e29 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 04:40:06 np0005604790 python3[87094]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d241d473-9fcb-5f74-b163-f1ca4454e7f1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:40:06 np0005604790 podman[87095]: 2026-02-02 09:40:06.159551195 +0000 UTC m=+0.064040588 container create 4ce05983cd30a1fc4418e0d08f7334632a5e4802673c70ecfafc3d305ecdb326 (image=quay.io/ceph/ceph:v19, name=youthful_goldberg, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:40:06 np0005604790 systemd[1]: Started libpod-conmon-4ce05983cd30a1fc4418e0d08f7334632a5e4802673c70ecfafc3d305ecdb326.scope.
Feb  2 04:40:06 np0005604790 podman[87095]: 2026-02-02 09:40:06.124284265 +0000 UTC m=+0.028773708 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:40:06 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:40:06 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac9d4f95e2a1f6e36586e6d573fa5f1b9dfd849899d7f1d7de041dc694d2f164/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:06 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac9d4f95e2a1f6e36586e6d573fa5f1b9dfd849899d7f1d7de041dc694d2f164/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:06 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac9d4f95e2a1f6e36586e6d573fa5f1b9dfd849899d7f1d7de041dc694d2f164/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:06 np0005604790 podman[87095]: 2026-02-02 09:40:06.239281882 +0000 UTC m=+0.143771265 container init 4ce05983cd30a1fc4418e0d08f7334632a5e4802673c70ecfafc3d305ecdb326 (image=quay.io/ceph/ceph:v19, name=youthful_goldberg, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:40:06 np0005604790 podman[87095]: 2026-02-02 09:40:06.2478471 +0000 UTC m=+0.152336463 container start 4ce05983cd30a1fc4418e0d08f7334632a5e4802673c70ecfafc3d305ecdb326 (image=quay.io/ceph/ceph:v19, name=youthful_goldberg, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 04:40:06 np0005604790 podman[87095]: 2026-02-02 09:40:06.251697243 +0000 UTC m=+0.156186606 container attach 4ce05983cd30a1fc4418e0d08f7334632a5e4802673c70ecfafc3d305ecdb326 (image=quay.io/ceph/ceph:v19, name=youthful_goldberg, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Feb  2 04:40:06 np0005604790 ceph-mgr[74785]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/4043786308; not ready for session (expect reconnect)
Feb  2 04:40:06 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb  2 04:40:06 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Feb  2 04:40:06 np0005604790 ceph-mgr[74785]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb  2 04:40:06 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Feb  2 04:40:06 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Feb  2 04:40:06 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Feb  2 04:40:06 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Feb  2 04:40:06 np0005604790 ceph-mon[74489]: from='client.? 192.168.122.100:0/2425208278' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Feb  2 04:40:06 np0005604790 ceph-mon[74489]: from='client.? 192.168.122.100:0/2425208278' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Feb  2 04:40:06 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Feb  2 04:40:06 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Feb  2 04:40:06 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e30 e30: 3 total, 2 up, 3 in
Feb  2 04:40:06 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 2 up, 3 in
Feb  2 04:40:06 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb  2 04:40:06 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Feb  2 04:40:06 np0005604790 ceph-mgr[74785]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb  2 04:40:06 np0005604790 ceph-mgr[74785]: [progress INFO root] update: starting ev 14899bc0-ab0c-4b48-9547-784e8ba0ac76 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Feb  2 04:40:06 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"} v 0)
Feb  2 04:40:06 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]: dispatch
Feb  2 04:40:06 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 29 pg[2.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=29 pruub=13.820825577s) [1] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active pruub 69.789848328s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:40:06 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 30 pg[2.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=29 pruub=13.820825577s) [1] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 unknown pruub 69.789848328s@ mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:06 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 30 pg[2.1( empty local-lis/les=14/15 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [1] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:06 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 30 pg[2.2( empty local-lis/les=14/15 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [1] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:06 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 30 pg[2.4( empty local-lis/les=14/15 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [1] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:06 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 30 pg[2.5( empty local-lis/les=14/15 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [1] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:06 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 30 pg[2.6( empty local-lis/les=14/15 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [1] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:06 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 30 pg[2.7( empty local-lis/les=14/15 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [1] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:06 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 30 pg[2.8( empty local-lis/les=14/15 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [1] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:06 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 30 pg[2.b( empty local-lis/les=14/15 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [1] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:06 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 30 pg[2.c( empty local-lis/les=14/15 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [1] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:06 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 30 pg[2.9( empty local-lis/les=14/15 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [1] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:06 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 30 pg[2.a( empty local-lis/les=14/15 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [1] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:06 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 30 pg[2.d( empty local-lis/les=14/15 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [1] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:06 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 30 pg[2.e( empty local-lis/les=14/15 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [1] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:06 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 30 pg[2.11( empty local-lis/les=14/15 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [1] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:06 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 30 pg[2.12( empty local-lis/les=14/15 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [1] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:06 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 30 pg[2.f( empty local-lis/les=14/15 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [1] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:06 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 30 pg[2.10( empty local-lis/les=14/15 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [1] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:06 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 30 pg[2.15( empty local-lis/les=14/15 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [1] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:06 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 30 pg[2.16( empty local-lis/les=14/15 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [1] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:06 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 30 pg[2.13( empty local-lis/les=14/15 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [1] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:06 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 30 pg[2.14( empty local-lis/les=14/15 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [1] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:06 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 30 pg[2.17( empty local-lis/les=14/15 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [1] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:06 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 30 pg[2.18( empty local-lis/les=14/15 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [1] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:06 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 30 pg[2.19( empty local-lis/les=14/15 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [1] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:06 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 30 pg[2.1a( empty local-lis/les=14/15 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [1] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:06 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 30 pg[2.1b( empty local-lis/les=14/15 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [1] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:06 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 30 pg[2.1c( empty local-lis/les=14/15 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [1] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:06 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 30 pg[2.1d( empty local-lis/les=14/15 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [1] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:06 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 30 pg[2.1e( empty local-lis/les=14/15 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [1] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:06 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 30 pg[2.1f( empty local-lis/les=14/15 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [1] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:06 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 30 pg[2.3( empty local-lis/les=14/15 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [1] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:06 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0)
Feb  2 04:40:06 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1005529416' entity='client.admin' 
Feb  2 04:40:06 np0005604790 youthful_goldberg[87110]: set ssl_option
Feb  2 04:40:06 np0005604790 systemd[1]: libpod-4ce05983cd30a1fc4418e0d08f7334632a5e4802673c70ecfafc3d305ecdb326.scope: Deactivated successfully.
Feb  2 04:40:06 np0005604790 podman[87095]: 2026-02-02 09:40:06.795551736 +0000 UTC m=+0.700041139 container died 4ce05983cd30a1fc4418e0d08f7334632a5e4802673c70ecfafc3d305ecdb326 (image=quay.io/ceph/ceph:v19, name=youthful_goldberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Feb  2 04:40:06 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v85: 69 pgs: 62 unknown, 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Feb  2 04:40:06 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0)
Feb  2 04:40:06 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Feb  2 04:40:06 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0)
Feb  2 04:40:06 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Feb  2 04:40:06 np0005604790 systemd[1]: var-lib-containers-storage-overlay-ac9d4f95e2a1f6e36586e6d573fa5f1b9dfd849899d7f1d7de041dc694d2f164-merged.mount: Deactivated successfully.
Feb  2 04:40:06 np0005604790 podman[87095]: 2026-02-02 09:40:06.868398648 +0000 UTC m=+0.772888011 container remove 4ce05983cd30a1fc4418e0d08f7334632a5e4802673c70ecfafc3d305ecdb326 (image=quay.io/ceph/ceph:v19, name=youthful_goldberg, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 04:40:06 np0005604790 systemd[1]: libpod-conmon-4ce05983cd30a1fc4418e0d08f7334632a5e4802673c70ecfafc3d305ecdb326.scope: Deactivated successfully.
Feb  2 04:40:06 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Feb  2 04:40:06 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:06 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Feb  2 04:40:06 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:06 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0)
Feb  2 04:40:06 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Feb  2 04:40:06 np0005604790 ceph-mgr[74785]: [cephadm INFO root] Adjusting osd_memory_target on compute-2 to 127.9M
Feb  2 04:40:06 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-2 to 127.9M
Feb  2 04:40:06 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Feb  2 04:40:06 np0005604790 ceph-mgr[74785]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-2 to 134203392: error parsing value: Value '134203392' is below minimum 939524096
Feb  2 04:40:06 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-2 to 134203392: error parsing value: Value '134203392' is below minimum 939524096
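The two cephadm lines above capture the memory autotuner failing on this small VM: it computed a per-OSD target of 134203392 bytes (logged, truncated, as "127.9M"), which is below the hard floor of 939524096 bytes that osd_memory_target enforces. A minimal Python sketch of the arithmetic, using only the two values from the log:

    # Minimal sketch, assuming only the two values logged above.
    tried   = 134_203_392    # what cephadm tried to set ("127.9M")
    minimum = 939_524_096    # osd_memory_target's hard minimum
    print(f"{tried / 2**20:.2f} MiB")    # 127.99 MiB
    print(f"{minimum / 2**20:.0f} MiB")  # 896 MiB
    print(tried < minimum)               # True -> the set is refused

Since the "config rm" just above already removed any per-OSD override, the refusal is harmless here; on a host this small the usual options are to turn autotuning off (ceph config set osd osd_memory_target_autotune false) or simply to leave the default target in place.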
Feb  2 04:40:06 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 04:40:06 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 04:40:06 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 04:40:06 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb  2 04:40:06 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Feb  2 04:40:06 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Feb  2 04:40:06 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Feb  2 04:40:06 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Feb  2 04:40:06 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Feb  2 04:40:06 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Feb  2 04:40:07 np0005604790 python3[87199]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d241d473-9fcb-5f74-b163-f1ca4454e7f1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
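The ansible _raw_params above is one long podman invocation. Restated as a Python subprocess call purely for readability (same command and paths as logged, nothing added):

    import subprocess

    # The exact command from the ansible log line, split into argv form.
    cmd = [
        "podman", "run", "--rm", "--net=host", "--ipc=host",
        "--volume", "/etc/ceph:/etc/ceph:z",
        "--volume", "/home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z",
        "--volume", "/tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z",
        "--entrypoint", "ceph", "quay.io/ceph/ceph:v19",
        "--fsid", "d241d473-9fcb-5f74-b163-f1ca4454e7f1",
        "-c", "/etc/ceph/ceph.conf",
        "-k", "/etc/ceph/ceph.client.admin.keyring",
        "orch", "apply", "--in-file", "/home/ceph_spec.yaml",
    ]
    subprocess.run(cmd, check=True)

The container this spawns is optimistic_meninsky below, whose output ("Scheduled rgw.rgw update...") shows the spec being accepted.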
Feb  2 04:40:07 np0005604790 podman[87274]: 2026-02-02 09:40:07.251241458 +0000 UTC m=+0.051715381 container create 810033eed69d8f09ada92a51562ca9f4f7b805d10ff4780e7105ae64898719f6 (image=quay.io/ceph/ceph:v19, name=optimistic_meninsky, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb  2 04:40:07 np0005604790 systemd[1]: Started libpod-conmon-810033eed69d8f09ada92a51562ca9f4f7b805d10ff4780e7105ae64898719f6.scope.
Feb  2 04:40:07 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:40:07 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbbce99b632fcce84bd8c11c88798abc88178265662b83dcf9ef41009afa935b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:07 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbbce99b632fcce84bd8c11c88798abc88178265662b83dcf9ef41009afa935b/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:07 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbbce99b632fcce84bd8c11c88798abc88178265662b83dcf9ef41009afa935b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:07 np0005604790 podman[87274]: 2026-02-02 09:40:07.23032198 +0000 UTC m=+0.030795923 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:40:07 np0005604790 podman[87274]: 2026-02-02 09:40:07.326858904 +0000 UTC m=+0.127332877 container init 810033eed69d8f09ada92a51562ca9f4f7b805d10ff4780e7105ae64898719f6 (image=quay.io/ceph/ceph:v19, name=optimistic_meninsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 04:40:07 np0005604790 podman[87274]: 2026-02-02 09:40:07.332036042 +0000 UTC m=+0.132510005 container start 810033eed69d8f09ada92a51562ca9f4f7b805d10ff4780e7105ae64898719f6 (image=quay.io/ceph/ceph:v19, name=optimistic_meninsky, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:40:07 np0005604790 podman[87274]: 2026-02-02 09:40:07.336936763 +0000 UTC m=+0.137410686 container attach 810033eed69d8f09ada92a51562ca9f4f7b805d10ff4780e7105ae64898719f6 (image=quay.io/ceph/ceph:v19, name=optimistic_meninsky, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Feb  2 04:40:07 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/d241d473-9fcb-5f74-b163-f1ca4454e7f1/config/ceph.conf
Feb  2 04:40:07 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/d241d473-9fcb-5f74-b163-f1ca4454e7f1/config/ceph.conf
Feb  2 04:40:07 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/d241d473-9fcb-5f74-b163-f1ca4454e7f1/config/ceph.conf
Feb  2 04:40:07 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/d241d473-9fcb-5f74-b163-f1ca4454e7f1/config/ceph.conf
Feb  2 04:40:07 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/d241d473-9fcb-5f74-b163-f1ca4454e7f1/config/ceph.conf
Feb  2 04:40:07 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/d241d473-9fcb-5f74-b163-f1ca4454e7f1/config/ceph.conf
Feb  2 04:40:07 np0005604790 ceph-mgr[74785]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/4043786308; not ready for session (expect reconnect)
Feb  2 04:40:07 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb  2 04:40:07 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Feb  2 04:40:07 np0005604790 ceph-mgr[74785]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb  2 04:40:07 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Feb  2 04:40:07 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]: dispatch
Feb  2 04:40:07 np0005604790 ceph-mon[74489]: from='client.? 192.168.122.100:0/1005529416' entity='client.admin' 
Feb  2 04:40:07 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Feb  2 04:40:07 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Feb  2 04:40:07 np0005604790 ceph-mon[74489]: OSD bench result of 6231.058141 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
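The monitor's advice above is actionable as written: the 6231 IOPS bench result was rejected because it falls outside the 50-500 IOPS sanity window, so the mclock capacity stays at 315 IOPS (the hdd default, which is why the hdd variant is used in the sketch below). A hedged sketch of the suggested override, where measured_iops stands in for a figure actually obtained with an external benchmark such as fio (6231 is only the bench value from the log, reused as a placeholder):

    import subprocess

    # Sketch: pin the mclock IOPS capacity for osd.2 to an externally
    # measured figure, as the log message recommends. The _hdd variant
    # matches the 315 IOPS default retained in the log; ssd-classed
    # devices would use osd_mclock_max_capacity_iops_ssd instead.
    measured_iops = 6231  # placeholder; substitute a fio-measured value
    subprocess.run(
        ["ceph", "config", "set", "osd.2",
         "osd_mclock_max_capacity_iops_hdd", str(measured_iops)],
        check=True,
    )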
Feb  2 04:40:07 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:07 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:07 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Feb  2 04:40:07 np0005604790 ceph-mon[74489]: Adjusting osd_memory_target on compute-2 to 127.9M
Feb  2 04:40:07 np0005604790 ceph-mon[74489]: Unable to set osd_memory_target on compute-2 to 134203392: error parsing value: Value '134203392' is below minimum 939524096
Feb  2 04:40:07 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb  2 04:40:07 np0005604790 ceph-mon[74489]: Updating compute-0:/etc/ceph/ceph.conf
Feb  2 04:40:07 np0005604790 ceph-mon[74489]: Updating compute-1:/etc/ceph/ceph.conf
Feb  2 04:40:07 np0005604790 ceph-mon[74489]: Updating compute-2:/etc/ceph/ceph.conf
Feb  2 04:40:07 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Feb  2 04:40:07 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Feb  2 04:40:07 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Feb  2 04:40:07 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Feb  2 04:40:07 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e31 e31: 3 total, 3 up, 3 in
Feb  2 04:40:07 np0005604790 ceph-mon[74489]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.102:6800/4043786308,v1:192.168.122.102:6801/4043786308] boot
Feb  2 04:40:07 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 3 up, 3 in
Feb  2 04:40:07 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb  2 04:40:07 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Feb  2 04:40:07 np0005604790 ceph-mgr[74785]: [progress INFO root] update: starting ev 41751a78-3317-4cb9-b28b-883a9f5bd6d8 (PG autoscaler increasing pool 6 PGs from 1 to 32)
Feb  2 04:40:07 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0)
Feb  2 04:40:07 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Feb  2 04:40:07 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 31 pg[2.1b( empty local-lis/les=29/31 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [1] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:07 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 31 pg[2.1f( empty local-lis/les=29/31 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [1] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:07 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 31 pg[2.1c( empty local-lis/les=29/31 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [1] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:07 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 31 pg[2.9( empty local-lis/les=29/31 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [1] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:07 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 31 pg[2.a( empty local-lis/les=29/31 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [1] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:07 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 31 pg[2.8( empty local-lis/les=29/31 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [1] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:07 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 31 pg[2.7( empty local-lis/les=29/31 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [1] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:07 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 31 pg[2.6( empty local-lis/les=29/31 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [1] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:07 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 31 pg[2.4( empty local-lis/les=29/31 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [1] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:07 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 31 pg[2.1e( empty local-lis/les=29/31 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [1] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:07 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 31 pg[2.1d( empty local-lis/les=29/31 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [1] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:07 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 31 pg[2.5( empty local-lis/les=29/31 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [1] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:07 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 31 pg[2.1( empty local-lis/les=29/31 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [1] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:07 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 31 pg[2.0( empty local-lis/les=29/31 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [1] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:07 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 31 pg[2.b( empty local-lis/les=29/31 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [1] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:07 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 31 pg[2.d( empty local-lis/les=29/31 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [1] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:07 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 31 pg[2.e( empty local-lis/les=29/31 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [1] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:07 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 31 pg[2.f( empty local-lis/les=29/31 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [1] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:07 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 31 pg[2.c( empty local-lis/les=29/31 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [1] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:07 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 31 pg[2.3( empty local-lis/les=29/31 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [1] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:07 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 31 pg[2.12( empty local-lis/les=29/31 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [1] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:07 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 31 pg[2.10( empty local-lis/les=29/31 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [1] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:07 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 31 pg[2.11( empty local-lis/les=29/31 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [1] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:07 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 31 pg[2.13( empty local-lis/les=29/31 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [1] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:07 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 31 pg[2.14( empty local-lis/les=29/31 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [1] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:07 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 31 pg[2.15( empty local-lis/les=29/31 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [1] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:07 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 31 pg[2.16( empty local-lis/les=29/31 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [1] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:07 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 31 pg[2.1a( empty local-lis/les=29/31 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [1] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:07 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 31 pg[2.19( empty local-lis/les=29/31 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [1] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:07 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 31 pg[2.18( empty local-lis/les=29/31 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [1] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:07 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 31 pg[2.2( empty local-lis/les=29/31 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [1] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:07 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 31 pg[2.17( empty local-lis/les=29/31 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [1] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:07 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.14304 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 04:40:07 np0005604790 ceph-mgr[74785]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Feb  2 04:40:07 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Feb  2 04:40:07 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Feb  2 04:40:07 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:07 np0005604790 ceph-mgr[74785]: [cephadm INFO root] Saving service ingress.rgw.default spec with placement count:2
Feb  2 04:40:07 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Saving service ingress.rgw.default spec with placement count:2
Feb  2 04:40:07 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Feb  2 04:40:07 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:07 np0005604790 optimistic_meninsky[87338]: Scheduled rgw.rgw update...
Feb  2 04:40:07 np0005604790 optimistic_meninsky[87338]: Scheduled ingress.rgw.default update...
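For reference, a spec of roughly this shape produces the two "Saving service ... spec" entries above; the placements (compute-0;compute-1;compute-2 and count:2) are taken from the log, while every other field is an assumption about what /tmp/ceph_rgw.yml contained:

    service_type: rgw
    service_id: rgw
    placement:
      hosts:
        - compute-0
        - compute-1
        - compute-2
    ---
    service_type: ingress
    service_id: rgw.default
    placement:
      count: 2
    spec:
      backend_service: rgw.rgw

A real ingress spec would also carry fields such as virtual_ip and frontend_port, which this log does not reveal.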
Feb  2 04:40:07 np0005604790 systemd[1]: libpod-810033eed69d8f09ada92a51562ca9f4f7b805d10ff4780e7105ae64898719f6.scope: Deactivated successfully.
Feb  2 04:40:07 np0005604790 podman[87274]: 2026-02-02 09:40:07.787313903 +0000 UTC m=+0.587787826 container died 810033eed69d8f09ada92a51562ca9f4f7b805d10ff4780e7105ae64898719f6 (image=quay.io/ceph/ceph:v19, name=optimistic_meninsky, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2)
Feb  2 04:40:07 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Feb  2 04:40:07 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Feb  2 04:40:07 np0005604790 systemd[1]: var-lib-containers-storage-overlay-bbbce99b632fcce84bd8c11c88798abc88178265662b83dcf9ef41009afa935b-merged.mount: Deactivated successfully.
Feb  2 04:40:07 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Feb  2 04:40:07 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:07 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Feb  2 04:40:07 np0005604790 podman[87274]: 2026-02-02 09:40:07.822008148 +0000 UTC m=+0.622482091 container remove 810033eed69d8f09ada92a51562ca9f4f7b805d10ff4780e7105ae64898719f6 (image=quay.io/ceph/ceph:v19, name=optimistic_meninsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Feb  2 04:40:07 np0005604790 systemd[75816]: Starting Mark boot as successful...
Feb  2 04:40:07 np0005604790 systemd[75816]: Finished Mark boot as successful.
Feb  2 04:40:07 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:07 np0005604790 systemd[1]: libpod-conmon-810033eed69d8f09ada92a51562ca9f4f7b805d10ff4780e7105ae64898719f6.scope: Deactivated successfully.
Feb  2 04:40:07 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Feb  2 04:40:07 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:07 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 04:40:07 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Feb  2 04:40:07 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:07 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 04:40:07 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:07 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:07 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 04:40:07 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:07 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 04:40:07 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb  2 04:40:07 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 04:40:07 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb  2 04:40:08 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 04:40:08 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 04:40:08 np0005604790 python3[87767]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_dashboard.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  2 04:40:08 np0005604790 python3[87894]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1770025207.9377892-37299-253135386482739/source dest=/tmp/ceph_dashboard.yml mode=0644 force=True follow=False _original_basename=ceph_monitoring_stack.yml.j2 checksum=2701faaa92cae31b5bbad92984c27e2af7a44b84 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:40:08 np0005604790 podman[87912]: 2026-02-02 09:40:08.435658961 +0000 UTC m=+0.046939602 container create 0c417fdc67303c62bf34e6711177fbaafcb7c7cd4d87d822d204fbfe0e467cf7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_mcclintock, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 04:40:08 np0005604790 systemd[1]: Started libpod-conmon-0c417fdc67303c62bf34e6711177fbaafcb7c7cd4d87d822d204fbfe0e467cf7.scope.
Feb  2 04:40:08 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:40:08 np0005604790 podman[87912]: 2026-02-02 09:40:08.410780248 +0000 UTC m=+0.022060869 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:40:08 np0005604790 podman[87912]: 2026-02-02 09:40:08.516018244 +0000 UTC m=+0.127298865 container init 0c417fdc67303c62bf34e6711177fbaafcb7c7cd4d87d822d204fbfe0e467cf7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_mcclintock, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 04:40:08 np0005604790 podman[87912]: 2026-02-02 09:40:08.523772081 +0000 UTC m=+0.135052682 container start 0c417fdc67303c62bf34e6711177fbaafcb7c7cd4d87d822d204fbfe0e467cf7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_mcclintock, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Feb  2 04:40:08 np0005604790 gallant_mcclintock[87935]: 167 167
Feb  2 04:40:08 np0005604790 systemd[1]: libpod-0c417fdc67303c62bf34e6711177fbaafcb7c7cd4d87d822d204fbfe0e467cf7.scope: Deactivated successfully.
Feb  2 04:40:08 np0005604790 podman[87912]: 2026-02-02 09:40:08.543217349 +0000 UTC m=+0.154497950 container attach 0c417fdc67303c62bf34e6711177fbaafcb7c7cd4d87d822d204fbfe0e467cf7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_mcclintock, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 04:40:08 np0005604790 podman[87912]: 2026-02-02 09:40:08.543577019 +0000 UTC m=+0.154857620 container died 0c417fdc67303c62bf34e6711177fbaafcb7c7cd4d87d822d204fbfe0e467cf7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_mcclintock, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Feb  2 04:40:08 np0005604790 systemd[1]: var-lib-containers-storage-overlay-da40faf1a31d24b166450964764501956615f9990d39d2530e9d85a7c95b99c0-merged.mount: Deactivated successfully.
Feb  2 04:40:08 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Feb  2 04:40:08 np0005604790 podman[87912]: 2026-02-02 09:40:08.656927811 +0000 UTC m=+0.268208452 container remove 0c417fdc67303c62bf34e6711177fbaafcb7c7cd4d87d822d204fbfe0e467cf7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_mcclintock, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Feb  2 04:40:08 np0005604790 systemd[1]: libpod-conmon-0c417fdc67303c62bf34e6711177fbaafcb7c7cd4d87d822d204fbfe0e467cf7.scope: Deactivated successfully.
Feb  2 04:40:08 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Feb  2 04:40:08 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e32 e32: 3 total, 3 up, 3 in
Feb  2 04:40:08 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 3 up, 3 in
Feb  2 04:40:08 np0005604790 ceph-mgr[74785]: [progress INFO root] update: starting ev 3e10d10d-372a-477f-a864-91a8046efa90 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Feb  2 04:40:08 np0005604790 ceph-mgr[74785]: [progress INFO root] complete: finished ev 8b5b88b9-bf04-40bf-a7cd-0f8ee73c6f3a (PG autoscaler increasing pool 2 PGs from 1 to 32)
Feb  2 04:40:08 np0005604790 ceph-mgr[74785]: [progress INFO root] Completed event 8b5b88b9-bf04-40bf-a7cd-0f8ee73c6f3a (PG autoscaler increasing pool 2 PGs from 1 to 32) in 5 seconds
Feb  2 04:40:08 np0005604790 ceph-mgr[74785]: [progress INFO root] complete: finished ev aeeae2a1-6915-4f29-b08b-54257bc2b37e (PG autoscaler increasing pool 3 PGs from 1 to 32)
Feb  2 04:40:08 np0005604790 ceph-mgr[74785]: [progress INFO root] Completed event aeeae2a1-6915-4f29-b08b-54257bc2b37e (PG autoscaler increasing pool 3 PGs from 1 to 32) in 4 seconds
Feb  2 04:40:08 np0005604790 ceph-mgr[74785]: [progress INFO root] complete: finished ev fb0ca87d-84da-4724-860f-c66d4a745748 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Feb  2 04:40:08 np0005604790 ceph-mgr[74785]: [progress INFO root] Completed event fb0ca87d-84da-4724-860f-c66d4a745748 (PG autoscaler increasing pool 4 PGs from 1 to 32) in 3 seconds
Feb  2 04:40:08 np0005604790 ceph-mgr[74785]: [progress INFO root] complete: finished ev 14899bc0-ab0c-4b48-9547-784e8ba0ac76 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Feb  2 04:40:08 np0005604790 ceph-mgr[74785]: [progress INFO root] Completed event 14899bc0-ab0c-4b48-9547-784e8ba0ac76 (PG autoscaler increasing pool 5 PGs from 1 to 32) in 2 seconds
Feb  2 04:40:08 np0005604790 ceph-mgr[74785]: [progress INFO root] complete: finished ev 41751a78-3317-4cb9-b28b-883a9f5bd6d8 (PG autoscaler increasing pool 6 PGs from 1 to 32)
Feb  2 04:40:08 np0005604790 ceph-mgr[74785]: [progress INFO root] Completed event 41751a78-3317-4cb9-b28b-883a9f5bd6d8 (PG autoscaler increasing pool 6 PGs from 1 to 32) in 1 seconds
Feb  2 04:40:08 np0005604790 ceph-mgr[74785]: [progress INFO root] complete: finished ev 3e10d10d-372a-477f-a864-91a8046efa90 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Feb  2 04:40:08 np0005604790 ceph-mgr[74785]: [progress INFO root] Completed event 3e10d10d-372a-477f-a864-91a8046efa90 (PG autoscaler increasing pool 7 PGs from 1 to 32) in 0 seconds
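Every autoscaler event above raises a pool from 1 PG to 32, and pg_num targets are always settled on a power of two. A minimal sketch of that rounding step, under the simplifying assumption that a raw target has already been computed (the real mgr module additionally weighs pool capacity ratios and bias):

    # Minimal sketch, not the mgr's implementation: snap a raw PG
    # target to the nearest power of two, the way autoscaler output
    # values like 32 arise.
    def nearest_power_of_two(n: int) -> int:
        p = 1
        while p * 2 <= n:
            p *= 2
        return p * 2 if (n - p) > (2 * p - n) else p

    for raw in (20, 24, 32, 40):
        print(raw, "->", nearest_power_of_two(raw))  # 16, 16, 32, 32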
Feb  2 04:40:08 np0005604790 ceph-mon[74489]: Updating compute-2:/var/lib/ceph/d241d473-9fcb-5f74-b163-f1ca4454e7f1/config/ceph.conf
Feb  2 04:40:08 np0005604790 ceph-mon[74489]: Updating compute-1:/var/lib/ceph/d241d473-9fcb-5f74-b163-f1ca4454e7f1/config/ceph.conf
Feb  2 04:40:08 np0005604790 ceph-mon[74489]: Updating compute-0:/var/lib/ceph/d241d473-9fcb-5f74-b163-f1ca4454e7f1/config/ceph.conf
Feb  2 04:40:08 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Feb  2 04:40:08 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Feb  2 04:40:08 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Feb  2 04:40:08 np0005604790 ceph-mon[74489]: osd.2 [v2:192.168.122.102:6800/4043786308,v1:192.168.122.102:6801/4043786308] boot
Feb  2 04:40:08 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Feb  2 04:40:08 np0005604790 ceph-mon[74489]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Feb  2 04:40:08 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:08 np0005604790 ceph-mon[74489]: Saving service ingress.rgw.default spec with placement count:2
Feb  2 04:40:08 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:08 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:08 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:08 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:08 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:08 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:08 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:08 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:08 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb  2 04:40:08 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Feb  2 04:40:08 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Feb  2 04:40:08 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v88: 131 pgs: 33 peering, 32 activating, 62 unknown, 4 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb  2 04:40:08 np0005604790 podman[87976]: 2026-02-02 09:40:08.81477226 +0000 UTC m=+0.056663802 container create ccb613de4a5e600f87551272985fb94854f76584f8e071ad2db618bf74e0eb1e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True)
Feb  2 04:40:08 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0)
Feb  2 04:40:08 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Feb  2 04:40:08 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"} v 0)
Feb  2 04:40:08 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Feb  2 04:40:08 np0005604790 systemd[1]: Started libpod-conmon-ccb613de4a5e600f87551272985fb94854f76584f8e071ad2db618bf74e0eb1e.scope.
Feb  2 04:40:08 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:40:08 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac043f877e514f09c40d07e6c3b8b7e58fcbb97f64a8964908e1a43844cc54aa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:08 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac043f877e514f09c40d07e6c3b8b7e58fcbb97f64a8964908e1a43844cc54aa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:08 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac043f877e514f09c40d07e6c3b8b7e58fcbb97f64a8964908e1a43844cc54aa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:08 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac043f877e514f09c40d07e6c3b8b7e58fcbb97f64a8964908e1a43844cc54aa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:08 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac043f877e514f09c40d07e6c3b8b7e58fcbb97f64a8964908e1a43844cc54aa/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:08 np0005604790 podman[87976]: 2026-02-02 09:40:08.796270277 +0000 UTC m=+0.038161909 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:40:08 np0005604790 podman[87976]: 2026-02-02 09:40:08.896087599 +0000 UTC m=+0.137979141 container init ccb613de4a5e600f87551272985fb94854f76584f8e071ad2db618bf74e0eb1e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_hodgkin, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Feb  2 04:40:08 np0005604790 podman[87976]: 2026-02-02 09:40:08.901787941 +0000 UTC m=+0.143679483 container start ccb613de4a5e600f87551272985fb94854f76584f8e071ad2db618bf74e0eb1e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_hodgkin, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Feb  2 04:40:08 np0005604790 podman[87976]: 2026-02-02 09:40:08.918713582 +0000 UTC m=+0.160605124 container attach ccb613de4a5e600f87551272985fb94854f76584f8e071ad2db618bf74e0eb1e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_hodgkin, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 04:40:09 np0005604790 python3[88020]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_dashboard.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d241d473-9fcb-5f74-b163-f1ca4454e7f1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:40:09 np0005604790 podman[88024]: 2026-02-02 09:40:09.116332142 +0000 UTC m=+0.073225704 container create 0b6091a4cb9be3ac80e77d80241bdae205fb17e4fca59c835e9b55c40d73b368 (image=quay.io/ceph/ceph:v19, name=bold_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 04:40:09 np0005604790 podman[88024]: 2026-02-02 09:40:09.067436778 +0000 UTC m=+0.024330420 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:40:09 np0005604790 unruffled_hodgkin[88018]: --> passed data devices: 0 physical, 1 LVM
Feb  2 04:40:09 np0005604790 unruffled_hodgkin[88018]: --> All data devices are unavailable
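The two unruffled_hodgkin lines are ceph-volume, run by cephadm's periodic OSD-apply pass, reporting that the only candidate data device on this host is an LVM volume that is already consumed, so there is nothing new to create. One way to confirm which devices the orchestrator considers available, assuming the admin CLI on the host (field names per recent Ceph releases, so treat this as a sketch):

    import json
    import subprocess

    # Sketch: list devices as the orchestrator sees them, including
    # the reject reasons that make a device "unavailable".
    out = subprocess.run(
        ["ceph", "orch", "device", "ls", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    for host in json.loads(out):
        for dev in host.get("devices", []):
            print(host.get("name"), dev.get("path"),
                  dev.get("available"), dev.get("rejected_reasons"))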
Feb  2 04:40:09 np0005604790 systemd[1]: Started libpod-conmon-0b6091a4cb9be3ac80e77d80241bdae205fb17e4fca59c835e9b55c40d73b368.scope.
Feb  2 04:40:09 np0005604790 podman[87976]: 2026-02-02 09:40:09.200578968 +0000 UTC m=+0.442470530 container died ccb613de4a5e600f87551272985fb94854f76584f8e071ad2db618bf74e0eb1e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_hodgkin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Feb  2 04:40:09 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:40:09 np0005604790 systemd[1]: libpod-ccb613de4a5e600f87551272985fb94854f76584f8e071ad2db618bf74e0eb1e.scope: Deactivated successfully.
Feb  2 04:40:09 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a55f2f9ce6773f84208d90dc984bc43c7bedafbcfa3572b46288c68eded8291e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:09 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a55f2f9ce6773f84208d90dc984bc43c7bedafbcfa3572b46288c68eded8291e/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:09 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a55f2f9ce6773f84208d90dc984bc43c7bedafbcfa3572b46288c68eded8291e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:09 np0005604790 podman[88024]: 2026-02-02 09:40:09.36039572 +0000 UTC m=+0.317289382 container init 0b6091a4cb9be3ac80e77d80241bdae205fb17e4fca59c835e9b55c40d73b368 (image=quay.io/ceph/ceph:v19, name=bold_sinoussi, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Feb  2 04:40:09 np0005604790 podman[88024]: 2026-02-02 09:40:09.369255946 +0000 UTC m=+0.326149508 container start 0b6091a4cb9be3ac80e77d80241bdae205fb17e4fca59c835e9b55c40d73b368 (image=quay.io/ceph/ceph:v19, name=bold_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Feb  2 04:40:09 np0005604790 ceph-mgr[74785]: [progress INFO root] Writing back 11 completed events
Feb  2 04:40:09 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Feb  2 04:40:09 np0005604790 podman[88024]: 2026-02-02 09:40:09.48265481 +0000 UTC m=+0.439548382 container attach 0b6091a4cb9be3ac80e77d80241bdae205fb17e4fca59c835e9b55c40d73b368 (image=quay.io/ceph/ceph:v19, name=bold_sinoussi, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 04:40:09 np0005604790 systemd[1]: var-lib-containers-storage-overlay-ac043f877e514f09c40d07e6c3b8b7e58fcbb97f64a8964908e1a43844cc54aa-merged.mount: Deactivated successfully.
Feb  2 04:40:09 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.14310 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 04:40:09 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Feb  2 04:40:09 np0005604790 ceph-mgr[74785]: [cephadm INFO root] Saving service node-exporter spec with placement *
Feb  2 04:40:09 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Saving service node-exporter spec with placement *
Feb  2 04:40:09 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
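
Each "Saving service ... spec" pair (the mgr log line plus the mon's config-key set) is cephadm persisting the spec under mgr/cephadm/spec.<service>. Two ways to read it back from any admin host:

    ceph orch ls node-exporter --export                  # the spec as YAML
    ceph config-key get mgr/cephadm/spec.node-exporter   # the raw stored value
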
Feb  2 04:40:09 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 2.a scrub starts
Feb  2 04:40:09 np0005604790 podman[87976]: 2026-02-02 09:40:09.795122073 +0000 UTC m=+1.037013615 container remove ccb613de4a5e600f87551272985fb94854f76584f8e071ad2db618bf74e0eb1e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_hodgkin, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:40:09 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 2.a scrub ok
Feb  2 04:40:09 np0005604790 systemd[1]: libpod-conmon-ccb613de4a5e600f87551272985fb94854f76584f8e071ad2db618bf74e0eb1e.scope: Deactivated successfully.
Feb  2 04:40:10 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:10 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Feb  2 04:40:10 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Feb  2 04:40:10 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e33 e33: 3 total, 3 up, 3 in
Feb  2 04:40:10 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Feb  2 04:40:10 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Feb  2 04:40:10 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
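
The osd pool set lines show the pg_num change being carried out in two layers: the operator-visible pg_num target, and pg_num_actual, which the mgr steps internally until the split completes. The operator-side commands for the pools named here would be:

    ceph osd pool set cephfs.cephfs.data pg_num 32
    ceph osd pool set cephfs.cephfs.meta pg_num 32
    ceph osd pool autoscale-status   # what the autoscaler thinks of each pool
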
Feb  2 04:40:10 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 33 pg[7.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=33 pruub=15.470458984s) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active pruub 74.826347351s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:40:10 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:10 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 3 up, 3 in
Feb  2 04:40:10 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 33 pg[7.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=33 pruub=15.470458984s) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown pruub 74.826347351s@ mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:10 np0005604790 ceph-mgr[74785]: [cephadm INFO root] Saving service grafana spec with placement compute-0;count:1
Feb  2 04:40:10 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Saving service grafana spec with placement compute-0;count:1
Feb  2 04:40:10 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.grafana}] v 0)
Feb  2 04:40:10 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:10 np0005604790 ceph-mgr[74785]: [cephadm INFO root] Saving service prometheus spec with placement compute-0;count:1
Feb  2 04:40:10 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Saving service prometheus spec with placement compute-0;count:1
Feb  2 04:40:10 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.prometheus}] v 0)
Feb  2 04:40:10 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:10 np0005604790 ceph-mgr[74785]: [cephadm INFO root] Saving service alertmanager spec with placement compute-0;count:1
Feb  2 04:40:10 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Saving service alertmanager spec with placement compute-0;count:1
Feb  2 04:40:10 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.alertmanager}] v 0)
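
grafana, prometheus and alertmanager are each saved with placement "compute-0;count:1". Expressed as spec documents (a sketch; the YAML actually fed to orch apply is not in this log), that placement looks like:

    cat > /tmp/monitoring_spec.yaml <<'EOF'
    service_type: grafana
    placement:
      hosts:
        - compute-0
      count: 1
    ---
    service_type: prometheus
    placement:
      hosts:
        - compute-0
      count: 1
    ---
    service_type: alertmanager
    placement:
      hosts:
        - compute-0
      count: 1
    EOF
    # Applied the same way as before: ceph orch apply --in-file /tmp/monitoring_spec.yaml
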
Feb  2 04:40:10 np0005604790 podman[88175]: 2026-02-02 09:40:10.324805368 +0000 UTC m=+0.037581333 container create 098ba69ccb102507dd5307e105018eb01ab0ac450dd62004f6d507ef9bd017d8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_mendel, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Feb  2 04:40:10 np0005604790 systemd[1]: Started libpod-conmon-098ba69ccb102507dd5307e105018eb01ab0ac450dd62004f6d507ef9bd017d8.scope.
Feb  2 04:40:10 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:40:10 np0005604790 podman[88175]: 2026-02-02 09:40:10.396371286 +0000 UTC m=+0.109147351 container init 098ba69ccb102507dd5307e105018eb01ab0ac450dd62004f6d507ef9bd017d8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_mendel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb  2 04:40:10 np0005604790 podman[88175]: 2026-02-02 09:40:10.404025421 +0000 UTC m=+0.116801376 container start 098ba69ccb102507dd5307e105018eb01ab0ac450dd62004f6d507ef9bd017d8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_mendel, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Feb  2 04:40:10 np0005604790 podman[88175]: 2026-02-02 09:40:10.309786238 +0000 UTC m=+0.022562213 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:40:10 np0005604790 podman[88175]: 2026-02-02 09:40:10.409899277 +0000 UTC m=+0.122675232 container attach 098ba69ccb102507dd5307e105018eb01ab0ac450dd62004f6d507ef9bd017d8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_mendel, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Feb  2 04:40:10 np0005604790 bold_mendel[88191]: 167 167
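
The bold_mendel one-liner "167 167" looks like cephadm probing the ceph uid/gid inside the image (167:167 is the ceph account in these CentOS-based images); this is an assumption, since the probe command itself is not logged. A comparable check:

    # Assumed probe: print the owner uid/gid of /var/lib/ceph inside the image.
    podman run --rm --entrypoint stat quay.io/ceph/ceph:v19 -c '%u %g' /var/lib/ceph
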
Feb  2 04:40:10 np0005604790 systemd[1]: libpod-098ba69ccb102507dd5307e105018eb01ab0ac450dd62004f6d507ef9bd017d8.scope: Deactivated successfully.
Feb  2 04:40:10 np0005604790 conmon[88191]: conmon 098ba69ccb102507dd53 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-098ba69ccb102507dd5307e105018eb01ab0ac450dd62004f6d507ef9bd017d8.scope/container/memory.events
Feb  2 04:40:10 np0005604790 podman[88175]: 2026-02-02 09:40:10.412848076 +0000 UTC m=+0.125624021 container died 098ba69ccb102507dd5307e105018eb01ab0ac450dd62004f6d507ef9bd017d8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:40:10 np0005604790 systemd[1]: var-lib-containers-storage-overlay-700a1274f0158ccf40d31017695099038d2c213444c0af553583713a483aa02b-merged.mount: Deactivated successfully.
Feb  2 04:40:10 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:10 np0005604790 bold_sinoussi[88050]: Scheduled node-exporter update...
Feb  2 04:40:10 np0005604790 bold_sinoussi[88050]: Scheduled grafana update...
Feb  2 04:40:10 np0005604790 bold_sinoussi[88050]: Scheduled prometheus update...
Feb  2 04:40:10 np0005604790 bold_sinoussi[88050]: Scheduled alertmanager update...
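
These four "Scheduled ... update..." lines are the stdout of the orch apply container (bold_sinoussi), one per spec section it accepted. From here the services converge asynchronously, which can be followed with:

    ceph orch ls    # desired vs running counts per service
    ceph orch ps    # individual daemons as they are deployed
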
Feb  2 04:40:10 np0005604790 podman[88175]: 2026-02-02 09:40:10.444565352 +0000 UTC m=+0.157341317 container remove 098ba69ccb102507dd5307e105018eb01ab0ac450dd62004f6d507ef9bd017d8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_mendel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid)
Feb  2 04:40:10 np0005604790 systemd[1]: libpod-conmon-098ba69ccb102507dd5307e105018eb01ab0ac450dd62004f6d507ef9bd017d8.scope: Deactivated successfully.
Feb  2 04:40:10 np0005604790 systemd[1]: libpod-0b6091a4cb9be3ac80e77d80241bdae205fb17e4fca59c835e9b55c40d73b368.scope: Deactivated successfully.
Feb  2 04:40:10 np0005604790 podman[88024]: 2026-02-02 09:40:10.458247947 +0000 UTC m=+1.415141519 container died 0b6091a4cb9be3ac80e77d80241bdae205fb17e4fca59c835e9b55c40d73b368 (image=quay.io/ceph/ceph:v19, name=bold_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:40:10 np0005604790 systemd[1]: var-lib-containers-storage-overlay-a55f2f9ce6773f84208d90dc984bc43c7bedafbcfa3572b46288c68eded8291e-merged.mount: Deactivated successfully.
Feb  2 04:40:10 np0005604790 podman[88024]: 2026-02-02 09:40:10.493350753 +0000 UTC m=+1.450244315 container remove 0b6091a4cb9be3ac80e77d80241bdae205fb17e4fca59c835e9b55c40d73b368 (image=quay.io/ceph/ceph:v19, name=bold_sinoussi, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 04:40:10 np0005604790 systemd[1]: libpod-conmon-0b6091a4cb9be3ac80e77d80241bdae205fb17e4fca59c835e9b55c40d73b368.scope: Deactivated successfully.
Feb  2 04:40:10 np0005604790 podman[88229]: 2026-02-02 09:40:10.581343539 +0000 UTC m=+0.048930256 container create e09c8cf9561162a402f2661a771146d06445640ff97e654b8ee851c00af2f727 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Feb  2 04:40:10 np0005604790 systemd[1]: Started libpod-conmon-e09c8cf9561162a402f2661a771146d06445640ff97e654b8ee851c00af2f727.scope.
Feb  2 04:40:10 np0005604790 podman[88229]: 2026-02-02 09:40:10.555777037 +0000 UTC m=+0.023363774 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:40:10 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:40:10 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc41b08e30ad20b05d9f0a79a3b846f6779bf28fcc390b59b022149d53582e29/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:10 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc41b08e30ad20b05d9f0a79a3b846f6779bf28fcc390b59b022149d53582e29/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:10 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc41b08e30ad20b05d9f0a79a3b846f6779bf28fcc390b59b022149d53582e29/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:10 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc41b08e30ad20b05d9f0a79a3b846f6779bf28fcc390b59b022149d53582e29/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:10 np0005604790 podman[88229]: 2026-02-02 09:40:10.677592186 +0000 UTC m=+0.145178883 container init e09c8cf9561162a402f2661a771146d06445640ff97e654b8ee851c00af2f727 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_goldstine, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:40:10 np0005604790 podman[88229]: 2026-02-02 09:40:10.690272184 +0000 UTC m=+0.157858861 container start e09c8cf9561162a402f2661a771146d06445640ff97e654b8ee851c00af2f727 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_goldstine, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 04:40:10 np0005604790 podman[88229]: 2026-02-02 09:40:10.694411304 +0000 UTC m=+0.161998011 container attach e09c8cf9561162a402f2661a771146d06445640ff97e654b8ee851c00af2f727 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_goldstine, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 04:40:10 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e33 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
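
_set_new_cache_sizes is the mon autotuning its in-memory osdmap caches against its memory budget (roughly 1 GiB here, split across inc, full and kv allocations). If the budget needs inspecting, it is presumably the mon memory target:

    # Assumption: this build exposes the budget as mon_memory_target.
    ceph config get mon mon_memory_target
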
Feb  2 04:40:10 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v90: 193 pgs: 33 peering, 32 activating, 124 unknown, 4 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb  2 04:40:10 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Feb  2 04:40:10 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Feb  2 04:40:10 np0005604790 python3[88275]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d241d473-9fcb-5f74-b163-f1ca4454e7f1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/server_port 8443 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
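
This second ansible task is plain configuration: it sets the dashboard's server port to 8443 via ceph config set. Verifying it afterwards:

    ceph config get mgr mgr/dashboard/server_port   # should print 8443
    ceph mgr services                               # dashboard URL once the module serves
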
Feb  2 04:40:11 np0005604790 vigorous_goldstine[88245]: {
Feb  2 04:40:11 np0005604790 vigorous_goldstine[88245]:    "1": [
Feb  2 04:40:11 np0005604790 vigorous_goldstine[88245]:        {
Feb  2 04:40:11 np0005604790 vigorous_goldstine[88245]:            "devices": [
Feb  2 04:40:11 np0005604790 vigorous_goldstine[88245]:                "/dev/loop3"
Feb  2 04:40:11 np0005604790 vigorous_goldstine[88245]:            ],
Feb  2 04:40:11 np0005604790 vigorous_goldstine[88245]:            "lv_name": "ceph_lv0",
Feb  2 04:40:11 np0005604790 vigorous_goldstine[88245]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 04:40:11 np0005604790 vigorous_goldstine[88245]:            "lv_size": "21470642176",
Feb  2 04:40:11 np0005604790 vigorous_goldstine[88245]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d241d473-9fcb-5f74-b163-f1ca4454e7f1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fabfc705-a3af-416c-81a4-3fd4d777fb5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 04:40:11 np0005604790 vigorous_goldstine[88245]:            "lv_uuid": "lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a",
Feb  2 04:40:11 np0005604790 vigorous_goldstine[88245]:            "name": "ceph_lv0",
Feb  2 04:40:11 np0005604790 vigorous_goldstine[88245]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 04:40:11 np0005604790 vigorous_goldstine[88245]:            "tags": {
Feb  2 04:40:11 np0005604790 vigorous_goldstine[88245]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 04:40:11 np0005604790 vigorous_goldstine[88245]:                "ceph.block_uuid": "lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a",
Feb  2 04:40:11 np0005604790 vigorous_goldstine[88245]:                "ceph.cephx_lockbox_secret": "",
Feb  2 04:40:11 np0005604790 vigorous_goldstine[88245]:                "ceph.cluster_fsid": "d241d473-9fcb-5f74-b163-f1ca4454e7f1",
Feb  2 04:40:11 np0005604790 vigorous_goldstine[88245]:                "ceph.cluster_name": "ceph",
Feb  2 04:40:11 np0005604790 vigorous_goldstine[88245]:                "ceph.crush_device_class": "",
Feb  2 04:40:11 np0005604790 vigorous_goldstine[88245]:                "ceph.encrypted": "0",
Feb  2 04:40:11 np0005604790 vigorous_goldstine[88245]:                "ceph.osd_fsid": "fabfc705-a3af-416c-81a4-3fd4d777fb5f",
Feb  2 04:40:11 np0005604790 vigorous_goldstine[88245]:                "ceph.osd_id": "1",
Feb  2 04:40:11 np0005604790 vigorous_goldstine[88245]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 04:40:11 np0005604790 vigorous_goldstine[88245]:                "ceph.type": "block",
Feb  2 04:40:11 np0005604790 vigorous_goldstine[88245]:                "ceph.vdo": "0",
Feb  2 04:40:11 np0005604790 vigorous_goldstine[88245]:                "ceph.with_tpm": "0"
Feb  2 04:40:11 np0005604790 vigorous_goldstine[88245]:            },
Feb  2 04:40:11 np0005604790 vigorous_goldstine[88245]:            "type": "block",
Feb  2 04:40:11 np0005604790 vigorous_goldstine[88245]:            "vg_name": "ceph_vg0"
Feb  2 04:40:11 np0005604790 vigorous_goldstine[88245]:        }
Feb  2 04:40:11 np0005604790 vigorous_goldstine[88245]:    ]
Feb  2 04:40:11 np0005604790 vigorous_goldstine[88245]: }
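
The JSON block above has the shape of ceph-volume lvm list output for OSD 1: a single logical volume ceph_vg0/ceph_lv0 on /dev/loop3, already tagged with this cluster's fsid and osd_id, which is also why the earlier run reported all data devices as unavailable. Reproducing the listing (a sketch; cephadm mounts more than shown):

    podman run --rm --privileged -v /dev:/dev \
      --entrypoint ceph-volume quay.io/ceph/ceph:v19 \
      lvm list --format json
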
Feb  2 04:40:11 np0005604790 podman[88280]: 2026-02-02 09:40:11.039837306 +0000 UTC m=+0.049983754 container create 595d89e2e5e5903548aec72f477d11d5299eff6dc6047fdc4107fc0c298c2aeb (image=quay.io/ceph/ceph:v19, name=boring_booth, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:40:11 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Feb  2 04:40:11 np0005604790 ceph-mon[74489]: Saving service node-exporter spec with placement *
Feb  2 04:40:11 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:11 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Feb  2 04:40:11 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Feb  2 04:40:11 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:11 np0005604790 ceph-mon[74489]: Saving service grafana spec with placement compute-0;count:1
Feb  2 04:40:11 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:11 np0005604790 ceph-mon[74489]: Saving service prometheus spec with placement compute-0;count:1
Feb  2 04:40:11 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:11 np0005604790 ceph-mon[74489]: Saving service alertmanager spec with placement compute-0;count:1
Feb  2 04:40:11 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:11 np0005604790 podman[88229]: 2026-02-02 09:40:11.068029517 +0000 UTC m=+0.535616194 container died e09c8cf9561162a402f2661a771146d06445640ff97e654b8ee851c00af2f727 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Feb  2 04:40:11 np0005604790 systemd[1]: libpod-e09c8cf9561162a402f2661a771146d06445640ff97e654b8ee851c00af2f727.scope: Deactivated successfully.
Feb  2 04:40:11 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e34 e34: 3 total, 3 up, 3 in
Feb  2 04:40:11 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 3 up, 3 in
Feb  2 04:40:11 np0005604790 systemd[1]: Started libpod-conmon-595d89e2e5e5903548aec72f477d11d5299eff6dc6047fdc4107fc0c298c2aeb.scope.
Feb  2 04:40:11 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 34 pg[7.1c( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:11 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 34 pg[7.1d( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:11 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 34 pg[7.13( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:11 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 34 pg[7.10( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:11 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 34 pg[7.11( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:11 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 34 pg[7.1f( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:11 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 34 pg[7.16( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:11 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 34 pg[7.17( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:11 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 34 pg[7.14( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:11 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 34 pg[7.15( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:11 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 34 pg[7.12( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:11 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 34 pg[7.b( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:11 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 34 pg[7.a( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:11 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 34 pg[7.8( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:11 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 34 pg[7.9( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:11 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 34 pg[7.e( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:11 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 34 pg[7.6( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:11 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 34 pg[7.5( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:11 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 34 pg[7.4( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:11 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 34 pg[7.1( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:11 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 34 pg[7.7( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:11 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 34 pg[7.3( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:11 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 34 pg[7.2( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:11 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 34 pg[7.d( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:11 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 34 pg[7.c( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:11 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 34 pg[7.f( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:11 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 34 pg[7.1e( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:11 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 34 pg[7.19( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:11 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 34 pg[7.18( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:11 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 34 pg[7.1b( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:11 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 34 pg[7.1a( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:11 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 34 pg[7.1c( empty local-lis/les=33/34 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:11 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 34 pg[7.1d( empty local-lis/les=33/34 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:11 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 34 pg[7.13( empty local-lis/les=33/34 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:11 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 34 pg[7.10( empty local-lis/les=33/34 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:11 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 34 pg[7.11( empty local-lis/les=33/34 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:11 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 34 pg[7.1f( empty local-lis/les=33/34 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:11 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 34 pg[7.16( empty local-lis/les=33/34 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:11 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 34 pg[7.17( empty local-lis/les=33/34 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:11 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 34 pg[7.12( empty local-lis/les=33/34 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:11 np0005604790 systemd[1]: var-lib-containers-storage-overlay-bc41b08e30ad20b05d9f0a79a3b846f6779bf28fcc390b59b022149d53582e29-merged.mount: Deactivated successfully.
Feb  2 04:40:11 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 34 pg[7.14( empty local-lis/les=33/34 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:11 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 34 pg[7.a( empty local-lis/les=33/34 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:11 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 34 pg[7.b( empty local-lis/les=33/34 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:11 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 34 pg[7.15( empty local-lis/les=33/34 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:11 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 34 pg[7.8( empty local-lis/les=33/34 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:11 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 34 pg[7.e( empty local-lis/les=33/34 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:11 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 34 pg[7.6( empty local-lis/les=33/34 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:11 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 34 pg[7.5( empty local-lis/les=33/34 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:11 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 34 pg[7.4( empty local-lis/les=33/34 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:11 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 34 pg[7.0( empty local-lis/les=33/34 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:11 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 34 pg[7.7( empty local-lis/les=33/34 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:11 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 34 pg[7.1( empty local-lis/les=33/34 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:11 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 34 pg[7.2( empty local-lis/les=33/34 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:11 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 34 pg[7.3( empty local-lis/les=33/34 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:11 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 34 pg[7.d( empty local-lis/les=33/34 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:11 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 34 pg[7.f( empty local-lis/les=33/34 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:11 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 34 pg[7.19( empty local-lis/les=33/34 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:11 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 34 pg[7.18( empty local-lis/les=33/34 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:11 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 34 pg[7.1b( empty local-lis/les=33/34 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:11 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 34 pg[7.c( empty local-lis/les=33/34 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:11 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 34 pg[7.1a( empty local-lis/les=33/34 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:11 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 34 pg[7.1e( empty local-lis/les=33/34 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:11 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 34 pg[7.9( empty local-lis/les=33/34 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
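
The burst of pg_epoch 33/34 lines is osd.1 re-peering pool 7 after the pg_num change: every PG restarts its interval, transitions to Primary, and reports "Activating complete" once its (here single-OSD) acting set is active. The same convergence, watched from the CLI:

    ceph pg stat              # PG counts by state
    ceph pg dump pgs_brief    # per-PG state and acting set
    ceph pg 7.0 query         # detailed peering history for one PG
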
Feb  2 04:40:11 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:40:11 np0005604790 podman[88280]: 2026-02-02 09:40:11.021572969 +0000 UTC m=+0.031719417 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:40:11 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e936d89161ab06c8238f2303cb5331f53bf604adcc8c8fac5c3c4dd9b734d1a6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:11 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e936d89161ab06c8238f2303cb5331f53bf604adcc8c8fac5c3c4dd9b734d1a6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:11 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e936d89161ab06c8238f2303cb5331f53bf604adcc8c8fac5c3c4dd9b734d1a6/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:11 np0005604790 podman[88229]: 2026-02-02 09:40:11.118635607 +0000 UTC m=+0.586222284 container remove e09c8cf9561162a402f2661a771146d06445640ff97e654b8ee851c00af2f727 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_goldstine, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb  2 04:40:11 np0005604790 systemd[1]: libpod-conmon-e09c8cf9561162a402f2661a771146d06445640ff97e654b8ee851c00af2f727.scope: Deactivated successfully.
Feb  2 04:40:11 np0005604790 podman[88280]: 2026-02-02 09:40:11.130526274 +0000 UTC m=+0.140672702 container init 595d89e2e5e5903548aec72f477d11d5299eff6dc6047fdc4107fc0c298c2aeb (image=quay.io/ceph/ceph:v19, name=boring_booth, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:40:11 np0005604790 podman[88280]: 2026-02-02 09:40:11.136440192 +0000 UTC m=+0.146586600 container start 595d89e2e5e5903548aec72f477d11d5299eff6dc6047fdc4107fc0c298c2aeb (image=quay.io/ceph/ceph:v19, name=boring_booth, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb  2 04:40:11 np0005604790 podman[88280]: 2026-02-02 09:40:11.13901544 +0000 UTC m=+0.149161868 container attach 595d89e2e5e5903548aec72f477d11d5299eff6dc6047fdc4107fc0c298c2aeb (image=quay.io/ceph/ceph:v19, name=boring_booth, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Feb  2 04:40:11 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/server_port}] v 0)
Feb  2 04:40:11 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2304504428' entity='client.admin' 
Feb  2 04:40:11 np0005604790 systemd[1]: libpod-595d89e2e5e5903548aec72f477d11d5299eff6dc6047fdc4107fc0c298c2aeb.scope: Deactivated successfully.
Feb  2 04:40:11 np0005604790 podman[88280]: 2026-02-02 09:40:11.525082236 +0000 UTC m=+0.535228644 container died 595d89e2e5e5903548aec72f477d11d5299eff6dc6047fdc4107fc0c298c2aeb (image=quay.io/ceph/ceph:v19, name=boring_booth, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:40:11 np0005604790 systemd[1]: var-lib-containers-storage-overlay-e936d89161ab06c8238f2303cb5331f53bf604adcc8c8fac5c3c4dd9b734d1a6-merged.mount: Deactivated successfully.
Feb  2 04:40:11 np0005604790 podman[88280]: 2026-02-02 09:40:11.686072369 +0000 UTC m=+0.696218777 container remove 595d89e2e5e5903548aec72f477d11d5299eff6dc6047fdc4107fc0c298c2aeb (image=quay.io/ceph/ceph:v19, name=boring_booth, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Feb  2 04:40:11 np0005604790 systemd[1]: libpod-conmon-595d89e2e5e5903548aec72f477d11d5299eff6dc6047fdc4107fc0c298c2aeb.scope: Deactivated successfully.
Feb  2 04:40:11 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Feb  2 04:40:11 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Feb  2 04:40:11 np0005604790 podman[88435]: 2026-02-02 09:40:11.83386962 +0000 UTC m=+0.037366947 container create 8277841cdd60f8a907143c071e25962f98e238587f53749935b773b3f326d617 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_feistel, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:40:11 np0005604790 systemd[1]: Started libpod-conmon-8277841cdd60f8a907143c071e25962f98e238587f53749935b773b3f326d617.scope.
Feb  2 04:40:11 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:40:11 np0005604790 podman[88435]: 2026-02-02 09:40:11.816524928 +0000 UTC m=+0.020022325 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:40:11 np0005604790 podman[88435]: 2026-02-02 09:40:11.935257654 +0000 UTC m=+0.138754951 container init 8277841cdd60f8a907143c071e25962f98e238587f53749935b773b3f326d617 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_feistel, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb  2 04:40:11 np0005604790 podman[88435]: 2026-02-02 09:40:11.940167555 +0000 UTC m=+0.143664852 container start 8277841cdd60f8a907143c071e25962f98e238587f53749935b773b3f326d617 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_feistel, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 04:40:11 np0005604790 upbeat_feistel[88478]: 167 167
Feb  2 04:40:11 np0005604790 systemd[1]: libpod-8277841cdd60f8a907143c071e25962f98e238587f53749935b773b3f326d617.scope: Deactivated successfully.
Feb  2 04:40:11 np0005604790 podman[88435]: 2026-02-02 09:40:11.952305038 +0000 UTC m=+0.155802365 container attach 8277841cdd60f8a907143c071e25962f98e238587f53749935b773b3f326d617 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_feistel, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Feb  2 04:40:11 np0005604790 podman[88435]: 2026-02-02 09:40:11.953061619 +0000 UTC m=+0.156558946 container died 8277841cdd60f8a907143c071e25962f98e238587f53749935b773b3f326d617 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_feistel, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 04:40:11 np0005604790 systemd[1]: var-lib-containers-storage-overlay-5cd8cafa0154d6bdd984e689b03858c4c2a3314de42a6b6b6c1fce89cde71914-merged.mount: Deactivated successfully.
Feb  2 04:40:11 np0005604790 podman[88435]: 2026-02-02 09:40:11.99177347 +0000 UTC m=+0.195270777 container remove 8277841cdd60f8a907143c071e25962f98e238587f53749935b773b3f326d617 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_feistel, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 04:40:12 np0005604790 systemd[1]: libpod-conmon-8277841cdd60f8a907143c071e25962f98e238587f53749935b773b3f326d617.scope: Deactivated successfully.
Feb  2 04:40:12 np0005604790 python3[88475]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d241d473-9fcb-5f74-b163-f1ca4454e7f1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/ssl_server_port 8443 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:40:12 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Feb  2 04:40:12 np0005604790 podman[88497]: 2026-02-02 09:40:12.109242742 +0000 UTC m=+0.081950676 container create f09f82c28abaab53f240ccd361f29989be2817a6ddc423008d5dbf199d7a08c4 (image=quay.io/ceph/ceph:v19, name=thirsty_lewin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid)
Feb  2 04:40:12 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e35 e35: 3 total, 3 up, 3 in
Feb  2 04:40:12 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 3 up, 3 in
Feb  2 04:40:12 np0005604790 ceph-mon[74489]: from='client.? 192.168.122.100:0/2304504428' entity='client.admin' 
Feb  2 04:40:12 np0005604790 podman[88515]: 2026-02-02 09:40:12.152938588 +0000 UTC m=+0.077043436 container create 96e3e7f3b83e9b4920eca5cc53bab52ff5f9caed7c34aec5f17db5daa8561883 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_hoover, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:40:12 np0005604790 podman[88497]: 2026-02-02 09:40:12.061657543 +0000 UTC m=+0.034365567 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:40:12 np0005604790 systemd[1]: Started libpod-conmon-f09f82c28abaab53f240ccd361f29989be2817a6ddc423008d5dbf199d7a08c4.scope.
Feb  2 04:40:12 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:40:12 np0005604790 systemd[1]: Started libpod-conmon-96e3e7f3b83e9b4920eca5cc53bab52ff5f9caed7c34aec5f17db5daa8561883.scope.
Feb  2 04:40:12 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1487d5ada894019957fd2680ed2b7cccf2c2a404f3a96b003d29be6c0248170d/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:12 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1487d5ada894019957fd2680ed2b7cccf2c2a404f3a96b003d29be6c0248170d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:12 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1487d5ada894019957fd2680ed2b7cccf2c2a404f3a96b003d29be6c0248170d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:12 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:40:12 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8545a1ef730b9dad43cf3a2cf17c0047bd69cb5534458907631553c5444cfaae/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:12 np0005604790 podman[88497]: 2026-02-02 09:40:12.206397263 +0000 UTC m=+0.179105207 container init f09f82c28abaab53f240ccd361f29989be2817a6ddc423008d5dbf199d7a08c4 (image=quay.io/ceph/ceph:v19, name=thirsty_lewin, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Feb  2 04:40:12 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8545a1ef730b9dad43cf3a2cf17c0047bd69cb5534458907631553c5444cfaae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:12 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8545a1ef730b9dad43cf3a2cf17c0047bd69cb5534458907631553c5444cfaae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:12 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8545a1ef730b9dad43cf3a2cf17c0047bd69cb5534458907631553c5444cfaae/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:12 np0005604790 podman[88515]: 2026-02-02 09:40:12.123872903 +0000 UTC m=+0.047977831 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:40:12 np0005604790 podman[88497]: 2026-02-02 09:40:12.226095148 +0000 UTC m=+0.198803092 container start f09f82c28abaab53f240ccd361f29989be2817a6ddc423008d5dbf199d7a08c4 (image=quay.io/ceph/ceph:v19, name=thirsty_lewin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:40:12 np0005604790 podman[88515]: 2026-02-02 09:40:12.261142783 +0000 UTC m=+0.185247641 container init 96e3e7f3b83e9b4920eca5cc53bab52ff5f9caed7c34aec5f17db5daa8561883 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_hoover, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb  2 04:40:12 np0005604790 podman[88515]: 2026-02-02 09:40:12.266782863 +0000 UTC m=+0.190887711 container start 96e3e7f3b83e9b4920eca5cc53bab52ff5f9caed7c34aec5f17db5daa8561883 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_hoover, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:40:12 np0005604790 podman[88515]: 2026-02-02 09:40:12.321527893 +0000 UTC m=+0.245632781 container attach 96e3e7f3b83e9b4920eca5cc53bab52ff5f9caed7c34aec5f17db5daa8561883 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_hoover, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Feb  2 04:40:12 np0005604790 podman[88497]: 2026-02-02 09:40:12.361741706 +0000 UTC m=+0.334449640 container attach f09f82c28abaab53f240ccd361f29989be2817a6ddc423008d5dbf199d7a08c4 (image=quay.io/ceph/ceph:v19, name=thirsty_lewin, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Feb  2 04:40:12 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/ssl_server_port}] v 0)
Feb  2 04:40:12 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3997762270' entity='client.admin' 
Feb  2 04:40:12 np0005604790 systemd[1]: libpod-f09f82c28abaab53f240ccd361f29989be2817a6ddc423008d5dbf199d7a08c4.scope: Deactivated successfully.
Feb  2 04:40:12 np0005604790 podman[88497]: 2026-02-02 09:40:12.713385013 +0000 UTC m=+0.686092947 container died f09f82c28abaab53f240ccd361f29989be2817a6ddc423008d5dbf199d7a08c4 (image=quay.io/ceph/ceph:v19, name=thirsty_lewin, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:40:12 np0005604790 lvm[88646]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 04:40:12 np0005604790 lvm[88646]: VG ceph_vg0 finished
Feb  2 04:40:12 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v93: 193 pgs: 65 peering, 31 unknown, 97 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb  2 04:40:12 np0005604790 systemd[1]: var-lib-containers-storage-overlay-1487d5ada894019957fd2680ed2b7cccf2c2a404f3a96b003d29be6c0248170d-merged.mount: Deactivated successfully.
Feb  2 04:40:12 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 2.6 deep-scrub starts
Feb  2 04:40:12 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 2.6 deep-scrub ok
Feb  2 04:40:12 np0005604790 podman[88497]: 2026-02-02 09:40:12.86288733 +0000 UTC m=+0.835595264 container remove f09f82c28abaab53f240ccd361f29989be2817a6ddc423008d5dbf199d7a08c4 (image=quay.io/ceph/ceph:v19, name=thirsty_lewin, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:40:12 np0005604790 systemd[1]: libpod-conmon-f09f82c28abaab53f240ccd361f29989be2817a6ddc423008d5dbf199d7a08c4.scope: Deactivated successfully.
Feb  2 04:40:12 np0005604790 romantic_hoover[88537]: {}
Feb  2 04:40:12 np0005604790 systemd[1]: libpod-96e3e7f3b83e9b4920eca5cc53bab52ff5f9caed7c34aec5f17db5daa8561883.scope: Deactivated successfully.
Feb  2 04:40:12 np0005604790 podman[88515]: 2026-02-02 09:40:12.910660684 +0000 UTC m=+0.834765542 container died 96e3e7f3b83e9b4920eca5cc53bab52ff5f9caed7c34aec5f17db5daa8561883 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_hoover, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Feb  2 04:40:12 np0005604790 systemd[1]: var-lib-containers-storage-overlay-8545a1ef730b9dad43cf3a2cf17c0047bd69cb5534458907631553c5444cfaae-merged.mount: Deactivated successfully.
Feb  2 04:40:12 np0005604790 podman[88515]: 2026-02-02 09:40:12.945113722 +0000 UTC m=+0.869218570 container remove 96e3e7f3b83e9b4920eca5cc53bab52ff5f9caed7c34aec5f17db5daa8561883 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_hoover, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Feb  2 04:40:12 np0005604790 systemd[1]: libpod-conmon-96e3e7f3b83e9b4920eca5cc53bab52ff5f9caed7c34aec5f17db5daa8561883.scope: Deactivated successfully.
Feb  2 04:40:12 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 04:40:13 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:13 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 04:40:13 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:13 np0005604790 ceph-mgr[74785]: [progress INFO root] update: starting ev 4e3b93db-f639-4917-a2f6-3808ae387925 (Updating rgw.rgw deployment (+3 -> 3))
Feb  2 04:40:13 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.zjyufj", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Feb  2 04:40:13 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.zjyufj", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Feb  2 04:40:13 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.zjyufj", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Feb  2 04:40:13 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Feb  2 04:40:13 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:13 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 04:40:13 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 04:40:13 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-2.zjyufj on compute-2
Feb  2 04:40:13 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-2.zjyufj on compute-2
Feb  2 04:40:13 np0005604790 ceph-mon[74489]: from='client.? 192.168.122.100:0/3997762270' entity='client.admin' 
Feb  2 04:40:13 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:13 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:13 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.zjyufj", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Feb  2 04:40:13 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.zjyufj", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Feb  2 04:40:13 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:13 np0005604790 ceph-mon[74489]: Deploying daemon rgw.rgw.compute-2.zjyufj on compute-2
Feb  2 04:40:13 np0005604790 python3[88688]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d241d473-9fcb-5f74-b163-f1ca4454e7f1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/ssl false _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:40:13 np0005604790 podman[88689]: 2026-02-02 09:40:13.247123096 +0000 UTC m=+0.058272385 container create 9d14c1fe43f17de4c43376e12595a1bdbedc5870b7c05bde984b24293b1d915d (image=quay.io/ceph/ceph:v19, name=loving_leakey, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Feb  2 04:40:13 np0005604790 systemd[1]: Started libpod-conmon-9d14c1fe43f17de4c43376e12595a1bdbedc5870b7c05bde984b24293b1d915d.scope.
Feb  2 04:40:13 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:40:13 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e472b8b3f937a0af401b9f4efc8c7a414633e3f3e4838b14672b3fe67eeba28f/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:13 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e472b8b3f937a0af401b9f4efc8c7a414633e3f3e4838b14672b3fe67eeba28f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:13 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e472b8b3f937a0af401b9f4efc8c7a414633e3f3e4838b14672b3fe67eeba28f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:13 np0005604790 podman[88689]: 2026-02-02 09:40:13.224069981 +0000 UTC m=+0.035219350 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:40:13 np0005604790 podman[88689]: 2026-02-02 09:40:13.343130996 +0000 UTC m=+0.154280375 container init 9d14c1fe43f17de4c43376e12595a1bdbedc5870b7c05bde984b24293b1d915d (image=quay.io/ceph/ceph:v19, name=loving_leakey, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:40:13 np0005604790 podman[88689]: 2026-02-02 09:40:13.349398523 +0000 UTC m=+0.160547812 container start 9d14c1fe43f17de4c43376e12595a1bdbedc5870b7c05bde984b24293b1d915d (image=quay.io/ceph/ceph:v19, name=loving_leakey, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb  2 04:40:13 np0005604790 podman[88689]: 2026-02-02 09:40:13.357542581 +0000 UTC m=+0.168691900 container attach 9d14c1fe43f17de4c43376e12595a1bdbedc5870b7c05bde984b24293b1d915d (image=quay.io/ceph/ceph:v19, name=loving_leakey, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Feb  2 04:40:13 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/ssl}] v 0)
Feb  2 04:40:13 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3629224831' entity='client.admin' 
Feb  2 04:40:13 np0005604790 systemd[1]: libpod-9d14c1fe43f17de4c43376e12595a1bdbedc5870b7c05bde984b24293b1d915d.scope: Deactivated successfully.
Feb  2 04:40:13 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Feb  2 04:40:13 np0005604790 podman[88729]: 2026-02-02 09:40:13.798381887 +0000 UTC m=+0.024384552 container died 9d14c1fe43f17de4c43376e12595a1bdbedc5870b7c05bde984b24293b1d915d (image=quay.io/ceph/ceph:v19, name=loving_leakey, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:40:13 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Feb  2 04:40:13 np0005604790 systemd[1]: var-lib-containers-storage-overlay-e472b8b3f937a0af401b9f4efc8c7a414633e3f3e4838b14672b3fe67eeba28f-merged.mount: Deactivated successfully.
Feb  2 04:40:13 np0005604790 podman[88729]: 2026-02-02 09:40:13.834569022 +0000 UTC m=+0.060571677 container remove 9d14c1fe43f17de4c43376e12595a1bdbedc5870b7c05bde984b24293b1d915d (image=quay.io/ceph/ceph:v19, name=loving_leakey, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:40:13 np0005604790 systemd[1]: libpod-conmon-9d14c1fe43f17de4c43376e12595a1bdbedc5870b7c05bde984b24293b1d915d.scope: Deactivated successfully.
Feb  2 04:40:14 np0005604790 ceph-mon[74489]: from='client.? 192.168.122.100:0/3629224831' entity='client.admin' 
Feb  2 04:40:14 np0005604790 python3[88769]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a -f 'name=ceph-?(.*)-mgr.*' --format \{\{\.Command\}\} --no-trunc#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:40:14 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Feb  2 04:40:14 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:14 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Feb  2 04:40:14 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:14 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Feb  2 04:40:14 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:14 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.ezjvcf", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Feb  2 04:40:14 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.ezjvcf", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Feb  2 04:40:14 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 2.1e deep-scrub starts
Feb  2 04:40:14 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.ezjvcf", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Feb  2 04:40:14 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 2.1e deep-scrub ok
Feb  2 04:40:14 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Feb  2 04:40:14 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:14 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 04:40:14 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 04:40:14 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-1.ezjvcf on compute-1
Feb  2 04:40:14 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-1.ezjvcf on compute-1
Feb  2 04:40:14 np0005604790 python3[88807]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d241d473-9fcb-5f74-b163-f1ca4454e7f1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/compute-0.djvyfo/server_addr 192.168.122.100#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:40:14 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v94: 193 pgs: 32 peering, 161 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb  2 04:40:14 np0005604790 podman[88810]: 2026-02-02 09:40:14.890855189 +0000 UTC m=+0.052129691 container create 8239de5691598248f6414b07b7133b54e9b2bdd2779c73cd00655124578a0624 (image=quay.io/ceph/ceph:v19, name=interesting_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Feb  2 04:40:14 np0005604790 systemd[1]: Started libpod-conmon-8239de5691598248f6414b07b7133b54e9b2bdd2779c73cd00655124578a0624.scope.
Feb  2 04:40:14 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:40:14 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e03f152384c935c91fbf903d01fe7f18b4452611e15bf253b93648cfb6228af/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:14 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e03f152384c935c91fbf903d01fe7f18b4452611e15bf253b93648cfb6228af/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:14 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e03f152384c935c91fbf903d01fe7f18b4452611e15bf253b93648cfb6228af/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:14 np0005604790 podman[88810]: 2026-02-02 09:40:14.865100892 +0000 UTC m=+0.026375394 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:40:14 np0005604790 podman[88810]: 2026-02-02 09:40:14.990128487 +0000 UTC m=+0.151402989 container init 8239de5691598248f6414b07b7133b54e9b2bdd2779c73cd00655124578a0624 (image=quay.io/ceph/ceph:v19, name=interesting_bouman, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:40:14 np0005604790 podman[88810]: 2026-02-02 09:40:14.998149521 +0000 UTC m=+0.159424023 container start 8239de5691598248f6414b07b7133b54e9b2bdd2779c73cd00655124578a0624 (image=quay.io/ceph/ceph:v19, name=interesting_bouman, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Feb  2 04:40:15 np0005604790 podman[88810]: 2026-02-02 09:40:15.003156254 +0000 UTC m=+0.164430756 container attach 8239de5691598248f6414b07b7133b54e9b2bdd2779c73cd00655124578a0624 (image=quay.io/ceph/ceph:v19, name=interesting_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Feb  2 04:40:15 np0005604790 ceph-mgr[74785]: [progress WARNING root] Starting Global Recovery Event,32 pgs not in active + clean state
Feb  2 04:40:15 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:15 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:15 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:15 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.ezjvcf", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Feb  2 04:40:15 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.ezjvcf", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Feb  2 04:40:15 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:15 np0005604790 ceph-mon[74489]: Deploying daemon rgw.rgw.compute-1.ezjvcf on compute-1
Feb  2 04:40:15 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/compute-0.djvyfo/server_addr}] v 0)
Feb  2 04:40:15 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3477511090' entity='client.admin' 
Feb  2 04:40:15 np0005604790 systemd[1]: libpod-8239de5691598248f6414b07b7133b54e9b2bdd2779c73cd00655124578a0624.scope: Deactivated successfully.
Feb  2 04:40:15 np0005604790 podman[88810]: 2026-02-02 09:40:15.375197085 +0000 UTC m=+0.536471577 container died 8239de5691598248f6414b07b7133b54e9b2bdd2779c73cd00655124578a0624 (image=quay.io/ceph/ceph:v19, name=interesting_bouman, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb  2 04:40:15 np0005604790 systemd[1]: var-lib-containers-storage-overlay-9e03f152384c935c91fbf903d01fe7f18b4452611e15bf253b93648cfb6228af-merged.mount: Deactivated successfully.
Feb  2 04:40:15 np0005604790 podman[88810]: 2026-02-02 09:40:15.431358403 +0000 UTC m=+0.592632885 container remove 8239de5691598248f6414b07b7133b54e9b2bdd2779c73cd00655124578a0624 (image=quay.io/ceph/ceph:v19, name=interesting_bouman, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Feb  2 04:40:15 np0005604790 systemd[1]: libpod-conmon-8239de5691598248f6414b07b7133b54e9b2bdd2779c73cd00655124578a0624.scope: Deactivated successfully.
Feb  2 04:40:15 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Feb  2 04:40:15 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Feb  2 04:40:15 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Feb  2 04:40:15 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0)
Feb  2 04:40:15 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.zjyufj' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Feb  2 04:40:15 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Feb  2 04:40:15 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Feb  2 04:40:15 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e36 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 04:40:16 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Feb  2 04:40:16 np0005604790 ceph-mon[74489]: from='client.? 192.168.122.100:0/3477511090' entity='client.admin' 
Feb  2 04:40:16 np0005604790 ceph-mon[74489]: from='client.? 192.168.122.102:0/343742408' entity='client.rgw.rgw.compute-2.zjyufj' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Feb  2 04:40:16 np0005604790 ceph-mon[74489]: from='client.? ' entity='client.rgw.rgw.compute-2.zjyufj' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Feb  2 04:40:16 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:16 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Feb  2 04:40:16 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:16 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Feb  2 04:40:16 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:16 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.vltabo", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Feb  2 04:40:16 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.vltabo", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Feb  2 04:40:16 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.vltabo", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Feb  2 04:40:16 np0005604790 python3[88887]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d241d473-9fcb-5f74-b163-f1ca4454e7f1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/compute-1.teascl/server_addr 192.168.122.101#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:40:16 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Feb  2 04:40:16 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:16 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 04:40:16 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 04:40:16 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.vltabo on compute-0
Feb  2 04:40:16 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.vltabo on compute-0
Feb  2 04:40:16 np0005604790 podman[88888]: 2026-02-02 09:40:16.523820985 +0000 UTC m=+0.051445573 container create 4c46e2d88c24919aea974ba6ae5714744f0c834da3a96477c76ea2f58deaefb1 (image=quay.io/ceph/ceph:v19, name=nervous_mcnulty, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 04:40:16 np0005604790 systemd[1]: Started libpod-conmon-4c46e2d88c24919aea974ba6ae5714744f0c834da3a96477c76ea2f58deaefb1.scope.
Feb  2 04:40:16 np0005604790 podman[88888]: 2026-02-02 09:40:16.493334472 +0000 UTC m=+0.020959040 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:40:16 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:40:16 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83e95bed1c52314026fc8830faee36800f46007cbbbdafc910d72fac4e1cdc54/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:16 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83e95bed1c52314026fc8830faee36800f46007cbbbdafc910d72fac4e1cdc54/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:16 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83e95bed1c52314026fc8830faee36800f46007cbbbdafc910d72fac4e1cdc54/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:16 np0005604790 podman[88888]: 2026-02-02 09:40:16.622605319 +0000 UTC m=+0.150229887 container init 4c46e2d88c24919aea974ba6ae5714744f0c834da3a96477c76ea2f58deaefb1 (image=quay.io/ceph/ceph:v19, name=nervous_mcnulty, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:40:16 np0005604790 podman[88888]: 2026-02-02 09:40:16.628171577 +0000 UTC m=+0.155796125 container start 4c46e2d88c24919aea974ba6ae5714744f0c834da3a96477c76ea2f58deaefb1 (image=quay.io/ceph/ceph:v19, name=nervous_mcnulty, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 04:40:16 np0005604790 podman[88888]: 2026-02-02 09:40:16.631097635 +0000 UTC m=+0.158722193 container attach 4c46e2d88c24919aea974ba6ae5714744f0c834da3a96477c76ea2f58deaefb1 (image=quay.io/ceph/ceph:v19, name=nervous_mcnulty, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Feb  2 04:40:16 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Feb  2 04:40:16 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.zjyufj' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Feb  2 04:40:16 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Feb  2 04:40:16 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Feb  2 04:40:16 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Feb  2 04:40:16 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Feb  2 04:40:16 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v97: 194 pgs: 1 creating+peering, 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb  2 04:40:16 np0005604790 podman[89021]: 2026-02-02 09:40:16.976827165 +0000 UTC m=+0.050043146 container create a7a4eb6d1f99b807835272efe66cab53545c16bc760d66d33942aa597c07fc0e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_bouman, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 04:40:16 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/compute-1.teascl/server_addr}] v 0)
Feb  2 04:40:16 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1069846288' entity='client.admin' 
Feb  2 04:40:17 np0005604790 systemd[1]: Started libpod-conmon-a7a4eb6d1f99b807835272efe66cab53545c16bc760d66d33942aa597c07fc0e.scope.
Feb  2 04:40:17 np0005604790 systemd[1]: libpod-4c46e2d88c24919aea974ba6ae5714744f0c834da3a96477c76ea2f58deaefb1.scope: Deactivated successfully.
Feb  2 04:40:17 np0005604790 podman[88888]: 2026-02-02 09:40:17.021695061 +0000 UTC m=+0.549319619 container died 4c46e2d88c24919aea974ba6ae5714744f0c834da3a96477c76ea2f58deaefb1 (image=quay.io/ceph/ceph:v19, name=nervous_mcnulty, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True)
Feb  2 04:40:17 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:40:17 np0005604790 podman[89021]: 2026-02-02 09:40:16.952924098 +0000 UTC m=+0.026140129 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:40:17 np0005604790 systemd[1]: var-lib-containers-storage-overlay-83e95bed1c52314026fc8830faee36800f46007cbbbdafc910d72fac4e1cdc54-merged.mount: Deactivated successfully.
Feb  2 04:40:17 np0005604790 podman[89021]: 2026-02-02 09:40:17.062868019 +0000 UTC m=+0.136084010 container init a7a4eb6d1f99b807835272efe66cab53545c16bc760d66d33942aa597c07fc0e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Feb  2 04:40:17 np0005604790 podman[89021]: 2026-02-02 09:40:17.067120143 +0000 UTC m=+0.140336124 container start a7a4eb6d1f99b807835272efe66cab53545c16bc760d66d33942aa597c07fc0e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_bouman, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True)
Feb  2 04:40:17 np0005604790 upbeat_bouman[89038]: 167 167
Feb  2 04:40:17 np0005604790 systemd[1]: libpod-a7a4eb6d1f99b807835272efe66cab53545c16bc760d66d33942aa597c07fc0e.scope: Deactivated successfully.
Feb  2 04:40:17 np0005604790 podman[88888]: 2026-02-02 09:40:17.07113201 +0000 UTC m=+0.598756558 container remove 4c46e2d88c24919aea974ba6ae5714744f0c834da3a96477c76ea2f58deaefb1 (image=quay.io/ceph/ceph:v19, name=nervous_mcnulty, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Feb  2 04:40:17 np0005604790 systemd[1]: libpod-conmon-4c46e2d88c24919aea974ba6ae5714744f0c834da3a96477c76ea2f58deaefb1.scope: Deactivated successfully.
Feb  2 04:40:17 np0005604790 podman[89021]: 2026-02-02 09:40:17.079310548 +0000 UTC m=+0.152526519 container attach a7a4eb6d1f99b807835272efe66cab53545c16bc760d66d33942aa597c07fc0e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_bouman, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb  2 04:40:17 np0005604790 podman[89021]: 2026-02-02 09:40:17.080479869 +0000 UTC m=+0.153695860 container died a7a4eb6d1f99b807835272efe66cab53545c16bc760d66d33942aa597c07fc0e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_bouman, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 04:40:17 np0005604790 systemd[1]: var-lib-containers-storage-overlay-f454fbb0a74ddb0be9a36003c3d86b9082adc718f08e37800e5ab48d276960a7-merged.mount: Deactivated successfully.
Feb  2 04:40:17 np0005604790 podman[89021]: 2026-02-02 09:40:17.136619706 +0000 UTC m=+0.209835667 container remove a7a4eb6d1f99b807835272efe66cab53545c16bc760d66d33942aa597c07fc0e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_bouman, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Feb  2 04:40:17 np0005604790 systemd[1]: libpod-conmon-a7a4eb6d1f99b807835272efe66cab53545c16bc760d66d33942aa597c07fc0e.scope: Deactivated successfully.
Feb  2 04:40:17 np0005604790 systemd[1]: Reloading.
Feb  2 04:40:17 np0005604790 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 04:40:17 np0005604790 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 04:40:17 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:17 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:17 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:17 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.vltabo", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Feb  2 04:40:17 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.vltabo", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Feb  2 04:40:17 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:17 np0005604790 ceph-mon[74489]: Deploying daemon rgw.rgw.compute-0.vltabo on compute-0
Feb  2 04:40:17 np0005604790 ceph-mon[74489]: from='client.? ' entity='client.rgw.rgw.compute-2.zjyufj' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Feb  2 04:40:17 np0005604790 ceph-mon[74489]: from='client.? 192.168.122.100:0/1069846288' entity='client.admin' 
Feb  2 04:40:17 np0005604790 systemd[1]: Reloading.
Feb  2 04:40:17 np0005604790 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 04:40:17 np0005604790 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 04:40:17 np0005604790 systemd[1]: Starting Ceph rgw.rgw.compute-0.vltabo for d241d473-9fcb-5f74-b163-f1ca4454e7f1...
Feb  2 04:40:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Feb  2 04:40:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Feb  2 04:40:17 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Feb  2 04:40:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Feb  2 04:40:17 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.zjyufj' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Feb  2 04:40:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Feb  2 04:40:17 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.ezjvcf' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Feb  2 04:40:17 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Feb  2 04:40:17 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Feb  2 04:40:17 np0005604790 python3[89188]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d241d473-9fcb-5f74-b163-f1ca4454e7f1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/compute-2.gzlyac/server_addr 192.168.122.102#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
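[note] The Ansible task above shells out to a one-shot ceph CLI container. Unwrapping the syslog escaping (the literal "#012" is an octal-escaped trailing newline), the invocation reconstructs to roughly the following; every path, image tag, and the FSID are taken directly from the log line:

    podman run --rm --net=host --ipc=host --interactive \
        --volume /etc/ceph:/etc/ceph:z \
        --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
        --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z \
        --entrypoint ceph quay.io/ceph/ceph:v19 \
        --fsid d241d473-9fcb-5f74-b163-f1ca4454e7f1 \
        -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        config set mgr mgr/dashboard/compute-2.gzlyac/server_addr 192.168.122.102

The matching mon_command appears a moment later in the mon audit log as handle_command([{prefix=config set, name=mgr/dashboard/compute-2.gzlyac/server_addr}]).
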
Feb  2 04:40:17 np0005604790 podman[89221]: 2026-02-02 09:40:17.925329769 +0000 UTC m=+0.041622001 container create 2dea2149c1564b5d8c5d4eba925c3597c22745b2ed907d0270e5352d9fbd157f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-rgw-rgw-compute-0-vltabo, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:40:17 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2036ee02252aa9c2636df8dbfaa1c298b21cf03906444b82293233fc35f7080/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:17 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2036ee02252aa9c2636df8dbfaa1c298b21cf03906444b82293233fc35f7080/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:17 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2036ee02252aa9c2636df8dbfaa1c298b21cf03906444b82293233fc35f7080/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:17 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2036ee02252aa9c2636df8dbfaa1c298b21cf03906444b82293233fc35f7080/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.vltabo supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:17 np0005604790 podman[89227]: 2026-02-02 09:40:17.967897824 +0000 UTC m=+0.059147938 container create 99e8f3f51cb5e702daa22b2e3a89f9d7ae571956c4279926b006179640995679 (image=quay.io/ceph/ceph:v19, name=frosty_jones, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb  2 04:40:17 np0005604790 podman[89221]: 2026-02-02 09:40:17.990153177 +0000 UTC m=+0.106445369 container init 2dea2149c1564b5d8c5d4eba925c3597c22745b2ed907d0270e5352d9fbd157f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-rgw-rgw-compute-0-vltabo, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:40:18 np0005604790 podman[89221]: 2026-02-02 09:40:17.908403907 +0000 UTC m=+0.024696119 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:40:18 np0005604790 podman[89221]: 2026-02-02 09:40:18.006139214 +0000 UTC m=+0.122431446 container start 2dea2149c1564b5d8c5d4eba925c3597c22745b2ed907d0270e5352d9fbd157f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-rgw-rgw-compute-0-vltabo, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 04:40:18 np0005604790 bash[89221]: 2dea2149c1564b5d8c5d4eba925c3597c22745b2ed907d0270e5352d9fbd157f
Feb  2 04:40:18 np0005604790 systemd[1]: Started libpod-conmon-99e8f3f51cb5e702daa22b2e3a89f9d7ae571956c4279926b006179640995679.scope.
Feb  2 04:40:18 np0005604790 systemd[1]: Started Ceph rgw.rgw.compute-0.vltabo for d241d473-9fcb-5f74-b163-f1ca4454e7f1.
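[note] cephadm wraps each daemon in a templated systemd unit keyed by cluster FSID. Assuming its usual ceph-<fsid>@<daemon-name>.service naming (the FSID and daemon name here come from the "Started Ceph ..." line above), the new RGW instance can be inspected with:

    systemctl status ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1@rgw.rgw.compute-0.vltabo.service
    journalctl -u ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1@rgw.rgw.compute-0.vltabo.service
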
Feb  2 04:40:18 np0005604790 podman[89227]: 2026-02-02 09:40:17.951219689 +0000 UTC m=+0.042469833 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:40:18 np0005604790 radosgw[89254]: deferred set uid:gid to 167:167 (ceph:ceph)
Feb  2 04:40:18 np0005604790 radosgw[89254]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process radosgw, pid 2
Feb  2 04:40:18 np0005604790 radosgw[89254]: framework: beast
Feb  2 04:40:18 np0005604790 radosgw[89254]: framework conf key: endpoint, val: 192.168.122.100:8082
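[note] The two "framework" lines are radosgw echoing its frontend configuration at startup. The beast frontend and the 192.168.122.100:8082 endpoint are from the log; expressed as a minimal ceph.conf-style sketch (cephadm normally stores this via "ceph config" from the service spec rather than in a conf file, so the section form below is illustrative):

    [client.rgw.rgw.compute-0.vltabo]
    rgw_frontends = beast endpoint=192.168.122.100:8082
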
Feb  2 04:40:18 np0005604790 radosgw[89254]: init_numa not setting numa affinity
Feb  2 04:40:18 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:40:18 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f933184a277cc72b23aff94902c0c97eb45a7b7b81834965844d8d46a87e45b1/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:18 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f933184a277cc72b23aff94902c0c97eb45a7b7b81834965844d8d46a87e45b1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:18 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f933184a277cc72b23aff94902c0c97eb45a7b7b81834965844d8d46a87e45b1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:18 np0005604790 podman[89227]: 2026-02-02 09:40:18.091804208 +0000 UTC m=+0.183054562 container init 99e8f3f51cb5e702daa22b2e3a89f9d7ae571956c4279926b006179640995679 (image=quay.io/ceph/ceph:v19, name=frosty_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True)
Feb  2 04:40:18 np0005604790 podman[89227]: 2026-02-02 09:40:18.100619473 +0000 UTC m=+0.191869587 container start 99e8f3f51cb5e702daa22b2e3a89f9d7ae571956c4279926b006179640995679 (image=quay.io/ceph/ceph:v19, name=frosty_jones, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:40:18 np0005604790 podman[89227]: 2026-02-02 09:40:18.104042454 +0000 UTC m=+0.195292588 container attach 99e8f3f51cb5e702daa22b2e3a89f9d7ae571956c4279926b006179640995679 (image=quay.io/ceph/ceph:v19, name=frosty_jones, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb  2 04:40:18 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 04:40:18 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:18 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 04:40:18 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:18 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Feb  2 04:40:18 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:18 np0005604790 ceph-mgr[74785]: [progress INFO root] complete: finished ev 4e3b93db-f639-4917-a2f6-3808ae387925 (Updating rgw.rgw deployment (+3 -> 3))
Feb  2 04:40:18 np0005604790 ceph-mgr[74785]: [progress INFO root] Completed event 4e3b93db-f639-4917-a2f6-3808ae387925 (Updating rgw.rgw deployment (+3 -> 3)) in 5 seconds
Feb  2 04:40:18 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Feb  2 04:40:18 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Feb  2 04:40:18 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Feb  2 04:40:18 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:18 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Feb  2 04:40:18 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:18 np0005604790 ceph-mgr[74785]: [progress INFO root] update: starting ev cbe3898e-f88f-4241-ae79-668e44e23b29 (Updating ingress.rgw.default deployment (+4 -> 4))
Feb  2 04:40:18 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.rgw.default/monitor_password}] v 0)
Feb  2 04:40:18 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:18 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.rgw.default.compute-0.avekxu on compute-0
Feb  2 04:40:18 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.rgw.default.compute-0.avekxu on compute-0
Feb  2 04:40:18 np0005604790 ceph-mon[74489]: from='client.? 192.168.122.102:0/1995934692' entity='client.rgw.rgw.compute-2.zjyufj' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Feb  2 04:40:18 np0005604790 ceph-mon[74489]: from='client.? ' entity='client.rgw.rgw.compute-2.zjyufj' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Feb  2 04:40:18 np0005604790 ceph-mon[74489]: from='client.? 192.168.122.101:0/1861488831' entity='client.rgw.rgw.compute-1.ezjvcf' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Feb  2 04:40:18 np0005604790 ceph-mon[74489]: from='client.? ' entity='client.rgw.rgw.compute-1.ezjvcf' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Feb  2 04:40:18 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:18 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:18 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:18 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:18 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:18 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:18 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/compute-2.gzlyac/server_addr}] v 0)
Feb  2 04:40:18 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3639574610' entity='client.admin' 
Feb  2 04:40:18 np0005604790 systemd[1]: libpod-99e8f3f51cb5e702daa22b2e3a89f9d7ae571956c4279926b006179640995679.scope: Deactivated successfully.
Feb  2 04:40:18 np0005604790 podman[89227]: 2026-02-02 09:40:18.506554698 +0000 UTC m=+0.597804852 container died 99e8f3f51cb5e702daa22b2e3a89f9d7ae571956c4279926b006179640995679 (image=quay.io/ceph/ceph:v19, name=frosty_jones, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 04:40:18 np0005604790 systemd[1]: var-lib-containers-storage-overlay-f933184a277cc72b23aff94902c0c97eb45a7b7b81834965844d8d46a87e45b1-merged.mount: Deactivated successfully.
Feb  2 04:40:18 np0005604790 podman[89227]: 2026-02-02 09:40:18.558174225 +0000 UTC m=+0.649424379 container remove 99e8f3f51cb5e702daa22b2e3a89f9d7ae571956c4279926b006179640995679 (image=quay.io/ceph/ceph:v19, name=frosty_jones, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:40:18 np0005604790 systemd[1]: libpod-conmon-99e8f3f51cb5e702daa22b2e3a89f9d7ae571956c4279926b006179640995679.scope: Deactivated successfully.
Feb  2 04:40:18 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Feb  2 04:40:18 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Feb  2 04:40:18 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v99: 195 pgs: 1 unknown, 1 creating+peering, 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb  2 04:40:18 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Feb  2 04:40:18 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.zjyufj' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Feb  2 04:40:18 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.ezjvcf' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Feb  2 04:40:18 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Feb  2 04:40:18 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Feb  2 04:40:18 np0005604790 python3[90008]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d241d473-9fcb-5f74-b163-f1ca4454e7f1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module disable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:40:19 np0005604790 podman[90009]: 2026-02-02 09:40:19.061447585 +0000 UTC m=+0.109229263 container create d64b862598f00e32ad53dfa0a3c90f40146655565b3ccba8d0e595948d81de58 (image=quay.io/ceph/ceph:v19, name=wizardly_curran, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 04:40:19 np0005604790 podman[90009]: 2026-02-02 09:40:18.986873987 +0000 UTC m=+0.034655735 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:40:19 np0005604790 systemd[1]: Started libpod-conmon-d64b862598f00e32ad53dfa0a3c90f40146655565b3ccba8d0e595948d81de58.scope.
Feb  2 04:40:19 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:40:19 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef03d843859c2e9982c45398deae00d89dcb2cd2aae4dab43e76bb775089d259/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:19 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef03d843859c2e9982c45398deae00d89dcb2cd2aae4dab43e76bb775089d259/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:19 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef03d843859c2e9982c45398deae00d89dcb2cd2aae4dab43e76bb775089d259/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:19 np0005604790 podman[90009]: 2026-02-02 09:40:19.164528543 +0000 UTC m=+0.212310261 container init d64b862598f00e32ad53dfa0a3c90f40146655565b3ccba8d0e595948d81de58 (image=quay.io/ceph/ceph:v19, name=wizardly_curran, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Feb  2 04:40:19 np0005604790 podman[90009]: 2026-02-02 09:40:19.175930747 +0000 UTC m=+0.223712445 container start d64b862598f00e32ad53dfa0a3c90f40146655565b3ccba8d0e595948d81de58 (image=quay.io/ceph/ceph:v19, name=wizardly_curran, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Feb  2 04:40:19 np0005604790 podman[90009]: 2026-02-02 09:40:19.179726229 +0000 UTC m=+0.227507987 container attach d64b862598f00e32ad53dfa0a3c90f40146655565b3ccba8d0e595948d81de58 (image=quay.io/ceph/ceph:v19, name=wizardly_curran, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:40:19 np0005604790 ceph-mon[74489]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Feb  2 04:40:19 np0005604790 ceph-mon[74489]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Feb  2 04:40:19 np0005604790 ceph-mon[74489]: Deploying daemon haproxy.rgw.default.compute-0.avekxu on compute-0
Feb  2 04:40:19 np0005604790 ceph-mon[74489]: from='client.? 192.168.122.100:0/3639574610' entity='client.admin' 
Feb  2 04:40:19 np0005604790 ceph-mon[74489]: from='client.? ' entity='client.rgw.rgw.compute-2.zjyufj' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Feb  2 04:40:19 np0005604790 ceph-mon[74489]: from='client.? ' entity='client.rgw.rgw.compute-1.ezjvcf' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Feb  2 04:40:19 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module disable", "module": "dashboard"} v 0)
Feb  2 04:40:19 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2604604119' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Feb  2 04:40:19 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Feb  2 04:40:19 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Feb  2 04:40:19 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Feb  2 04:40:19 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Feb  2 04:40:19 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Feb  2 04:40:19 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Feb  2 04:40:19 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2805705687' entity='client.rgw.rgw.compute-0.vltabo' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Feb  2 04:40:19 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Feb  2 04:40:19 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.zjyufj' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Feb  2 04:40:19 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 40 pg[10.0( empty local-lis/les=0/0 n=0 ec=40/40 lis/c=0/0 les/c/f=0/0/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:19 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Feb  2 04:40:19 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.ezjvcf' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
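[note] Each radosgw instance tags the pools it creates at startup, which is why the same application-enable command is dispatched once per gateway (compute-0, compute-1, compute-2) and why each shows the audit pair "dispatch" then "finished". In plain CLI form the mon_command above is equivalent to:

    ceph osd pool application enable default.rgw.control rgw

This race between pool creation and tagging is the usual cause of the transient POOL_APP_NOT_ENABLED health warning logged at 04:40:19.
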
Feb  2 04:40:20 np0005604790 ceph-mgr[74785]: [progress INFO root] Writing back 12 completed events
Feb  2 04:40:20 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Feb  2 04:40:20 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:20 np0005604790 ceph-mon[74489]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Feb  2 04:40:20 np0005604790 ceph-mon[74489]: from='client.? 192.168.122.100:0/2604604119' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Feb  2 04:40:20 np0005604790 ceph-mon[74489]: from='client.? 192.168.122.100:0/2805705687' entity='client.rgw.rgw.compute-0.vltabo' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Feb  2 04:40:20 np0005604790 ceph-mon[74489]: from='client.? 192.168.122.102:0/1995934692' entity='client.rgw.rgw.compute-2.zjyufj' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Feb  2 04:40:20 np0005604790 ceph-mon[74489]: from='client.? ' entity='client.rgw.rgw.compute-2.zjyufj' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Feb  2 04:40:20 np0005604790 ceph-mon[74489]: from='client.? 192.168.122.101:0/1861488831' entity='client.rgw.rgw.compute-1.ezjvcf' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Feb  2 04:40:20 np0005604790 ceph-mon[74489]: from='client.? ' entity='client.rgw.rgw.compute-1.ezjvcf' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Feb  2 04:40:20 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:20 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2604604119' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Feb  2 04:40:20 np0005604790 wizardly_curran[90030]: module 'dashboard' is already disabled
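[note] This disable (dispatched 04:40:19, finished 04:40:20, a no-op since the module was already off) and the matching "mgr module enable dashboard" run at 04:40:21 form a disable/enable cycle; bouncing the module is a common way to make the dashboard pick up the server_addr values written earlier. The pair in plain CLI form:

    ceph mgr module disable dashboard
    ceph mgr module enable dashboard
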
Feb  2 04:40:20 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.djvyfo(active, since 2m), standbys: compute-2.gzlyac, compute-1.teascl
Feb  2 04:40:20 np0005604790 systemd[1]: libpod-d64b862598f00e32ad53dfa0a3c90f40146655565b3ccba8d0e595948d81de58.scope: Deactivated successfully.
Feb  2 04:40:20 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 04:40:20 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v102: 196 pgs: 2 unknown, 1 creating+peering, 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb  2 04:40:20 np0005604790 podman[90094]: 2026-02-02 09:40:20.847940985 +0000 UTC m=+0.183914626 container died d64b862598f00e32ad53dfa0a3c90f40146655565b3ccba8d0e595948d81de58 (image=quay.io/ceph/ceph:v19, name=wizardly_curran, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 04:40:20 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 2.0 scrub starts
Feb  2 04:40:20 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 2.0 scrub ok
Feb  2 04:40:20 np0005604790 systemd[1]: var-lib-containers-storage-overlay-ef03d843859c2e9982c45398deae00d89dcb2cd2aae4dab43e76bb775089d259-merged.mount: Deactivated successfully.
Feb  2 04:40:20 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Feb  2 04:40:20 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2805705687' entity='client.rgw.rgw.compute-0.vltabo' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Feb  2 04:40:20 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.zjyufj' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Feb  2 04:40:20 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.ezjvcf' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Feb  2 04:40:20 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Feb  2 04:40:20 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Feb  2 04:40:20 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 41 pg[10.0( empty local-lis/les=40/41 n=0 ec=40/40 lis/c=0/0 les/c/f=0/0/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:21 np0005604790 podman[90094]: 2026-02-02 09:40:21.17363488 +0000 UTC m=+0.509608501 container remove d64b862598f00e32ad53dfa0a3c90f40146655565b3ccba8d0e595948d81de58 (image=quay.io/ceph/ceph:v19, name=wizardly_curran, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:40:21 np0005604790 systemd[1]: libpod-conmon-d64b862598f00e32ad53dfa0a3c90f40146655565b3ccba8d0e595948d81de58.scope: Deactivated successfully.
Feb  2 04:40:21 np0005604790 podman[89976]: 2026-02-02 09:40:21.24974907 +0000 UTC m=+2.550091903 container create 2939cb323f0f3600ea764b42b6dcfa41e7275a4841b3e44b2e4e5b26d994ba3b (image=quay.io/ceph/haproxy:2.3, name=nervous_khorana)
Feb  2 04:40:21 np0005604790 systemd[1]: Started libpod-conmon-2939cb323f0f3600ea764b42b6dcfa41e7275a4841b3e44b2e4e5b26d994ba3b.scope.
Feb  2 04:40:21 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:40:21 np0005604790 podman[89976]: 2026-02-02 09:40:21.298482569 +0000 UTC m=+2.598825422 container init 2939cb323f0f3600ea764b42b6dcfa41e7275a4841b3e44b2e4e5b26d994ba3b (image=quay.io/ceph/haproxy:2.3, name=nervous_khorana)
Feb  2 04:40:21 np0005604790 podman[89976]: 2026-02-02 09:40:21.302976759 +0000 UTC m=+2.603319592 container start 2939cb323f0f3600ea764b42b6dcfa41e7275a4841b3e44b2e4e5b26d994ba3b (image=quay.io/ceph/haproxy:2.3, name=nervous_khorana)
Feb  2 04:40:21 np0005604790 nervous_khorana[90178]: 0 0
Feb  2 04:40:21 np0005604790 systemd[1]: libpod-2939cb323f0f3600ea764b42b6dcfa41e7275a4841b3e44b2e4e5b26d994ba3b.scope: Deactivated successfully.
Feb  2 04:40:21 np0005604790 podman[89976]: 2026-02-02 09:40:21.30715304 +0000 UTC m=+2.607495873 container attach 2939cb323f0f3600ea764b42b6dcfa41e7275a4841b3e44b2e4e5b26d994ba3b (image=quay.io/ceph/haproxy:2.3, name=nervous_khorana)
Feb  2 04:40:21 np0005604790 podman[89976]: 2026-02-02 09:40:21.307438058 +0000 UTC m=+2.607780891 container died 2939cb323f0f3600ea764b42b6dcfa41e7275a4841b3e44b2e4e5b26d994ba3b (image=quay.io/ceph/haproxy:2.3, name=nervous_khorana)
Feb  2 04:40:21 np0005604790 systemd[1]: var-lib-containers-storage-overlay-873c1e8d6aac7b48dfb71ebe3253b5401c616aac746204786e362c6f7d20daa1-merged.mount: Deactivated successfully.
Feb  2 04:40:21 np0005604790 podman[89976]: 2026-02-02 09:40:21.236131716 +0000 UTC m=+2.536474579 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Feb  2 04:40:21 np0005604790 podman[89976]: 2026-02-02 09:40:21.338815535 +0000 UTC m=+2.639158368 container remove 2939cb323f0f3600ea764b42b6dcfa41e7275a4841b3e44b2e4e5b26d994ba3b (image=quay.io/ceph/haproxy:2.3, name=nervous_khorana)
Feb  2 04:40:21 np0005604790 systemd[1]: libpod-conmon-2939cb323f0f3600ea764b42b6dcfa41e7275a4841b3e44b2e4e5b26d994ba3b.scope: Deactivated successfully.
Feb  2 04:40:21 np0005604790 systemd[1]: Reloading.
Feb  2 04:40:21 np0005604790 python3[90219]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d241d473-9fcb-5f74-b163-f1ca4454e7f1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module enable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:40:21 np0005604790 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 04:40:21 np0005604790 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 04:40:21 np0005604790 podman[90255]: 2026-02-02 09:40:21.50927686 +0000 UTC m=+0.034861140 container create 44eddffd33600c33fe3edd00b13cfa8875d16ac16e1fbb2e7363d99a36e01ed4 (image=quay.io/ceph/ceph:v19, name=focused_bouman, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 04:40:21 np0005604790 ceph-mon[74489]: from='client.? 192.168.122.100:0/2604604119' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Feb  2 04:40:21 np0005604790 ceph-mon[74489]: from='client.? 192.168.122.100:0/2805705687' entity='client.rgw.rgw.compute-0.vltabo' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Feb  2 04:40:21 np0005604790 ceph-mon[74489]: from='client.? ' entity='client.rgw.rgw.compute-2.zjyufj' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Feb  2 04:40:21 np0005604790 ceph-mon[74489]: from='client.? ' entity='client.rgw.rgw.compute-1.ezjvcf' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Feb  2 04:40:21 np0005604790 podman[90255]: 2026-02-02 09:40:21.497290661 +0000 UTC m=+0.022874961 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:40:21 np0005604790 systemd[1]: Started libpod-conmon-44eddffd33600c33fe3edd00b13cfa8875d16ac16e1fbb2e7363d99a36e01ed4.scope.
Feb  2 04:40:21 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:40:21 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c65c8c60a4e75a7d98a9e5cdbe75a399a71d0b44877e32c0e229836ba12b83e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:21 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c65c8c60a4e75a7d98a9e5cdbe75a399a71d0b44877e32c0e229836ba12b83e/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:21 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c65c8c60a4e75a7d98a9e5cdbe75a399a71d0b44877e32c0e229836ba12b83e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:21 np0005604790 systemd[1]: Reloading.
Feb  2 04:40:21 np0005604790 podman[90255]: 2026-02-02 09:40:21.681910954 +0000 UTC m=+0.207495314 container init 44eddffd33600c33fe3edd00b13cfa8875d16ac16e1fbb2e7363d99a36e01ed4 (image=quay.io/ceph/ceph:v19, name=focused_bouman, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Feb  2 04:40:21 np0005604790 podman[90255]: 2026-02-02 09:40:21.688966722 +0000 UTC m=+0.214551002 container start 44eddffd33600c33fe3edd00b13cfa8875d16ac16e1fbb2e7363d99a36e01ed4 (image=quay.io/ceph/ceph:v19, name=focused_bouman, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 04:40:21 np0005604790 podman[90255]: 2026-02-02 09:40:21.692478306 +0000 UTC m=+0.218062666 container attach 44eddffd33600c33fe3edd00b13cfa8875d16ac16e1fbb2e7363d99a36e01ed4 (image=quay.io/ceph/ceph:v19, name=focused_bouman, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 04:40:21 np0005604790 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 04:40:21 np0005604790 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 04:40:21 np0005604790 systemd[1]: Starting Ceph haproxy.rgw.default.compute-0.avekxu for d241d473-9fcb-5f74-b163-f1ca4454e7f1...
Feb  2 04:40:21 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 2.d scrub starts
Feb  2 04:40:21 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 2.d scrub ok
Feb  2 04:40:21 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Feb  2 04:40:22 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Feb  2 04:40:22 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Feb  2 04:40:22 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Feb  2 04:40:22 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2805705687' entity='client.rgw.rgw.compute-0.vltabo' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Feb  2 04:40:22 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Feb  2 04:40:22 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.zjyufj' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Feb  2 04:40:22 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module enable", "module": "dashboard"} v 0)
Feb  2 04:40:22 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1965440456' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Feb  2 04:40:22 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Feb  2 04:40:22 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.ezjvcf' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Feb  2 04:40:22 np0005604790 podman[90386]: 2026-02-02 09:40:22.018713956 +0000 UTC m=+0.030188406 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Feb  2 04:40:22 np0005604790 podman[90386]: 2026-02-02 09:40:22.127785024 +0000 UTC m=+0.139259414 container create 19feecaa7fcd517b2bfb973fc8fcf623ad4b0956f16c53292d24fe12f4780190 (image=quay.io/ceph/haproxy:2.3, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-rgw-default-compute-0-avekxu)
Feb  2 04:40:22 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bdd0a853112b3ec875d866cff438e220b7eda68ea9ce4532fce7ea0b4493419/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:22 np0005604790 podman[90386]: 2026-02-02 09:40:22.185316948 +0000 UTC m=+0.196791398 container init 19feecaa7fcd517b2bfb973fc8fcf623ad4b0956f16c53292d24fe12f4780190 (image=quay.io/ceph/haproxy:2.3, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-rgw-default-compute-0-avekxu)
Feb  2 04:40:22 np0005604790 podman[90386]: 2026-02-02 09:40:22.190325042 +0000 UTC m=+0.201799402 container start 19feecaa7fcd517b2bfb973fc8fcf623ad4b0956f16c53292d24fe12f4780190 (image=quay.io/ceph/haproxy:2.3, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-rgw-default-compute-0-avekxu)
Feb  2 04:40:22 np0005604790 bash[90386]: 19feecaa7fcd517b2bfb973fc8fcf623ad4b0956f16c53292d24fe12f4780190
Feb  2 04:40:22 np0005604790 systemd[1]: Started Ceph haproxy.rgw.default.compute-0.avekxu for d241d473-9fcb-5f74-b163-f1ca4454e7f1.
Feb  2 04:40:22 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-rgw-default-compute-0-avekxu[90403]: [NOTICE] 032/094022 (2) : New worker #1 (4) forked
Feb  2 04:40:22 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-rgw-default-compute-0-avekxu[90403]: [WARNING] 032/094022 (4) : Server backend/rgw.rgw.compute-0.vltabo is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Feb  2 04:40:22 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 04:40:22 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:22 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 04:40:22 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:22 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Feb  2 04:40:22 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:22 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.rgw.default.compute-2.txhwfs on compute-2
Feb  2 04:40:22 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.rgw.default.compute-2.txhwfs on compute-2
Feb  2 04:40:22 np0005604790 ceph-mon[74489]: from='client.? 192.168.122.100:0/2805705687' entity='client.rgw.rgw.compute-0.vltabo' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Feb  2 04:40:22 np0005604790 ceph-mon[74489]: from='client.? 192.168.122.102:0/1995934692' entity='client.rgw.rgw.compute-2.zjyufj' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Feb  2 04:40:22 np0005604790 ceph-mon[74489]: from='client.? ' entity='client.rgw.rgw.compute-2.zjyufj' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Feb  2 04:40:22 np0005604790 ceph-mon[74489]: from='client.? 192.168.122.100:0/1965440456' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Feb  2 04:40:22 np0005604790 ceph-mon[74489]: from='client.? 192.168.122.101:0/1861488831' entity='client.rgw.rgw.compute-1.ezjvcf' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Feb  2 04:40:22 np0005604790 ceph-mon[74489]: from='client.? ' entity='client.rgw.rgw.compute-1.ezjvcf' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Feb  2 04:40:22 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:22 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:22 np0005604790 ceph-mon[74489]: from='mgr.14122 192.168.122.100:0/4293432189' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:22 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v105: 197 pgs: 1 unknown, 196 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 9.5 KiB/s rd, 2.5 KiB/s wr, 15 op/s
Feb  2 04:40:22 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-rgw-default-compute-0-avekxu[90403]: [WARNING] 032/094022 (4) : Server backend/rgw.rgw.compute-1.ezjvcf is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Feb  2 04:40:22 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 2.b scrub starts
Feb  2 04:40:22 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 2.b scrub ok
Feb  2 04:40:23 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Feb  2 04:40:23 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2805705687' entity='client.rgw.rgw.compute-0.vltabo' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Feb  2 04:40:23 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.zjyufj' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Feb  2 04:40:23 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.ezjvcf' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Feb  2 04:40:23 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Feb  2 04:40:23 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Feb  2 04:40:23 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Feb  2 04:40:23 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.zjyufj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Feb  2 04:40:23 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Feb  2 04:40:23 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.ezjvcf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Feb  2 04:40:23 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Feb  2 04:40:23 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2805705687' entity='client.rgw.rgw.compute-0.vltabo' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Feb  2 04:40:23 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1965440456' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Feb  2 04:40:23 np0005604790 ceph-mgr[74785]: mgr handle_mgr_map respawning because set of enabled modules changed!
Feb  2 04:40:23 np0005604790 ceph-mgr[74785]: mgr respawn  e: '/usr/bin/ceph-mgr'
Feb  2 04:40:23 np0005604790 ceph-mgr[74785]: mgr respawn  0: '/usr/bin/ceph-mgr'
Feb  2 04:40:23 np0005604790 ceph-mgr[74785]: mgr respawn  1: '-n'
Feb  2 04:40:23 np0005604790 ceph-mgr[74785]: mgr respawn  2: 'mgr.compute-0.djvyfo'
Feb  2 04:40:23 np0005604790 ceph-mgr[74785]: mgr respawn  3: '-f'
Feb  2 04:40:23 np0005604790 ceph-mgr[74785]: mgr respawn  4: '--setuser'
Feb  2 04:40:23 np0005604790 ceph-mgr[74785]: mgr respawn  5: 'ceph'
Feb  2 04:40:23 np0005604790 ceph-mgr[74785]: mgr respawn  6: '--setgroup'
Feb  2 04:40:23 np0005604790 ceph-mgr[74785]: mgr respawn  7: 'ceph'
Feb  2 04:40:23 np0005604790 ceph-mgr[74785]: mgr respawn  8: '--default-log-to-file=false'
Feb  2 04:40:23 np0005604790 ceph-mgr[74785]: mgr respawn  9: '--default-log-to-journald=true'
Feb  2 04:40:23 np0005604790 ceph-mgr[74785]: mgr respawn  10: '--default-log-to-stderr=false'
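[annotation] Enabling the dashboard module changed the set of active mgr modules, so the active mgr respawns itself; the argv dump above (entries 0 through 10) reassembles into the single command line below. This is a reconstruction from the logged arguments for readability, not an additional invocation:

    /usr/bin/ceph-mgr -n mgr.compute-0.djvyfo -f \
        --setuser ceph --setgroup ceph \
        --default-log-to-file=false \
        --default-log-to-journald=true \
        --default-log-to-stderr=false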
Feb  2 04:40:23 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : mgrmap e12: compute-0.djvyfo(active, since 2m), standbys: compute-2.gzlyac, compute-1.teascl
Feb  2 04:40:23 np0005604790 systemd[1]: libpod-44eddffd33600c33fe3edd00b13cfa8875d16ac16e1fbb2e7363d99a36e01ed4.scope: Deactivated successfully.
Feb  2 04:40:23 np0005604790 podman[90255]: 2026-02-02 09:40:23.102958898 +0000 UTC m=+1.628543208 container died 44eddffd33600c33fe3edd00b13cfa8875d16ac16e1fbb2e7363d99a36e01ed4 (image=quay.io/ceph/ceph:v19, name=focused_bouman, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Feb  2 04:40:23 np0005604790 systemd[1]: session-23.scope: Deactivated successfully.
Feb  2 04:40:23 np0005604790 systemd[1]: session-30.scope: Deactivated successfully.
Feb  2 04:40:23 np0005604790 systemd[1]: session-28.scope: Deactivated successfully.
Feb  2 04:40:23 np0005604790 systemd-logind[793]: Session 30 logged out. Waiting for processes to exit.
Feb  2 04:40:23 np0005604790 systemd[1]: session-24.scope: Deactivated successfully.
Feb  2 04:40:23 np0005604790 systemd[1]: session-33.scope: Deactivated successfully.
Feb  2 04:40:23 np0005604790 systemd-logind[793]: Session 23 logged out. Waiting for processes to exit.
Feb  2 04:40:23 np0005604790 systemd[1]: session-33.scope: Consumed 28.842s CPU time.
Feb  2 04:40:23 np0005604790 systemd[1]: session-25.scope: Deactivated successfully.
Feb  2 04:40:23 np0005604790 systemd[1]: session-32.scope: Deactivated successfully.
Feb  2 04:40:23 np0005604790 systemd[1]: session-27.scope: Deactivated successfully.
Feb  2 04:40:23 np0005604790 systemd-logind[793]: Session 28 logged out. Waiting for processes to exit.
Feb  2 04:40:23 np0005604790 systemd-logind[793]: Session 24 logged out. Waiting for processes to exit.
Feb  2 04:40:23 np0005604790 systemd[1]: var-lib-containers-storage-overlay-8c65c8c60a4e75a7d98a9e5cdbe75a399a71d0b44877e32c0e229836ba12b83e-merged.mount: Deactivated successfully.
Feb  2 04:40:23 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ignoring --setuser ceph since I am not root
Feb  2 04:40:23 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ignoring --setgroup ceph since I am not root
Feb  2 04:40:23 np0005604790 systemd-logind[793]: Session 33 logged out. Waiting for processes to exit.
Feb  2 04:40:23 np0005604790 systemd-logind[793]: Session 25 logged out. Waiting for processes to exit.
Feb  2 04:40:23 np0005604790 systemd-logind[793]: Session 32 logged out. Waiting for processes to exit.
Feb  2 04:40:23 np0005604790 systemd-logind[793]: Session 27 logged out. Waiting for processes to exit.
Feb  2 04:40:23 np0005604790 systemd-logind[793]: Removed session 23.
Feb  2 04:40:23 np0005604790 systemd[1]: session-21.scope: Deactivated successfully.
Feb  2 04:40:23 np0005604790 systemd[1]: session-26.scope: Deactivated successfully.
Feb  2 04:40:23 np0005604790 systemd[1]: session-29.scope: Deactivated successfully.
Feb  2 04:40:23 np0005604790 systemd-logind[793]: Session 29 logged out. Waiting for processes to exit.
Feb  2 04:40:23 np0005604790 systemd-logind[793]: Session 21 logged out. Waiting for processes to exit.
Feb  2 04:40:23 np0005604790 systemd[1]: session-31.scope: Deactivated successfully.
Feb  2 04:40:23 np0005604790 systemd-logind[793]: Session 26 logged out. Waiting for processes to exit.
Feb  2 04:40:23 np0005604790 systemd-logind[793]: Session 31 logged out. Waiting for processes to exit.
Feb  2 04:40:23 np0005604790 podman[90255]: 2026-02-02 09:40:23.161203042 +0000 UTC m=+1.686787332 container remove 44eddffd33600c33fe3edd00b13cfa8875d16ac16e1fbb2e7363d99a36e01ed4 (image=quay.io/ceph/ceph:v19, name=focused_bouman, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Feb  2 04:40:23 np0005604790 systemd-logind[793]: Removed session 30.
Feb  2 04:40:23 np0005604790 ceph-mgr[74785]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Feb  2 04:40:23 np0005604790 ceph-mgr[74785]: pidfile_write: ignore empty --pid-file
Feb  2 04:40:23 np0005604790 systemd-logind[793]: Removed session 28.
Feb  2 04:40:23 np0005604790 systemd-logind[793]: Removed session 24.
Feb  2 04:40:23 np0005604790 systemd-logind[793]: Removed session 33.
Feb  2 04:40:23 np0005604790 systemd[1]: libpod-conmon-44eddffd33600c33fe3edd00b13cfa8875d16ac16e1fbb2e7363d99a36e01ed4.scope: Deactivated successfully.
Feb  2 04:40:23 np0005604790 systemd-logind[793]: Removed session 25.
Feb  2 04:40:23 np0005604790 systemd-logind[793]: Removed session 32.
Feb  2 04:40:23 np0005604790 systemd-logind[793]: Removed session 27.
Feb  2 04:40:23 np0005604790 systemd-logind[793]: Removed session 21.
Feb  2 04:40:23 np0005604790 systemd-logind[793]: Removed session 26.
Feb  2 04:40:23 np0005604790 systemd-logind[793]: Removed session 29.
Feb  2 04:40:23 np0005604790 systemd-logind[793]: Removed session 31.
Feb  2 04:40:23 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'alerts'
Feb  2 04:40:23 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:40:23.270+0000 7fbbcadb4140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Feb  2 04:40:23 np0005604790 ceph-mgr[74785]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Feb  2 04:40:23 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'balancer'
Feb  2 04:40:23 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:40:23.341+0000 7fbbcadb4140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Feb  2 04:40:23 np0005604790 ceph-mgr[74785]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Feb  2 04:40:23 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'cephadm'
Feb  2 04:40:23 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-rgw-default-compute-0-avekxu[90403]: [WARNING] 032/094023 (4) : Server backend/rgw.rgw.compute-2.zjyufj is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Feb  2 04:40:23 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-rgw-default-compute-0-avekxu[90403]: [NOTICE] 032/094023 (4) : haproxy version is 2.3.17-d1c9119
Feb  2 04:40:23 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-rgw-default-compute-0-avekxu[90403]: [NOTICE] 032/094023 (4) : path to executable is /usr/local/sbin/haproxy
Feb  2 04:40:23 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-rgw-default-compute-0-avekxu[90403]: [ALERT] 032/094023 (4) : backend 'backend' has no server available!
Feb  2 04:40:23 np0005604790 python3[90475]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d241d473-9fcb-5f74-b163-f1ca4454e7f1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-grafana-api-username admin _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
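[annotation] The _raw_params value above is one long shell command; re-wrapped here for readability (same command, only line breaks added), it runs a throwaway ceph container against the local cluster config to set the dashboard's Grafana API username:

    podman run --rm --net=host --ipc=host --interactive \
        --volume /etc/ceph:/etc/ceph:z \
        --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
        --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z \
        --entrypoint ceph quay.io/ceph/ceph:v19 \
        --fsid d241d473-9fcb-5f74-b163-f1ca4454e7f1 \
        -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        dashboard set-grafana-api-username admin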
Feb  2 04:40:23 np0005604790 podman[90476]: 2026-02-02 09:40:23.633857056 +0000 UTC m=+0.054411452 container create 0f491eeeffc4790c78f5b481558ca67f9140a686153c121337baac66a4956160 (image=quay.io/ceph/ceph:v19, name=condescending_lichterman, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:40:23 np0005604790 systemd[1]: Started libpod-conmon-0f491eeeffc4790c78f5b481558ca67f9140a686153c121337baac66a4956160.scope.
Feb  2 04:40:23 np0005604790 ceph-mon[74489]: from='client.? 192.168.122.100:0/2805705687' entity='client.rgw.rgw.compute-0.vltabo' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Feb  2 04:40:23 np0005604790 ceph-mon[74489]: from='client.? ' entity='client.rgw.rgw.compute-2.zjyufj' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Feb  2 04:40:23 np0005604790 ceph-mon[74489]: from='client.? ' entity='client.rgw.rgw.compute-1.ezjvcf' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Feb  2 04:40:23 np0005604790 ceph-mon[74489]: from='client.? 192.168.122.101:0/1861488831' entity='client.rgw.rgw.compute-1.ezjvcf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Feb  2 04:40:23 np0005604790 ceph-mon[74489]: from='client.? 192.168.122.102:0/1995934692' entity='client.rgw.rgw.compute-2.zjyufj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Feb  2 04:40:23 np0005604790 ceph-mon[74489]: from='client.? ' entity='client.rgw.rgw.compute-2.zjyufj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Feb  2 04:40:23 np0005604790 ceph-mon[74489]: from='client.? ' entity='client.rgw.rgw.compute-1.ezjvcf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Feb  2 04:40:23 np0005604790 ceph-mon[74489]: from='client.? 192.168.122.100:0/2805705687' entity='client.rgw.rgw.compute-0.vltabo' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Feb  2 04:40:23 np0005604790 ceph-mon[74489]: from='client.? 192.168.122.100:0/1965440456' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Feb  2 04:40:23 np0005604790 podman[90476]: 2026-02-02 09:40:23.604496423 +0000 UTC m=+0.025050819 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:40:23 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:40:23 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78624327a241f62c6974821efc2837410311ce08aeb4e5a8bbfc65bd38bcf3a6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:23 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78624327a241f62c6974821efc2837410311ce08aeb4e5a8bbfc65bd38bcf3a6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:23 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78624327a241f62c6974821efc2837410311ce08aeb4e5a8bbfc65bd38bcf3a6/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:23 np0005604790 podman[90476]: 2026-02-02 09:40:23.740248113 +0000 UTC m=+0.160802529 container init 0f491eeeffc4790c78f5b481558ca67f9140a686153c121337baac66a4956160 (image=quay.io/ceph/ceph:v19, name=condescending_lichterman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 04:40:23 np0005604790 podman[90476]: 2026-02-02 09:40:23.746736376 +0000 UTC m=+0.167290782 container start 0f491eeeffc4790c78f5b481558ca67f9140a686153c121337baac66a4956160 (image=quay.io/ceph/ceph:v19, name=condescending_lichterman, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Feb  2 04:40:23 np0005604790 podman[90476]: 2026-02-02 09:40:23.754086172 +0000 UTC m=+0.174640578 container attach 0f491eeeffc4790c78f5b481558ca67f9140a686153c121337baac66a4956160 (image=quay.io/ceph/ceph:v19, name=condescending_lichterman, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0)
Feb  2 04:40:23 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 2.f scrub starts
Feb  2 04:40:23 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 2.f scrub ok
Feb  2 04:40:24 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'crash'
Feb  2 04:40:24 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Feb  2 04:40:24 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.zjyufj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Feb  2 04:40:24 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.ezjvcf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Feb  2 04:40:24 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2805705687' entity='client.rgw.rgw.compute-0.vltabo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Feb  2 04:40:24 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Feb  2 04:40:24 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Feb  2 04:40:24 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:40:24.100+0000 7fbbcadb4140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Feb  2 04:40:24 np0005604790 ceph-mgr[74785]: mgr[py] Module crash has missing NOTIFY_TYPES member
Feb  2 04:40:24 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'dashboard'
Feb  2 04:40:24 np0005604790 radosgw[89254]: v1 topic migration: starting v1 topic migration..
Feb  2 04:40:24 np0005604790 radosgw[89254]: LDAP not started since no server URIs were provided in the configuration.
Feb  2 04:40:24 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-rgw-rgw-compute-0-vltabo[89247]: 2026-02-02T09:40:24.533+0000 7f136d26a980 -1 LDAP not started since no server URIs were provided in the configuration.
Feb  2 04:40:24 np0005604790 radosgw[89254]: v1 topic migration: finished v1 topic migration
Feb  2 04:40:24 np0005604790 radosgw[89254]: framework: beast
Feb  2 04:40:24 np0005604790 radosgw[89254]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Feb  2 04:40:24 np0005604790 radosgw[89254]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Feb  2 04:40:24 np0005604790 radosgw[89254]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Feb  2 04:40:24 np0005604790 radosgw[89254]: starting handler: beast
Feb  2 04:40:24 np0005604790 radosgw[89254]: set uid:gid to 167:167 (ceph:ceph)
Feb  2 04:40:24 np0005604790 radosgw[89254]: mgrc service_daemon_register rgw.14388 metadata {arch=x86_64,ceph_release=squid,ceph_version=ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable),ceph_version_short=19.2.3,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.vltabo,kernel_description=#1 SMP PREEMPT_DYNAMIC Thu Jan 22 12:30:22 UTC 2026,kernel_version=5.14.0-665.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864292,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=d5604b0e-c827-4596-94de-7709c44354e7,zone_name=default,zonegroup_id=d74d963d-58da-4c60-ad13-18a6b0033c09,zonegroup_name=default}
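[annotation] service_daemon_register publishes the daemon metadata above (version, container image, beast frontend endpoint, zone) into the mgr's service map, where it can be read back at any time. A sketch using the standard command (output layout varies by release):

    # Dump the mgr service map, including the rgw.compute-0.vltabo entry
    ceph service dump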
Feb  2 04:40:24 np0005604790 radosgw[89254]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Feb  2 04:40:24 np0005604790 radosgw[89254]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Feb  2 04:40:24 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'devicehealth'
Feb  2 04:40:24 np0005604790 radosgw[89254]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Feb  2 04:40:24 np0005604790 radosgw[89254]: INFO: RGWReshardLock::lock found lock on reshard.0000000006 to be held by another RGW process; skipping for now
Feb  2 04:40:24 np0005604790 radosgw[89254]: INFO: RGWReshardLock::lock found lock on reshard.0000000008 to be held by another RGW process; skipping for now
Feb  2 04:40:24 np0005604790 ceph-mon[74489]: from='client.? ' entity='client.rgw.rgw.compute-2.zjyufj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Feb  2 04:40:24 np0005604790 ceph-mon[74489]: from='client.? ' entity='client.rgw.rgw.compute-1.ezjvcf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Feb  2 04:40:24 np0005604790 ceph-mon[74489]: from='client.? 192.168.122.100:0/2805705687' entity='client.rgw.rgw.compute-0.vltabo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
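[annotation] As with the application-enable calls earlier, each RGW instance races to apply the same pool setting and the mon serializes them; the CLI equivalent of the mon_command JSON is:

    # Bias the PG autoscaler toward more PGs for this small, hot metadata pool
    ceph osd pool set default.rgw.meta pg_autoscale_bias 4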
Feb  2 04:40:24 np0005604790 radosgw[89254]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Feb  2 04:40:24 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:40:24.729+0000 7fbbcadb4140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Feb  2 04:40:24 np0005604790 ceph-mgr[74785]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Feb  2 04:40:24 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'diskprediction_local'
Feb  2 04:40:24 np0005604790 radosgw[89254]: INFO: RGWReshardLock::lock found lock on reshard.0000000011 to be held by another RGW process; skipping for now
Feb  2 04:40:24 np0005604790 radosgw[89254]: INFO: RGWReshardLock::lock found lock on reshard.0000000012 to be held by another RGW process; skipping for now
Feb  2 04:40:24 np0005604790 radosgw[89254]: INFO: RGWReshardLock::lock found lock on reshard.0000000014 to be held by another RGW process; skipping for now
Feb  2 04:40:24 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Feb  2 04:40:24 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Feb  2 04:40:24 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]:  from numpy import show_config as show_numpy_config
Feb  2 04:40:24 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:40:24.882+0000 7fbbcadb4140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Feb  2 04:40:24 np0005604790 ceph-mgr[74785]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Feb  2 04:40:24 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'influx'
Feb  2 04:40:24 np0005604790 radosgw[89254]: INFO: RGWReshardLock::lock found lock on reshard.0000000015 to be held by another RGW process; skipping for now
Feb  2 04:40:24 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 2.c scrub starts
Feb  2 04:40:24 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:40:24.948+0000 7fbbcadb4140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Feb  2 04:40:24 np0005604790 ceph-mgr[74785]: mgr[py] Module influx has missing NOTIFY_TYPES member
Feb  2 04:40:24 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'insights'
Feb  2 04:40:24 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 2.c scrub ok
Feb  2 04:40:25 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'iostat'
Feb  2 04:40:25 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:40:25.084+0000 7fbbcadb4140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Feb  2 04:40:25 np0005604790 ceph-mgr[74785]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Feb  2 04:40:25 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'k8sevents'
Feb  2 04:40:25 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'localpool'
Feb  2 04:40:25 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'mds_autoscaler'
Feb  2 04:40:25 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'mirroring'
Feb  2 04:40:25 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e44 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 04:40:25 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'nfs'
Feb  2 04:40:25 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 2.e deep-scrub starts
Feb  2 04:40:25 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 2.e deep-scrub ok
Feb  2 04:40:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:40:26.066+0000 7fbbcadb4140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Feb  2 04:40:26 np0005604790 ceph-mgr[74785]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Feb  2 04:40:26 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'orchestrator'
Feb  2 04:40:26 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:40:26 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.002000053s ======
Feb  2 04:40:26 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:40:26.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Feb  2 04:40:26 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:40:26 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:40:26 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:40:26.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
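[annotation] These anonymous "HEAD / HTTP/1.0" requests from 192.168.122.100 and 192.168.122.102 are the haproxy Layer7 health checks probing each RGW backend (compare the DOWN/UP transitions in the haproxy log lines). The probe can be reproduced by hand against the beast endpoint registered in the rgw metadata above, assuming 192.168.122.100:8082 is reachable from where you run it:

    # Reproduce the haproxy L7 check; a healthy RGW answers 200
    curl -I http://192.168.122.100:8082/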
Feb  2 04:40:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:40:26.276+0000 7fbbcadb4140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Feb  2 04:40:26 np0005604790 ceph-mgr[74785]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Feb  2 04:40:26 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'osd_perf_query'
Feb  2 04:40:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:40:26.345+0000 7fbbcadb4140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Feb  2 04:40:26 np0005604790 ceph-mgr[74785]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Feb  2 04:40:26 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'osd_support'
Feb  2 04:40:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:40:26.404+0000 7fbbcadb4140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Feb  2 04:40:26 np0005604790 ceph-mgr[74785]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Feb  2 04:40:26 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'pg_autoscaler'
Feb  2 04:40:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:40:26.490+0000 7fbbcadb4140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Feb  2 04:40:26 np0005604790 ceph-mgr[74785]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Feb  2 04:40:26 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'progress'
Feb  2 04:40:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:40:26.569+0000 7fbbcadb4140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Feb  2 04:40:26 np0005604790 ceph-mgr[74785]: mgr[py] Module progress has missing NOTIFY_TYPES member
Feb  2 04:40:26 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'prometheus'
Feb  2 04:40:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:40:26.859+0000 7fbbcadb4140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Feb  2 04:40:26 np0005604790 ceph-mgr[74785]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Feb  2 04:40:26 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'rbd_support'
Feb  2 04:40:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-rgw-default-compute-0-avekxu[90403]: [WARNING] 032/094026 (4) : Server backend/rgw.rgw.compute-1.ezjvcf is UP, reason: Layer7 check passed, code: 200, check duration: 1ms. 1 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Feb  2 04:40:26 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Feb  2 04:40:26 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Feb  2 04:40:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:40:26.949+0000 7fbbcadb4140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Feb  2 04:40:26 np0005604790 ceph-mgr[74785]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Feb  2 04:40:26 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'restful'
Feb  2 04:40:27 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'rgw'
Feb  2 04:40:27 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:40:27.354+0000 7fbbcadb4140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Feb  2 04:40:27 np0005604790 ceph-mgr[74785]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Feb  2 04:40:27 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'rook'
Feb  2 04:40:27 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-rgw-default-compute-0-avekxu[90403]: [WARNING] 032/094027 (4) : Server backend/rgw.rgw.compute-2.zjyufj is UP, reason: Layer7 check passed, code: 200, check duration: 1ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Feb  2 04:40:27 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 2.12 scrub starts
Feb  2 04:40:27 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 2.12 scrub ok
Feb  2 04:40:27 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:40:27.893+0000 7fbbcadb4140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Feb  2 04:40:27 np0005604790 ceph-mgr[74785]: mgr[py] Module rook has missing NOTIFY_TYPES member
Feb  2 04:40:27 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'selftest'
Feb  2 04:40:27 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:40:27.965+0000 7fbbcadb4140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Feb  2 04:40:27 np0005604790 ceph-mgr[74785]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Feb  2 04:40:27 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'snap_schedule'
Feb  2 04:40:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:40:28.044+0000 7fbbcadb4140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Feb  2 04:40:28 np0005604790 ceph-mgr[74785]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Feb  2 04:40:28 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'stats'
Feb  2 04:40:28 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'status'
Feb  2 04:40:28 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:40:28 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:40:28 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:40:28.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 04:40:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:40:28.181+0000 7fbbcadb4140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Feb  2 04:40:28 np0005604790 ceph-mgr[74785]: mgr[py] Module status has missing NOTIFY_TYPES member
Feb  2 04:40:28 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'telegraf'
Feb  2 04:40:28 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:40:28 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:40:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-rgw-default-compute-0-avekxu[90403]: [WARNING] 032/094028 (4) : Server backend/rgw.rgw.compute-0.vltabo is UP, reason: Layer7 check passed, code: 200, check duration: 1ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Feb  2 04:40:28 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:40:28.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:40:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:40:28.246+0000 7fbbcadb4140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Feb  2 04:40:28 np0005604790 ceph-mgr[74785]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Feb  2 04:40:28 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'telemetry'
Feb  2 04:40:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:40:28.383+0000 7fbbcadb4140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Feb  2 04:40:28 np0005604790 ceph-mgr[74785]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Feb  2 04:40:28 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'test_orchestrator'
Feb  2 04:40:28 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.teascl restarted
Feb  2 04:40:28 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.teascl started
Feb  2 04:40:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:40:28.582+0000 7fbbcadb4140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Feb  2 04:40:28 np0005604790 ceph-mgr[74785]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Feb  2 04:40:28 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'volumes'
Feb  2 04:40:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:40:28.820+0000 7fbbcadb4140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Feb  2 04:40:28 np0005604790 ceph-mgr[74785]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Feb  2 04:40:28 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'zabbix'
Feb  2 04:40:28 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 2.10 scrub starts
Feb  2 04:40:28 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 2.10 scrub ok
Feb  2 04:40:28 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.gzlyac restarted
Feb  2 04:40:28 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.gzlyac started
Feb  2 04:40:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:40:28.882+0000 7fbbcadb4140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Feb  2 04:40:28 np0005604790 ceph-mgr[74785]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Feb  2 04:40:28 np0005604790 ceph-mgr[74785]: ms_deliver_dispatch: unhandled message 0x5590377cf860 mon_map magic: 0 from mon.1 v2:192.168.122.102:3300/0
Feb  2 04:40:28 np0005604790 ceph-mon[74489]: log_channel(cluster) log [INF] : Active manager daemon compute-0.djvyfo restarted
Feb  2 04:40:28 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Feb  2 04:40:28 np0005604790 ceph-mon[74489]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.djvyfo
Feb  2 04:40:28 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Feb  2 04:40:28 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Feb  2 04:40:28 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : mgrmap e13: compute-0.djvyfo(active, starting, since 0.077214s), standbys: compute-1.teascl, compute-2.gzlyac
Feb  2 04:40:28 np0005604790 ceph-mgr[74785]: mgr handle_mgr_map Activating!
Feb  2 04:40:28 np0005604790 ceph-mgr[74785]: mgr handle_mgr_map I am now activating
Feb  2 04:40:28 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Feb  2 04:40:28 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Feb  2 04:40:28 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Feb  2 04:40:28 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Feb  2 04:40:28 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Feb  2 04:40:28 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Feb  2 04:40:28 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.djvyfo", "id": "compute-0.djvyfo"} v 0)
Feb  2 04:40:28 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "mgr metadata", "who": "compute-0.djvyfo", "id": "compute-0.djvyfo"}]: dispatch
Feb  2 04:40:28 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.teascl", "id": "compute-1.teascl"} v 0)
Feb  2 04:40:28 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "mgr metadata", "who": "compute-1.teascl", "id": "compute-1.teascl"}]: dispatch
Feb  2 04:40:28 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.gzlyac", "id": "compute-2.gzlyac"} v 0)
Feb  2 04:40:28 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "mgr metadata", "who": "compute-2.gzlyac", "id": "compute-2.gzlyac"}]: dispatch
Feb  2 04:40:28 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb  2 04:40:28 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Feb  2 04:40:28 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb  2 04:40:28 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Feb  2 04:40:28 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb  2 04:40:28 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Feb  2 04:40:28 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata"} v 0)
Feb  2 04:40:28 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "mds metadata"}]: dispatch
Feb  2 04:40:28 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).mds e1 all = 1
Feb  2 04:40:28 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Feb  2 04:40:28 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd metadata"}]: dispatch
Feb  2 04:40:28 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata"} v 0)
Feb  2 04:40:28 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "mon metadata"}]: dispatch
Feb  2 04:40:29 np0005604790 ceph-mon[74489]: log_channel(cluster) log [INF] : Manager daemon compute-0.djvyfo is now available
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: mgr load Constructed class from module: balancer
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [balancer INFO root] Starting
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [balancer INFO root] Optimize plan auto_2026-02-02_09:40:29
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
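[annotation] The balancer's first pass after the respawn aborts because some PGs are still reporting unknown while the fresh mgr rebuilds its PG state; it retries on a later cycle. Mode and last-run results can be checked with the standard status command, e.g.:

    ceph balancer status    # reports mode (upmap), active flag, last optimization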
Feb  2 04:40:29 np0005604790 ceph-mon[74489]: Active manager daemon compute-0.djvyfo restarted
Feb  2 04:40:29 np0005604790 ceph-mon[74489]: Activating manager daemon compute-0.djvyfo
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: mgr load Constructed class from module: cephadm
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: mgr load Constructed class from module: crash
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: mgr load Constructed class from module: dashboard
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: mgr load Constructed class from module: devicehealth
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO access_control] Loading user roles DB version=2
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO sso] Loading SSO DB version=1
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO root] server: ssl=no host=192.168.122.100 port=8443
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO root] Configured CherryPy, starting engine...
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [devicehealth INFO root] Starting
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: mgr load Constructed class from module: iostat
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: mgr load Constructed class from module: nfs
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: mgr load Constructed class from module: orchestrator
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: mgr load Constructed class from module: pg_autoscaler
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: mgr load Constructed class from module: progress
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [progress INFO root] Loading...
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [progress INFO root] Loaded [<progress.module.GhostEvent object at 0x7fbb51153580>, <progress.module.GhostEvent object at 0x7fbb511537c0>, <progress.module.GhostEvent object at 0x7fbb511537f0>, <progress.module.GhostEvent object at 0x7fbb51153820>, <progress.module.GhostEvent object at 0x7fbb51153850>, <progress.module.GhostEvent object at 0x7fbb51153880>, <progress.module.GhostEvent object at 0x7fbb511538b0>, <progress.module.GhostEvent object at 0x7fbb511538e0>, <progress.module.GhostEvent object at 0x7fbb51153910>, <progress.module.GhostEvent object at 0x7fbb51153940>, <progress.module.GhostEvent object at 0x7fbb51153970>, <progress.module.GhostEvent object at 0x7fbb511539a0>] historic events
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [progress INFO root] Loaded OSDMap, ready.
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] recovery thread starting
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] starting setup
Feb  2 04:40:29 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.djvyfo/mirror_snapshot_schedule"} v 0)
Feb  2 04:40:29 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.djvyfo/mirror_snapshot_schedule"}]: dispatch
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: mgr load Constructed class from module: rbd_support
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: mgr load Constructed class from module: restful
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: mgr load Constructed class from module: status
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [restful INFO root] server_addr: :: server_port: 8003
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: mgr load Constructed class from module: telemetry
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [restful WARNING root] server not running: no certificate configured
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: mgr load Constructed class from module: volumes
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] PerfHandler: starting
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_task_task: vms, start_after=
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_task_task: volumes, start_after=
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_task_task: backups, start_after=
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_task_task: images, start_after=
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] TaskHandler: starting
Feb  2 04:40:29 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.djvyfo/trash_purge_schedule"} v 0)
Feb  2 04:40:29 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.djvyfo/trash_purge_schedule"}]: dispatch
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] setup complete
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFS -> /api/cephfs
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsUi -> /ui-api/cephfs
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolume -> /api/cephfs/subvolume
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeGroups -> /api/cephfs/subvolume/group
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeSnapshots -> /api/cephfs/subvolume/snapshot
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsSnapshotClone -> /api/cephfs/subvolume/snapshot/clone
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSnapshotSchedule -> /api/cephfs/snapshot/schedule
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiUi -> /ui-api/iscsi
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Iscsi -> /api/iscsi
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiTarget -> /api/iscsi/target
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaCluster -> /api/nfs-ganesha/cluster
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaExports -> /api/nfs-ganesha/export
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaUi -> /ui-api/nfs-ganesha
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Orchestrator -> /ui-api/orchestrator
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Service -> /api/service
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroring -> /api/block/mirroring
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringSummary -> /api/block/mirroring/summary
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolMode -> /api/block/mirroring/pool
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolBootstrap -> /api/block/mirroring/pool/{pool_name}/bootstrap
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolPeer -> /api/block/mirroring/pool/{pool_name}/peer
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringStatus -> /ui-api/block/mirroring
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Pool -> /api/pool
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PoolUi -> /ui-api/pool
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RBDPool -> /api/pool
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rbd -> /api/block/image
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdStatus -> /ui-api/block/rbd
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdSnapshot -> /api/block/image/{image_spec}/snap
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdTrash -> /api/block/image/trash
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdNamespace -> /api/block/pool/{pool_name}/namespace
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rgw -> /ui-api/rgw
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteStatus -> /ui-api/rgw/multisite
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteController -> /api/rgw/multisite
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwDaemon -> /api/rgw/daemon
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwSite -> /api/rgw/site
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucket -> /api/rgw/bucket
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucketUi -> /ui-api/rgw/bucket
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwUser -> /api/rgw/user
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClass -> /api/rgw/roles
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClassMetadata -> /ui-api/rgw/roles
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwRealm -> /api/rgw/realm
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZonegroup -> /api/rgw/zonegroup
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZone -> /api/rgw/zone
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Auth -> /api/auth
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClass -> /api/cluster/user
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClassMetadata -> /ui-api/cluster/user
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Cluster -> /api/cluster
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterUpgrade -> /api/cluster/upgrade
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterConfiguration -> /api/cluster_conf
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRule -> /api/crush_rule
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRuleUi -> /ui-api/crush_rule
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Daemon -> /api/daemon
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Docs -> /docs
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfile -> /api/erasure_code_profile
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfileUi -> /ui-api/erasure_code_profile
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackController -> /api/feedback
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackApiController -> /api/feedback/api_key
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackUiController -> /ui-api/feedback/api_key
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FrontendLogging -> /ui-api/logging
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Grafana -> /api/grafana
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Host -> /api/host
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HostUi -> /ui-api/host
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Health -> /api/health
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HomeController -> /
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LangsController -> /ui-api/langs
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LoginController -> /ui-api/login
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Logs -> /api/logs
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrModules -> /api/mgr/module
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Monitor -> /api/monitor
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFGateway -> /api/nvmeof/gateway
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSpdk -> /api/nvmeof/spdk
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSubsystem -> /api/nvmeof/subsystem
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFListener -> /api/nvmeof/subsystem/{nqn}/listener
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFNamespace -> /api/nvmeof/subsystem/{nqn}/namespace
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFHost -> /api/nvmeof/subsystem/{nqn}/host
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFConnection -> /api/nvmeof/subsystem/{nqn}/connection
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFTcpUI -> /ui-api/nvmeof
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Osd -> /api/osd
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdUi -> /ui-api/osd
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdFlagsController -> /api/osd/flags
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MdsPerfCounter -> /api/perf_counters/mds
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MonPerfCounter -> /api/perf_counters/mon
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdPerfCounter -> /api/perf_counters/osd
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwPerfCounter -> /api/perf_counters/rgw
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirrorPerfCounter -> /api/perf_counters/rbd-mirror
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrPerfCounter -> /api/perf_counters/mgr
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: TcmuRunnerPerfCounter -> /api/perf_counters/tcmu-runner
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PerfCounters -> /api/perf_counters
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusReceiver -> /api/prometheus_receiver
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Prometheus -> /api/prometheus
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusNotifications -> /api/prometheus/notifications
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusSettings -> /ui-api/prometheus
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Role -> /api/role
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Scope -> /ui-api/scope
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Saml2 -> /auth/saml2
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Settings -> /api/settings
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: StandardSettings -> /ui-api/standard_settings
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Summary -> /api/summary
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Task -> /api/task
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Telemetry -> /api/telemetry
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: User -> /api/user
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserPasswordPolicy -> /api/user
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserChangePassword -> /api/user/{username}
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeatureTogglesEndpoint -> /api/feature_toggles
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MessageOfTheDay -> /ui-api/motd
Feb  2 04:40:29 np0005604790 systemd-logind[793]: New session 34 of user ceph-admin.
Feb  2 04:40:29 np0005604790 systemd[1]: Started Session 34 of User ceph-admin.
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.module] Engine started.
Feb  2 04:40:29 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Feb  2 04:40:29 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Feb  2 04:40:29 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.14421 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-username", "value": "admin", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 04:40:29 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : mgrmap e14: compute-0.djvyfo(active, since 1.10632s), standbys: compute-1.teascl, compute-2.gzlyac
Feb  2 04:40:29 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_USERNAME}] v 0)
Feb  2 04:40:30 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v3: 197 pgs: 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Feb  2 04:40:30 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:30 np0005604790 condescending_lichterman[90495]: Option GRAFANA_API_USERNAME updated
Feb  2 04:40:30 np0005604790 ceph-mon[74489]: Manager daemon compute-0.djvyfo is now available
Feb  2 04:40:30 np0005604790 ceph-mon[74489]: from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.djvyfo/mirror_snapshot_schedule"}]: dispatch
Feb  2 04:40:30 np0005604790 ceph-mon[74489]: from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.djvyfo/trash_purge_schedule"}]: dispatch
Feb  2 04:40:30 np0005604790 ceph-mon[74489]: from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:30 np0005604790 podman[90815]: 2026-02-02 09:40:30.026324876 +0000 UTC m=+0.072114606 container exec 79ef7165b184aa21ab9e464efe33891b6304e1ba848414549f299d1b301d6783 (image=quay.io/ceph/ceph:v19, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mon-compute-0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True)
Feb  2 04:40:30 np0005604790 systemd[1]: libpod-0f491eeeffc4790c78f5b481558ca67f9140a686153c121337baac66a4956160.scope: Deactivated successfully.
Feb  2 04:40:30 np0005604790 podman[90476]: 2026-02-02 09:40:30.030713921 +0000 UTC m=+6.451268337 container died 0f491eeeffc4790c78f5b481558ca67f9140a686153c121337baac66a4956160 (image=quay.io/ceph/ceph:v19, name=condescending_lichterman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Feb  2 04:40:30 np0005604790 systemd[1]: var-lib-containers-storage-overlay-78624327a241f62c6974821efc2837410311ce08aeb4e5a8bbfc65bd38bcf3a6-merged.mount: Deactivated successfully.
Feb  2 04:40:30 np0005604790 podman[90476]: 2026-02-02 09:40:30.061117576 +0000 UTC m=+6.481671972 container remove 0f491eeeffc4790c78f5b481558ca67f9140a686153c121337baac66a4956160 (image=quay.io/ceph/ceph:v19, name=condescending_lichterman, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 04:40:30 np0005604790 systemd[1]: libpod-conmon-0f491eeeffc4790c78f5b481558ca67f9140a686153c121337baac66a4956160.scope: Deactivated successfully.
Feb  2 04:40:30 np0005604790 podman[90815]: 2026-02-02 09:40:30.112790746 +0000 UTC m=+0.158580476 container exec_died 79ef7165b184aa21ab9e464efe33891b6304e1ba848414549f299d1b301d6783 (image=quay.io/ceph/ceph:v19, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 04:40:30 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:40:30 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:40:30 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:40:30.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 04:40:30 np0005604790 ceph-mgr[74785]: [cephadm INFO cherrypy.error] [02/Feb/2026:09:40:30] ENGINE Bus STARTING
Feb  2 04:40:30 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : [02/Feb/2026:09:40:30] ENGINE Bus STARTING
Feb  2 04:40:30 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:40:30 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:40:30 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:40:30.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:40:30 np0005604790 ceph-mgr[74785]: [cephadm INFO cherrypy.error] [02/Feb/2026:09:40:30] ENGINE Serving on https://192.168.122.100:7150
Feb  2 04:40:30 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : [02/Feb/2026:09:40:30] ENGINE Serving on https://192.168.122.100:7150
Feb  2 04:40:30 np0005604790 ceph-mgr[74785]: [cephadm INFO cherrypy.error] [02/Feb/2026:09:40:30] ENGINE Client ('192.168.122.100', 53184) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Feb  2 04:40:30 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : [02/Feb/2026:09:40:30] ENGINE Client ('192.168.122.100', 53184) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Feb  2 04:40:30 np0005604790 python3[90914]: ansible-ansible.legacy.command Invoked with stdin=/home/grafana_password.yml stdin_add_newline=False _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d241d473-9fcb-5f74-b163-f1ca4454e7f1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-grafana-api-password -i - _uses_shell=False strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None
Feb  2 04:40:30 np0005604790 ceph-mgr[74785]: [cephadm INFO cherrypy.error] [02/Feb/2026:09:40:30] ENGINE Serving on http://192.168.122.100:8765
Feb  2 04:40:30 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : [02/Feb/2026:09:40:30] ENGINE Serving on http://192.168.122.100:8765
Feb  2 04:40:30 np0005604790 ceph-mgr[74785]: [cephadm INFO cherrypy.error] [02/Feb/2026:09:40:30] ENGINE Bus STARTED
Feb  2 04:40:30 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : [02/Feb/2026:09:40:30] ENGINE Bus STARTED
Feb  2 04:40:30 np0005604790 podman[90961]: 2026-02-02 09:40:30.399620134 +0000 UTC m=+0.057721670 container create 69216d2dc4fe68d8acef64d484d1f11a78c8b5ec381d199cb17364c5f835f073 (image=quay.io/ceph/ceph:v19, name=objective_nobel, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb  2 04:40:30 np0005604790 systemd[1]: Started libpod-conmon-69216d2dc4fe68d8acef64d484d1f11a78c8b5ec381d199cb17364c5f835f073.scope.
Feb  2 04:40:30 np0005604790 podman[90961]: 2026-02-02 09:40:30.374430116 +0000 UTC m=+0.032531662 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:40:30 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:40:30 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3701ef991d947aeaead58517a48c98c3d6b555636f4b312ea7ef642c8aede87/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:30 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3701ef991d947aeaead58517a48c98c3d6b555636f4b312ea7ef642c8aede87/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:30 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3701ef991d947aeaead58517a48c98c3d6b555636f4b312ea7ef642c8aede87/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:30 np0005604790 podman[90961]: 2026-02-02 09:40:30.499990278 +0000 UTC m=+0.158091834 container init 69216d2dc4fe68d8acef64d484d1f11a78c8b5ec381d199cb17364c5f835f073 (image=quay.io/ceph/ceph:v19, name=objective_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Feb  2 04:40:30 np0005604790 podman[90961]: 2026-02-02 09:40:30.507299919 +0000 UTC m=+0.165401465 container start 69216d2dc4fe68d8acef64d484d1f11a78c8b5ec381d199cb17364c5f835f073 (image=quay.io/ceph/ceph:v19, name=objective_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Feb  2 04:40:30 np0005604790 podman[90961]: 2026-02-02 09:40:30.511607651 +0000 UTC m=+0.169709217 container attach 69216d2dc4fe68d8acef64d484d1f11a78c8b5ec381d199cb17364c5f835f073 (image=quay.io/ceph/ceph:v19, name=objective_nobel, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1)
Feb  2 04:40:30 np0005604790 podman[91022]: 2026-02-02 09:40:30.586757096 +0000 UTC m=+0.073161164 container exec 19feecaa7fcd517b2bfb973fc8fcf623ad4b0956f16c53292d24fe12f4780190 (image=quay.io/ceph/haproxy:2.3, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-rgw-default-compute-0-avekxu)
Feb  2 04:40:30 np0005604790 podman[91022]: 2026-02-02 09:40:30.59686711 +0000 UTC m=+0.083271128 container exec_died 19feecaa7fcd517b2bfb973fc8fcf623ad4b0956f16c53292d24fe12f4780190 (image=quay.io/ceph/haproxy:2.3, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-rgw-default-compute-0-avekxu)
Feb  2 04:40:30 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Feb  2 04:40:30 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:30 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Feb  2 04:40:30 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 04:40:30 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:30 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:30 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 04:40:30 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:30 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Feb  2 04:40:30 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:30 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Feb  2 04:40:30 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:30 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e45 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 04:40:30 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 2.13 scrub starts
Feb  2 04:40:30 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 2.13 scrub ok
Feb  2 04:40:30 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.14445 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-password", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 04:40:30 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_PASSWORD}] v 0)
Feb  2 04:40:30 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:30 np0005604790 objective_nobel[91007]: Option GRAFANA_API_PASSWORD updated
Feb  2 04:40:30 np0005604790 ceph-mon[74489]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Feb  2 04:40:30 np0005604790 ceph-mon[74489]: log_channel(cluster) log [INF] : Cluster is now healthy
Feb  2 04:40:30 np0005604790 systemd[1]: libpod-69216d2dc4fe68d8acef64d484d1f11a78c8b5ec381d199cb17364c5f835f073.scope: Deactivated successfully.
Feb  2 04:40:30 np0005604790 podman[90961]: 2026-02-02 09:40:30.97598795 +0000 UTC m=+0.634089496 container died 69216d2dc4fe68d8acef64d484d1f11a78c8b5ec381d199cb17364c5f835f073 (image=quay.io/ceph/ceph:v19, name=objective_nobel, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Feb  2 04:40:30 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v4: 197 pgs: 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Feb  2 04:40:30 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0)
Feb  2 04:40:30 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Feb  2 04:40:30 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0)
Feb  2 04:40:30 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Feb  2 04:40:30 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"} v 0)
Feb  2 04:40:30 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Feb  2 04:40:30 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0)
Feb  2 04:40:30 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Feb  2 04:40:30 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0)
Feb  2 04:40:30 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Feb  2 04:40:30 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0)
Feb  2 04:40:30 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Feb  2 04:40:31 np0005604790 systemd[1]: var-lib-containers-storage-overlay-a3701ef991d947aeaead58517a48c98c3d6b555636f4b312ea7ef642c8aede87-merged.mount: Deactivated successfully.
Feb  2 04:40:31 np0005604790 podman[90961]: 2026-02-02 09:40:31.015116513 +0000 UTC m=+0.673218029 container remove 69216d2dc4fe68d8acef64d484d1f11a78c8b5ec381d199cb17364c5f835f073 (image=quay.io/ceph/ceph:v19, name=objective_nobel, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Feb  2 04:40:31 np0005604790 systemd[1]: libpod-conmon-69216d2dc4fe68d8acef64d484d1f11a78c8b5ec381d199cb17364c5f835f073.scope: Deactivated successfully.
Feb  2 04:40:31 np0005604790 ceph-mon[74489]: [02/Feb/2026:09:40:30] ENGINE Bus STARTING
Feb  2 04:40:31 np0005604790 ceph-mon[74489]: from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:31 np0005604790 ceph-mon[74489]: from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:31 np0005604790 ceph-mon[74489]: from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:31 np0005604790 ceph-mon[74489]: from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:31 np0005604790 ceph-mon[74489]: from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:31 np0005604790 ceph-mon[74489]: from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:31 np0005604790 ceph-mon[74489]: from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:31 np0005604790 ceph-mon[74489]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Feb  2 04:40:31 np0005604790 ceph-mon[74489]: Cluster is now healthy
Feb  2 04:40:31 np0005604790 ceph-mon[74489]: from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Feb  2 04:40:31 np0005604790 ceph-mon[74489]: from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Feb  2 04:40:31 np0005604790 ceph-mon[74489]: from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Feb  2 04:40:31 np0005604790 ceph-mon[74489]: from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Feb  2 04:40:31 np0005604790 ceph-mon[74489]: from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Feb  2 04:40:31 np0005604790 ceph-mon[74489]: from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Feb  2 04:40:31 np0005604790 ceph-mgr[74785]: [devicehealth INFO root] Check health
Feb  2 04:40:31 np0005604790 python3[91205]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d241d473-9fcb-5f74-b163-f1ca4454e7f1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-alertmanager-api-host http://192.168.122.100:9093#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:40:31 np0005604790 podman[91256]: 2026-02-02 09:40:31.44579384 +0000 UTC m=+0.040609232 container create 1dbf9857e390e9757425777b1e6e85d5549a0ea4dc03d5e8aaf5e4ec513f265a (image=quay.io/ceph/ceph:v19, name=exciting_galileo, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:40:31 np0005604790 systemd[1]: Started libpod-conmon-1dbf9857e390e9757425777b1e6e85d5549a0ea4dc03d5e8aaf5e4ec513f265a.scope.
Feb  2 04:40:31 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:40:31 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3a1799b2f4118e42374b15fcd845cce5eeb3c0b27efa441cad3464ae8d0caeb/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:31 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3a1799b2f4118e42374b15fcd845cce5eeb3c0b27efa441cad3464ae8d0caeb/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:31 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3a1799b2f4118e42374b15fcd845cce5eeb3c0b27efa441cad3464ae8d0caeb/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:31 np0005604790 podman[91256]: 2026-02-02 09:40:31.426956188 +0000 UTC m=+0.021771600 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:40:31 np0005604790 podman[91256]: 2026-02-02 09:40:31.524232941 +0000 UTC m=+0.119048343 container init 1dbf9857e390e9757425777b1e6e85d5549a0ea4dc03d5e8aaf5e4ec513f265a (image=quay.io/ceph/ceph:v19, name=exciting_galileo, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Feb  2 04:40:31 np0005604790 podman[91256]: 2026-02-02 09:40:31.532963059 +0000 UTC m=+0.127778491 container start 1dbf9857e390e9757425777b1e6e85d5549a0ea4dc03d5e8aaf5e4ec513f265a (image=quay.io/ceph/ceph:v19, name=exciting_galileo, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Feb  2 04:40:31 np0005604790 podman[91256]: 2026-02-02 09:40:31.536711257 +0000 UTC m=+0.131527019 container attach 1dbf9857e390e9757425777b1e6e85d5549a0ea4dc03d5e8aaf5e4ec513f265a (image=quay.io/ceph/ceph:v19, name=exciting_galileo, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:40:31 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Feb  2 04:40:31 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:31 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 04:40:31 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Feb  2 04:40:31 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:31 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 04:40:31 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:31 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Feb  2 04:40:31 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Feb  2 04:40:31 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:31 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Feb  2 04:40:31 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Feb  2 04:40:31 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Feb  2 04:40:31 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Feb  2 04:40:31 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:31 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Feb  2 04:40:31 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:31 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Feb  2 04:40:31 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Feb  2 04:40:31 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 04:40:31 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 04:40:31 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 04:40:31 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb  2 04:40:31 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Feb  2 04:40:31 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Feb  2 04:40:31 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Feb  2 04:40:31 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Feb  2 04:40:31 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Feb  2 04:40:31 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Feb  2 04:40:31 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : mgrmap e15: compute-0.djvyfo(active, since 2s), standbys: compute-1.teascl, compute-2.gzlyac
Feb  2 04:40:31 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Feb  2 04:40:31 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Feb  2 04:40:31 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.14457 -' entity='client.admin' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://192.168.122.100:9093", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 04:40:31 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/ALERTMANAGER_API_HOST}] v 0)
Feb  2 04:40:31 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:31 np0005604790 exciting_galileo[91272]: Option ALERTMANAGER_API_HOST updated
Feb  2 04:40:31 np0005604790 systemd[1]: libpod-1dbf9857e390e9757425777b1e6e85d5549a0ea4dc03d5e8aaf5e4ec513f265a.scope: Deactivated successfully.
Feb  2 04:40:31 np0005604790 podman[91256]: 2026-02-02 09:40:31.943405098 +0000 UTC m=+0.538220490 container died 1dbf9857e390e9757425777b1e6e85d5549a0ea4dc03d5e8aaf5e4ec513f265a (image=quay.io/ceph/ceph:v19, name=exciting_galileo, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2)
Feb  2 04:40:31 np0005604790 systemd[1]: var-lib-containers-storage-overlay-e3a1799b2f4118e42374b15fcd845cce5eeb3c0b27efa441cad3464ae8d0caeb-merged.mount: Deactivated successfully.
Feb  2 04:40:31 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Feb  2 04:40:31 np0005604790 podman[91256]: 2026-02-02 09:40:31.986264138 +0000 UTC m=+0.581079530 container remove 1dbf9857e390e9757425777b1e6e85d5549a0ea4dc03d5e8aaf5e4ec513f265a (image=quay.io/ceph/ceph:v19, name=exciting_galileo, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Feb  2 04:40:31 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Feb  2 04:40:31 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Feb  2 04:40:31 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Feb  2 04:40:31 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Feb  2 04:40:31 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Feb  2 04:40:31 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Feb  2 04:40:31 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Feb  2 04:40:32 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Feb  2 04:40:32 np0005604790 systemd[1]: libpod-conmon-1dbf9857e390e9757425777b1e6e85d5549a0ea4dc03d5e8aaf5e4ec513f265a.scope: Deactivated successfully.
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[5.1b( empty local-lis/les=0/0 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=46) [1] r=0 lpr=46 pi=[31,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[5.f( empty local-lis/les=0/0 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=46) [1] r=0 lpr=46 pi=[31,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[3.5( empty local-lis/les=0/0 n=0 ec=29/15 lis/c=31/31 les/c/f=32/32/0 sis=46) [1] r=0 lpr=46 pi=[31,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[5.2( empty local-lis/les=0/0 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=46) [1] r=0 lpr=46 pi=[31,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[5.7( empty local-lis/les=0/0 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=46) [1] r=0 lpr=46 pi=[31,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[3.3( empty local-lis/les=0/0 n=0 ec=29/15 lis/c=31/31 les/c/f=32/32/0 sis=46) [1] r=0 lpr=46 pi=[31,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[5.1( empty local-lis/les=0/0 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=46) [1] r=0 lpr=46 pi=[31,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[3.a( empty local-lis/les=0/0 n=0 ec=29/15 lis/c=31/31 les/c/f=32/32/0 sis=46) [1] r=0 lpr=46 pi=[31,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[7.1f( empty local-lis/les=33/34 n=0 ec=33/19 lis/c=33/33 les/c/f=34/34/0 sis=46 pruub=11.091526985s) [2] r=-1 lpr=46 pi=[33,46)/1 crt=0'0 mlcod 0'0 active pruub 92.397750854s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[7.1f( empty local-lis/les=33/34 n=0 ec=33/19 lis/c=33/33 les/c/f=34/34/0 sis=46 pruub=11.091498375s) [2] r=-1 lpr=46 pi=[33,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 92.397750854s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[3.d( empty local-lis/les=0/0 n=0 ec=29/15 lis/c=31/31 les/c/f=32/32/0 sis=46) [1] r=0 lpr=46 pi=[31,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[2.19( empty local-lis/les=29/31 n=0 ec=29/14 lis/c=29/29 les/c/f=31/31/0 sis=46 pruub=15.689096451s) [0] r=-1 lpr=46 pi=[29,46)/1 crt=0'0 mlcod 0'0 active pruub 96.995605469s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[7.1d( empty local-lis/les=33/34 n=0 ec=33/19 lis/c=33/33 les/c/f=34/34/0 sis=46 pruub=11.085495949s) [2] r=-1 lpr=46 pi=[33,46)/1 crt=0'0 mlcod 0'0 active pruub 92.392265320s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[7.1d( empty local-lis/les=33/34 n=0 ec=33/19 lis/c=33/33 les/c/f=34/34/0 sis=46 pruub=11.085475922s) [2] r=-1 lpr=46 pi=[33,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 92.392265320s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[2.18( empty local-lis/les=29/31 n=0 ec=29/14 lis/c=29/29 les/c/f=31/31/0 sis=46 pruub=15.688721657s) [2] r=-1 lpr=46 pi=[29,46)/1 crt=0'0 mlcod 0'0 active pruub 96.995544434s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[2.18( empty local-lis/les=29/31 n=0 ec=29/14 lis/c=29/29 les/c/f=31/31/0 sis=46 pruub=15.688703537s) [2] r=-1 lpr=46 pi=[29,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 96.995544434s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[2.15( empty local-lis/les=29/31 n=0 ec=29/14 lis/c=29/29 les/c/f=31/31/0 sis=46 pruub=15.688554764s) [2] r=-1 lpr=46 pi=[29,46)/1 crt=0'0 mlcod 0'0 active pruub 96.995498657s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[2.15( empty local-lis/les=29/31 n=0 ec=29/14 lis/c=29/29 les/c/f=31/31/0 sis=46 pruub=15.688541412s) [2] r=-1 lpr=46 pi=[29,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 96.995498657s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[7.13( empty local-lis/les=33/34 n=0 ec=33/19 lis/c=33/33 les/c/f=34/34/0 sis=46 pruub=11.085277557s) [0] r=-1 lpr=46 pi=[33,46)/1 crt=0'0 mlcod 0'0 active pruub 92.392265320s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[7.13( empty local-lis/les=33/34 n=0 ec=33/19 lis/c=33/33 les/c/f=34/34/0 sis=46 pruub=11.085262299s) [0] r=-1 lpr=46 pi=[33,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 92.392265320s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[7.11( empty local-lis/les=33/34 n=0 ec=33/19 lis/c=33/33 les/c/f=34/34/0 sis=46 pruub=11.090620995s) [2] r=-1 lpr=46 pi=[33,46)/1 crt=0'0 mlcod 0'0 active pruub 92.397720337s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[7.11( empty local-lis/les=33/34 n=0 ec=33/19 lis/c=33/33 les/c/f=34/34/0 sis=46 pruub=11.090606689s) [2] r=-1 lpr=46 pi=[33,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 92.397720337s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[2.13( empty local-lis/les=29/31 n=0 ec=29/14 lis/c=29/29 les/c/f=31/31/0 sis=46 pruub=15.687962532s) [2] r=-1 lpr=46 pi=[29,46)/1 crt=0'0 mlcod 0'0 active pruub 96.995101929s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[2.13( empty local-lis/les=29/31 n=0 ec=29/14 lis/c=29/29 les/c/f=31/31/0 sis=46 pruub=15.687948227s) [2] r=-1 lpr=46 pi=[29,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 96.995101929s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[2.19( empty local-lis/les=29/31 n=0 ec=29/14 lis/c=29/29 les/c/f=31/31/0 sis=46 pruub=15.688710213s) [0] r=-1 lpr=46 pi=[29,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 96.995605469s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[7.16( empty local-lis/les=33/34 n=0 ec=33/19 lis/c=33/33 les/c/f=34/34/0 sis=46 pruub=11.090377808s) [2] r=-1 lpr=46 pi=[33,46)/1 crt=0'0 mlcod 0'0 active pruub 92.397743225s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[7.16( empty local-lis/les=33/34 n=0 ec=33/19 lis/c=33/33 les/c/f=34/34/0 sis=46 pruub=11.090359688s) [2] r=-1 lpr=46 pi=[33,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 92.397743225s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[2.12( empty local-lis/les=29/31 n=0 ec=29/14 lis/c=29/29 les/c/f=31/31/0 sis=46 pruub=15.687541962s) [2] r=-1 lpr=46 pi=[29,46)/1 crt=0'0 mlcod 0'0 active pruub 96.995063782s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[7.14( empty local-lis/les=33/34 n=0 ec=33/19 lis/c=33/33 les/c/f=34/34/0 sis=46 pruub=11.090433121s) [2] r=-1 lpr=46 pi=[33,46)/1 crt=0'0 mlcod 0'0 active pruub 92.397979736s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[2.12( empty local-lis/les=29/31 n=0 ec=29/14 lis/c=29/29 les/c/f=31/31/0 sis=46 pruub=15.687522888s) [2] r=-1 lpr=46 pi=[29,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 96.995063782s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[7.14( empty local-lis/les=33/34 n=0 ec=33/19 lis/c=33/33 les/c/f=34/34/0 sis=46 pruub=11.090420723s) [2] r=-1 lpr=46 pi=[33,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 92.397979736s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[7.a( empty local-lis/les=33/34 n=0 ec=33/19 lis/c=33/33 les/c/f=34/34/0 sis=46 pruub=11.090361595s) [2] r=-1 lpr=46 pi=[33,46)/1 crt=0'0 mlcod 0'0 active pruub 92.398078918s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[7.a( empty local-lis/les=33/34 n=0 ec=33/19 lis/c=33/33 les/c/f=34/34/0 sis=46 pruub=11.090347290s) [2] r=-1 lpr=46 pi=[33,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 92.398078918s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[2.f( empty local-lis/les=29/31 n=0 ec=29/14 lis/c=29/29 les/c/f=31/31/0 sis=46 pruub=15.687178612s) [2] r=-1 lpr=46 pi=[29,46)/1 crt=0'0 mlcod 0'0 active pruub 96.994934082s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[7.10( empty local-lis/les=33/34 n=0 ec=33/19 lis/c=33/33 les/c/f=34/34/0 sis=46 pruub=11.089864731s) [0] r=-1 lpr=46 pi=[33,46)/1 crt=0'0 mlcod 0'0 active pruub 92.397651672s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[2.f( empty local-lis/les=29/31 n=0 ec=29/14 lis/c=29/29 les/c/f=31/31/0 sis=46 pruub=15.687155724s) [2] r=-1 lpr=46 pi=[29,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 96.994934082s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[7.b( empty local-lis/les=33/34 n=0 ec=33/19 lis/c=33/33 les/c/f=34/34/0 sis=46 pruub=11.090285301s) [0] r=-1 lpr=46 pi=[33,46)/1 crt=0'0 mlcod 0'0 active pruub 92.398117065s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[7.10( empty local-lis/les=33/34 n=0 ec=33/19 lis/c=33/33 les/c/f=34/34/0 sis=46 pruub=11.089817047s) [0] r=-1 lpr=46 pi=[33,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 92.397651672s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[2.10( empty local-lis/les=29/31 n=0 ec=29/14 lis/c=29/29 les/c/f=31/31/0 sis=46 pruub=15.687206268s) [2] r=-1 lpr=46 pi=[29,46)/1 crt=0'0 mlcod 0'0 active pruub 96.995147705s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[2.d( empty local-lis/les=29/31 n=0 ec=29/14 lis/c=29/29 les/c/f=31/31/0 sis=46 pruub=15.686706543s) [2] r=-1 lpr=46 pi=[29,46)/1 crt=0'0 mlcod 0'0 active pruub 96.994667053s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[7.b( empty local-lis/les=33/34 n=0 ec=33/19 lis/c=33/33 les/c/f=34/34/0 sis=46 pruub=11.090167999s) [0] r=-1 lpr=46 pi=[33,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 92.398117065s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[2.d( empty local-lis/les=29/31 n=0 ec=29/14 lis/c=29/29 les/c/f=31/31/0 sis=46 pruub=15.686686516s) [2] r=-1 lpr=46 pi=[29,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 96.994667053s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[7.8( empty local-lis/les=33/34 n=0 ec=33/19 lis/c=33/33 les/c/f=34/34/0 sis=46 pruub=11.090141296s) [0] r=-1 lpr=46 pi=[33,46)/1 crt=0'0 mlcod 0'0 active pruub 92.398193359s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[2.e( empty local-lis/les=29/31 n=0 ec=29/14 lis/c=29/29 les/c/f=31/31/0 sis=46 pruub=15.686635971s) [0] r=-1 lpr=46 pi=[29,46)/1 crt=0'0 mlcod 0'0 active pruub 96.994735718s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[7.9( empty local-lis/les=33/34 n=0 ec=33/19 lis/c=33/33 les/c/f=34/34/0 sis=46 pruub=11.090988159s) [0] r=-1 lpr=46 pi=[33,46)/1 crt=0'0 mlcod 0'0 active pruub 92.399101257s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[7.8( empty local-lis/les=33/34 n=0 ec=33/19 lis/c=33/33 les/c/f=34/34/0 sis=46 pruub=11.090121269s) [0] r=-1 lpr=46 pi=[33,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 92.398193359s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[7.9( empty local-lis/les=33/34 n=0 ec=33/19 lis/c=33/33 les/c/f=34/34/0 sis=46 pruub=11.090972900s) [0] r=-1 lpr=46 pi=[33,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 92.399101257s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[2.c( empty local-lis/les=29/31 n=0 ec=29/14 lis/c=29/29 les/c/f=31/31/0 sis=46 pruub=15.686490059s) [2] r=-1 lpr=46 pi=[29,46)/1 crt=0'0 mlcod 0'0 active pruub 96.994659424s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[2.c( empty local-lis/les=29/31 n=0 ec=29/14 lis/c=29/29 les/c/f=31/31/0 sis=46 pruub=15.686469078s) [2] r=-1 lpr=46 pi=[29,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 96.994659424s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[2.e( empty local-lis/les=29/31 n=0 ec=29/14 lis/c=29/29 les/c/f=31/31/0 sis=46 pruub=15.686619759s) [0] r=-1 lpr=46 pi=[29,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 96.994735718s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[2.10( empty local-lis/les=29/31 n=0 ec=29/14 lis/c=29/29 les/c/f=31/31/0 sis=46 pruub=15.687001228s) [2] r=-1 lpr=46 pi=[29,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 96.995147705s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[7.6( empty local-lis/les=33/34 n=0 ec=33/19 lis/c=33/33 les/c/f=34/34/0 sis=46 pruub=11.089838982s) [0] r=-1 lpr=46 pi=[33,46)/1 crt=0'0 mlcod 0'0 active pruub 92.398223877s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[7.e( empty local-lis/les=33/34 n=0 ec=33/19 lis/c=33/33 les/c/f=34/34/0 sis=46 pruub=11.089978218s) [0] r=-1 lpr=46 pi=[33,46)/1 crt=0'0 mlcod 0'0 active pruub 92.398391724s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[2.b( empty local-lis/les=29/31 n=0 ec=29/14 lis/c=29/29 les/c/f=31/31/0 sis=46 pruub=15.686442375s) [2] r=-1 lpr=46 pi=[29,46)/1 crt=0'0 mlcod 0'0 active pruub 96.994682312s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[7.e( empty local-lis/les=33/34 n=0 ec=33/19 lis/c=33/33 les/c/f=34/34/0 sis=46 pruub=11.089962959s) [0] r=-1 lpr=46 pi=[33,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 92.398391724s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[7.6( empty local-lis/les=33/34 n=0 ec=33/19 lis/c=33/33 les/c/f=34/34/0 sis=46 pruub=11.089818001s) [0] r=-1 lpr=46 pi=[33,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 92.398223877s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[7.5( empty local-lis/les=33/34 n=0 ec=33/19 lis/c=33/33 les/c/f=34/34/0 sis=46 pruub=11.089804649s) [2] r=-1 lpr=46 pi=[33,46)/1 crt=0'0 mlcod 0'0 active pruub 92.398300171s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[7.5( empty local-lis/les=33/34 n=0 ec=33/19 lis/c=33/33 les/c/f=34/34/0 sis=46 pruub=11.089773178s) [2] r=-1 lpr=46 pi=[33,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 92.398300171s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[2.b( empty local-lis/les=29/31 n=0 ec=29/14 lis/c=29/29 les/c/f=31/31/0 sis=46 pruub=15.686241150s) [2] r=-1 lpr=46 pi=[29,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 96.994682312s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[2.5( empty local-lis/les=29/31 n=0 ec=29/14 lis/c=29/29 les/c/f=31/31/0 sis=46 pruub=15.682697296s) [2] r=-1 lpr=46 pi=[29,46)/1 crt=0'0 mlcod 0'0 active pruub 96.991310120s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[2.5( empty local-lis/les=29/31 n=0 ec=29/14 lis/c=29/29 les/c/f=31/31/0 sis=46 pruub=15.682674408s) [2] r=-1 lpr=46 pi=[29,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 96.991310120s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[7.4( empty local-lis/les=33/34 n=0 ec=33/19 lis/c=33/33 les/c/f=34/34/0 sis=46 pruub=11.089781761s) [0] r=-1 lpr=46 pi=[33,46)/1 crt=0'0 mlcod 0'0 active pruub 92.398445129s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[7.4( empty local-lis/les=33/34 n=0 ec=33/19 lis/c=33/33 les/c/f=34/34/0 sis=46 pruub=11.089766502s) [0] r=-1 lpr=46 pi=[33,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 92.398445129s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[2.4( empty local-lis/les=29/31 n=0 ec=29/14 lis/c=29/29 les/c/f=31/31/0 sis=46 pruub=15.682385445s) [0] r=-1 lpr=46 pi=[29,46)/1 crt=0'0 mlcod 0'0 active pruub 96.991188049s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[2.4( empty local-lis/les=29/31 n=0 ec=29/14 lis/c=29/29 les/c/f=31/31/0 sis=46 pruub=15.682364464s) [0] r=-1 lpr=46 pi=[29,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 96.991188049s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[2.6( empty local-lis/les=29/31 n=0 ec=29/14 lis/c=29/29 les/c/f=31/31/0 sis=46 pruub=15.682364464s) [0] r=-1 lpr=46 pi=[29,46)/1 crt=0'0 mlcod 0'0 active pruub 96.991302490s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[2.6( empty local-lis/les=29/31 n=0 ec=29/14 lis/c=29/29 les/c/f=31/31/0 sis=46 pruub=15.682245255s) [0] r=-1 lpr=46 pi=[29,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 96.991302490s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[7.2( empty local-lis/les=33/34 n=0 ec=33/19 lis/c=33/33 les/c/f=34/34/0 sis=46 pruub=11.089624405s) [0] r=-1 lpr=46 pi=[33,46)/1 crt=0'0 mlcod 0'0 active pruub 92.398773193s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[2.1( empty local-lis/les=29/31 n=0 ec=29/14 lis/c=29/29 les/c/f=31/31/0 sis=46 pruub=15.682447433s) [0] r=-1 lpr=46 pi=[29,46)/1 crt=0'0 mlcod 0'0 active pruub 96.991401672s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[2.1( empty local-lis/les=29/31 n=0 ec=29/14 lis/c=29/29 les/c/f=31/31/0 sis=46 pruub=15.682214737s) [0] r=-1 lpr=46 pi=[29,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 96.991401672s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[7.2( empty local-lis/les=33/34 n=0 ec=33/19 lis/c=33/33 les/c/f=34/34/0 sis=46 pruub=11.089589119s) [0] r=-1 lpr=46 pi=[33,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 92.398773193s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[2.9( empty local-lis/les=29/31 n=0 ec=29/14 lis/c=29/29 les/c/f=31/31/0 sis=46 pruub=15.681815147s) [0] r=-1 lpr=46 pi=[29,46)/1 crt=0'0 mlcod 0'0 active pruub 96.991081238s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[2.9( empty local-lis/les=29/31 n=0 ec=29/14 lis/c=29/29 les/c/f=31/31/0 sis=46 pruub=15.681801796s) [0] r=-1 lpr=46 pi=[29,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 96.991081238s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[2.a( empty local-lis/les=29/31 n=0 ec=29/14 lis/c=29/29 les/c/f=31/31/0 sis=46 pruub=15.681726456s) [2] r=-1 lpr=46 pi=[29,46)/1 crt=0'0 mlcod 0'0 active pruub 96.991065979s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[7.1e( empty local-lis/les=33/34 n=0 ec=33/19 lis/c=33/33 les/c/f=34/34/0 sis=46 pruub=11.089662552s) [0] r=-1 lpr=46 pi=[33,46)/1 crt=0'0 mlcod 0'0 active pruub 92.399040222s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[2.a( empty local-lis/les=29/31 n=0 ec=29/14 lis/c=29/29 les/c/f=31/31/0 sis=46 pruub=15.681701660s) [2] r=-1 lpr=46 pi=[29,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 96.991065979s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[7.1e( empty local-lis/les=33/34 n=0 ec=33/19 lis/c=33/33 les/c/f=34/34/0 sis=46 pruub=11.089652061s) [0] r=-1 lpr=46 pi=[33,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 92.399040222s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[2.1c( empty local-lis/les=29/31 n=0 ec=29/14 lis/c=29/29 les/c/f=31/31/0 sis=46 pruub=15.681241035s) [2] r=-1 lpr=46 pi=[29,46)/1 crt=0'0 mlcod 0'0 active pruub 96.990722656s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[2.1c( empty local-lis/les=29/31 n=0 ec=29/14 lis/c=29/29 les/c/f=31/31/0 sis=46 pruub=15.681229591s) [2] r=-1 lpr=46 pi=[29,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 96.990722656s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[2.1b( empty local-lis/les=29/31 n=0 ec=29/14 lis/c=29/29 les/c/f=31/31/0 sis=46 pruub=15.681106567s) [2] r=-1 lpr=46 pi=[29,46)/1 crt=0'0 mlcod 0'0 active pruub 96.990638733s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[7.18( empty local-lis/les=33/34 n=0 ec=33/19 lis/c=33/33 les/c/f=34/34/0 sis=46 pruub=11.089301109s) [0] r=-1 lpr=46 pi=[33,46)/1 crt=0'0 mlcod 0'0 active pruub 92.398941040s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[7.3( empty local-lis/les=33/34 n=0 ec=33/19 lis/c=33/33 les/c/f=34/34/0 sis=46 pruub=11.089179993s) [0] r=-1 lpr=46 pi=[33,46)/1 crt=0'0 mlcod 0'0 active pruub 92.398818970s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[7.18( empty local-lis/les=33/34 n=0 ec=33/19 lis/c=33/33 les/c/f=34/34/0 sis=46 pruub=11.089281082s) [0] r=-1 lpr=46 pi=[33,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 92.398941040s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[2.1b( empty local-lis/les=29/31 n=0 ec=29/14 lis/c=29/29 les/c/f=31/31/0 sis=46 pruub=15.681040764s) [2] r=-1 lpr=46 pi=[29,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 96.990638733s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[7.3( empty local-lis/les=33/34 n=0 ec=33/19 lis/c=33/33 les/c/f=34/34/0 sis=46 pruub=11.089156151s) [0] r=-1 lpr=46 pi=[33,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 92.398818970s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[2.1d( empty local-lis/les=29/31 n=0 ec=29/14 lis/c=29/29 les/c/f=31/31/0 sis=46 pruub=15.681702614s) [2] r=-1 lpr=46 pi=[29,46)/1 crt=0'0 mlcod 0'0 active pruub 96.991333008s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[2.1d( empty local-lis/les=29/31 n=0 ec=29/14 lis/c=29/29 les/c/f=31/31/0 sis=46 pruub=15.681509972s) [2] r=-1 lpr=46 pi=[29,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 96.991333008s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[7.1b( empty local-lis/les=33/34 n=0 ec=33/19 lis/c=33/33 les/c/f=34/34/0 sis=46 pruub=11.089106560s) [0] r=-1 lpr=46 pi=[33,46)/1 crt=0'0 mlcod 0'0 active pruub 92.398948669s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[7.1b( empty local-lis/les=33/34 n=0 ec=33/19 lis/c=33/33 les/c/f=34/34/0 sis=46 pruub=11.089091301s) [0] r=-1 lpr=46 pi=[33,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 92.398948669s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[2.1e( empty local-lis/les=29/31 n=0 ec=29/14 lis/c=29/29 les/c/f=31/31/0 sis=46 pruub=15.681381226s) [0] r=-1 lpr=46 pi=[29,46)/1 crt=0'0 mlcod 0'0 active pruub 96.991264343s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[2.1e( empty local-lis/les=29/31 n=0 ec=29/14 lis/c=29/29 les/c/f=31/31/0 sis=46 pruub=15.681359291s) [0] r=-1 lpr=46 pi=[29,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 96.991264343s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[7.f( empty local-lis/les=33/34 n=0 ec=33/19 lis/c=33/33 les/c/f=34/34/0 sis=46 pruub=11.088914871s) [0] r=-1 lpr=46 pi=[33,46)/1 crt=0'0 mlcod 0'0 active pruub 92.398880005s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[2.1f( empty local-lis/les=29/31 n=0 ec=29/14 lis/c=29/29 les/c/f=31/31/0 sis=46 pruub=15.680587769s) [0] r=-1 lpr=46 pi=[29,46)/1 crt=0'0 mlcod 0'0 active pruub 96.990623474s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[5.1f( empty local-lis/les=0/0 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=46) [1] r=0 lpr=46 pi=[31,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[2.1f( empty local-lis/les=29/31 n=0 ec=29/14 lis/c=29/29 les/c/f=31/31/0 sis=46 pruub=15.680561066s) [0] r=-1 lpr=46 pi=[29,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 96.990623474s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[5.11( empty local-lis/les=0/0 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=46) [1] r=0 lpr=46 pi=[31,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[7.f( empty local-lis/les=33/34 n=0 ec=33/19 lis/c=33/33 les/c/f=34/34/0 sis=46 pruub=11.088831902s) [0] r=-1 lpr=46 pi=[33,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 92.398880005s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[3.16( empty local-lis/les=0/0 n=0 ec=29/15 lis/c=31/31 les/c/f=32/32/0 sis=46) [1] r=0 lpr=46 pi=[31,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[5.10( empty local-lis/les=0/0 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=46) [1] r=0 lpr=46 pi=[31,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[5.1c( empty local-lis/les=0/0 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=46) [1] r=0 lpr=46 pi=[31,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[5.15( empty local-lis/les=0/0 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=46) [1] r=0 lpr=46 pi=[31,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[3.14( empty local-lis/les=0/0 n=0 ec=29/15 lis/c=31/31 les/c/f=32/32/0 sis=46) [1] r=0 lpr=46 pi=[31,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[3.13( empty local-lis/les=0/0 n=0 ec=29/15 lis/c=31/31 les/c/f=32/32/0 sis=46) [1] r=0 lpr=46 pi=[31,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[3.10( empty local-lis/les=0/0 n=0 ec=29/15 lis/c=31/31 les/c/f=32/32/0 sis=46) [1] r=0 lpr=46 pi=[31,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[3.f( empty local-lis/les=0/0 n=0 ec=29/15 lis/c=31/31 les/c/f=32/32/0 sis=46) [1] r=0 lpr=46 pi=[31,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[5.16( empty local-lis/les=0/0 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=46) [1] r=0 lpr=46 pi=[31,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[5.18( empty local-lis/les=0/0 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=46) [1] r=0 lpr=46 pi=[31,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[5.9( empty local-lis/les=0/0 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=46) [1] r=0 lpr=46 pi=[31,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[3.1c( empty local-lis/les=0/0 n=0 ec=29/15 lis/c=31/31 les/c/f=32/32/0 sis=46) [1] r=0 lpr=46 pi=[31,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[3.c( empty local-lis/les=0/0 n=0 ec=29/15 lis/c=31/31 les/c/f=32/32/0 sis=46) [1] r=0 lpr=46 pi=[31,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[4.1a( empty local-lis/les=0/0 n=0 ec=31/16 lis/c=31/31 les/c/f=32/32/0 sis=46) [1] r=0 lpr=46 pi=[31,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[4.18( empty local-lis/les=0/0 n=0 ec=31/16 lis/c=31/31 les/c/f=32/32/0 sis=46) [1] r=0 lpr=46 pi=[31,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[6.19( empty local-lis/les=0/0 n=0 ec=33/18 lis/c=33/33 les/c/f=35/35/0 sis=46) [1] r=0 lpr=46 pi=[33,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[4.1b( empty local-lis/les=0/0 n=0 ec=31/16 lis/c=31/31 les/c/f=32/32/0 sis=46) [1] r=0 lpr=46 pi=[31,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[4.e( empty local-lis/les=0/0 n=0 ec=31/16 lis/c=31/31 les/c/f=32/32/0 sis=46) [1] r=0 lpr=46 pi=[31,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[6.d( empty local-lis/les=0/0 n=0 ec=33/18 lis/c=33/33 les/c/f=35/35/0 sis=46) [1] r=0 lpr=46 pi=[33,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[4.5( empty local-lis/les=0/0 n=0 ec=31/16 lis/c=31/31 les/c/f=32/32/0 sis=46) [1] r=0 lpr=46 pi=[31,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[6.7( empty local-lis/les=0/0 n=0 ec=33/18 lis/c=33/33 les/c/f=35/35/0 sis=46) [1] r=0 lpr=46 pi=[33,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[6.1a( empty local-lis/les=0/0 n=0 ec=33/18 lis/c=33/33 les/c/f=35/35/0 sis=46) [1] r=0 lpr=46 pi=[33,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[6.3( empty local-lis/les=0/0 n=0 ec=33/18 lis/c=33/33 les/c/f=35/35/0 sis=46) [1] r=0 lpr=46 pi=[33,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[6.2( empty local-lis/les=0/0 n=0 ec=33/18 lis/c=33/33 les/c/f=35/35/0 sis=46) [1] r=0 lpr=46 pi=[33,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[4.d( empty local-lis/les=0/0 n=0 ec=31/16 lis/c=31/31 les/c/f=32/32/0 sis=46) [1] r=0 lpr=46 pi=[31,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[4.c( empty local-lis/les=0/0 n=0 ec=31/16 lis/c=31/31 les/c/f=32/32/0 sis=46) [1] r=0 lpr=46 pi=[31,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[6.5( empty local-lis/les=0/0 n=0 ec=33/18 lis/c=33/33 les/c/f=35/35/0 sis=46) [1] r=0 lpr=46 pi=[33,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[6.e( empty local-lis/les=0/0 n=0 ec=33/18 lis/c=33/33 les/c/f=35/35/0 sis=46) [1] r=0 lpr=46 pi=[33,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[4.a( empty local-lis/les=0/0 n=0 ec=31/16 lis/c=31/31 les/c/f=32/32/0 sis=46) [1] r=0 lpr=46 pi=[31,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[6.a( empty local-lis/les=0/0 n=0 ec=33/18 lis/c=33/33 les/c/f=35/35/0 sis=46) [1] r=0 lpr=46 pi=[33,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[6.8( empty local-lis/les=0/0 n=0 ec=33/18 lis/c=33/33 les/c/f=35/35/0 sis=46) [1] r=0 lpr=46 pi=[33,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[6.15( empty local-lis/les=0/0 n=0 ec=33/18 lis/c=33/33 les/c/f=35/35/0 sis=46) [1] r=0 lpr=46 pi=[33,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 46 pg[4.13( empty local-lis/les=0/0 n=0 ec=31/16 lis/c=31/31 les/c/f=32/32/0 sis=46) [1] r=0 lpr=46 pi=[31,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:32 np0005604790 ceph-mon[74489]: [02/Feb/2026:09:40:30] ENGINE Serving on https://192.168.122.100:7150
Feb  2 04:40:32 np0005604790 ceph-mon[74489]: [02/Feb/2026:09:40:30] ENGINE Client ('192.168.122.100', 53184) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Feb  2 04:40:32 np0005604790 ceph-mon[74489]: [02/Feb/2026:09:40:30] ENGINE Serving on http://192.168.122.100:8765
Feb  2 04:40:32 np0005604790 ceph-mon[74489]: [02/Feb/2026:09:40:30] ENGINE Bus STARTED
Feb  2 04:40:32 np0005604790 ceph-mon[74489]: from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:32 np0005604790 ceph-mon[74489]: from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:32 np0005604790 ceph-mon[74489]: from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:32 np0005604790 ceph-mon[74489]: from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Feb  2 04:40:32 np0005604790 ceph-mon[74489]: from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:32 np0005604790 ceph-mon[74489]: from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Feb  2 04:40:32 np0005604790 ceph-mon[74489]: from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Feb  2 04:40:32 np0005604790 ceph-mon[74489]: from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:32 np0005604790 ceph-mon[74489]: from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:32 np0005604790 ceph-mon[74489]: from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Feb  2 04:40:32 np0005604790 ceph-mon[74489]: from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb  2 04:40:32 np0005604790 ceph-mon[74489]: from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:32 np0005604790 ceph-mon[74489]: from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Feb  2 04:40:32 np0005604790 ceph-mon[74489]: from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Feb  2 04:40:32 np0005604790 ceph-mon[74489]: from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Feb  2 04:40:32 np0005604790 ceph-mon[74489]: from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Feb  2 04:40:32 np0005604790 ceph-mon[74489]: from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Feb  2 04:40:32 np0005604790 ceph-mon[74489]: from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Feb  2 04:40:32 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:40:32 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:40:32 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:40:32.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:40:32 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:40:32 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:40:32 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:40:32.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:40:32 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/d241d473-9fcb-5f74-b163-f1ca4454e7f1/config/ceph.conf
Feb  2 04:40:32 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/d241d473-9fcb-5f74-b163-f1ca4454e7f1/config/ceph.conf
Feb  2 04:40:32 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/d241d473-9fcb-5f74-b163-f1ca4454e7f1/config/ceph.conf
Feb  2 04:40:32 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/d241d473-9fcb-5f74-b163-f1ca4454e7f1/config/ceph.conf
Feb  2 04:40:32 np0005604790 python3[91506]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d241d473-9fcb-5f74-b163-f1ca4454e7f1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-prometheus-api-host http://192.168.122.100:9092#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:40:32 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/d241d473-9fcb-5f74-b163-f1ca4454e7f1/config/ceph.conf
Feb  2 04:40:32 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/d241d473-9fcb-5f74-b163-f1ca4454e7f1/config/ceph.conf
Feb  2 04:40:32 np0005604790 podman[91574]: 2026-02-02 09:40:32.365794169 +0000 UTC m=+0.046897257 container create 99145266abb75261b7084c0798f2622db7cde526524dcd44552de5e97dd3ab8c (image=quay.io/ceph/ceph:v19, name=pensive_khayyam, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb  2 04:40:32 np0005604790 systemd[1]: Started libpod-conmon-99145266abb75261b7084c0798f2622db7cde526524dcd44552de5e97dd3ab8c.scope.
Feb  2 04:40:32 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:40:32 np0005604790 podman[91574]: 2026-02-02 09:40:32.343050214 +0000 UTC m=+0.024153402 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:40:32 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e8fe78d6f489aa066a55c02c958a4cb19d7e1eec83059c13bf33530c9e4fa1c/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:32 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e8fe78d6f489aa066a55c02c958a4cb19d7e1eec83059c13bf33530c9e4fa1c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:32 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e8fe78d6f489aa066a55c02c958a4cb19d7e1eec83059c13bf33530c9e4fa1c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:32 np0005604790 podman[91574]: 2026-02-02 09:40:32.467467476 +0000 UTC m=+0.148570594 container init 99145266abb75261b7084c0798f2622db7cde526524dcd44552de5e97dd3ab8c (image=quay.io/ceph/ceph:v19, name=pensive_khayyam, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:40:32 np0005604790 podman[91574]: 2026-02-02 09:40:32.473181886 +0000 UTC m=+0.154284964 container start 99145266abb75261b7084c0798f2622db7cde526524dcd44552de5e97dd3ab8c (image=quay.io/ceph/ceph:v19, name=pensive_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Feb  2 04:40:32 np0005604790 podman[91574]: 2026-02-02 09:40:32.476904193 +0000 UTC m=+0.158007311 container attach 99145266abb75261b7084c0798f2622db7cde526524dcd44552de5e97dd3ab8c (image=quay.io/ceph/ceph:v19, name=pensive_khayyam, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Feb  2 04:40:32 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Feb  2 04:40:32 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Feb  2 04:40:32 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Feb  2 04:40:32 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Feb  2 04:40:32 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Feb  2 04:40:32 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Feb  2 04:40:32 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.14463 -' entity='client.admin' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://192.168.122.100:9092", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 04:40:32 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/PROMETHEUS_API_HOST}] v 0)
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Feb  2 04:40:32 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Feb  2 04:40:32 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:32 np0005604790 pensive_khayyam[91635]: Option PROMETHEUS_API_HOST updated
Feb  2 04:40:32 np0005604790 systemd[1]: libpod-99145266abb75261b7084c0798f2622db7cde526524dcd44552de5e97dd3ab8c.scope: Deactivated successfully.
Feb  2 04:40:32 np0005604790 podman[91574]: 2026-02-02 09:40:32.975728432 +0000 UTC m=+0.656831530 container died 99145266abb75261b7084c0798f2622db7cde526524dcd44552de5e97dd3ab8c (image=quay.io/ceph/ceph:v19, name=pensive_khayyam, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 04:40:32 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v6: 197 pgs: 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Feb  2 04:40:32 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Feb  2 04:40:33 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Feb  2 04:40:33 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Feb  2 04:40:33 np0005604790 systemd[1]: var-lib-containers-storage-overlay-5e8fe78d6f489aa066a55c02c958a4cb19d7e1eec83059c13bf33530c9e4fa1c-merged.mount: Deactivated successfully.
Feb  2 04:40:33 np0005604790 ceph-mon[74489]: Updating compute-0:/etc/ceph/ceph.conf
Feb  2 04:40:33 np0005604790 ceph-mon[74489]: Updating compute-1:/etc/ceph/ceph.conf
Feb  2 04:40:33 np0005604790 ceph-mon[74489]: Updating compute-2:/etc/ceph/ceph.conf
Feb  2 04:40:33 np0005604790 ceph-mon[74489]: from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:33 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 47 pg[6.19( empty local-lis/les=46/47 n=0 ec=33/18 lis/c=33/33 les/c/f=35/35/0 sis=46) [1] r=0 lpr=46 pi=[33,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:33 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 47 pg[3.1c( empty local-lis/les=46/47 n=0 ec=29/15 lis/c=31/31 les/c/f=32/32/0 sis=46) [1] r=0 lpr=46 pi=[31,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:33 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 47 pg[4.1b( empty local-lis/les=46/47 n=0 ec=31/16 lis/c=31/31 les/c/f=32/32/0 sis=46) [1] r=0 lpr=46 pi=[31,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:33 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 47 pg[6.1a( empty local-lis/les=46/47 n=0 ec=33/18 lis/c=33/33 les/c/f=35/35/0 sis=46) [1] r=0 lpr=46 pi=[33,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:33 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 47 pg[4.18( empty local-lis/les=46/47 n=0 ec=31/16 lis/c=31/31 les/c/f=32/32/0 sis=46) [1] r=0 lpr=46 pi=[31,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:33 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 47 pg[5.18( empty local-lis/les=46/47 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=46) [1] r=0 lpr=46 pi=[31,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:33 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 47 pg[4.1a( empty local-lis/les=46/47 n=0 ec=31/16 lis/c=31/31 les/c/f=32/32/0 sis=46) [1] r=0 lpr=46 pi=[31,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:33 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 47 pg[5.1b( empty local-lis/les=46/47 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=46) [1] r=0 lpr=46 pi=[31,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:33 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 47 pg[4.c( empty local-lis/les=46/47 n=0 ec=31/16 lis/c=31/31 les/c/f=32/32/0 sis=46) [1] r=0 lpr=46 pi=[31,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:33 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 47 pg[6.e( empty local-lis/les=46/47 n=0 ec=33/18 lis/c=33/33 les/c/f=35/35/0 sis=46) [1] r=0 lpr=46 pi=[33,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:33 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 47 pg[5.f( empty local-lis/les=46/47 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=46) [1] r=0 lpr=46 pi=[31,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:33 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 47 pg[6.3( empty local-lis/les=46/47 n=0 ec=33/18 lis/c=33/33 les/c/f=35/35/0 sis=46) [1] r=0 lpr=46 pi=[33,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:33 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 47 pg[6.d( empty local-lis/les=46/47 n=0 ec=33/18 lis/c=33/33 les/c/f=35/35/0 sis=46) [1] r=0 lpr=46 pi=[33,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:33 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 47 pg[3.5( empty local-lis/les=46/47 n=0 ec=29/15 lis/c=31/31 les/c/f=32/32/0 sis=46) [1] r=0 lpr=46 pi=[31,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:33 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 47 pg[5.1c( empty local-lis/les=46/47 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=46) [1] r=0 lpr=46 pi=[31,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:33 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 47 pg[4.e( empty local-lis/les=46/47 n=0 ec=31/16 lis/c=31/31 les/c/f=32/32/0 sis=46) [1] r=0 lpr=46 pi=[31,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:33 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 47 pg[3.3( empty local-lis/les=46/47 n=0 ec=29/15 lis/c=31/31 les/c/f=32/32/0 sis=46) [1] r=0 lpr=46 pi=[31,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:33 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 47 pg[6.5( empty local-lis/les=46/47 n=0 ec=33/18 lis/c=33/33 les/c/f=35/35/0 sis=46) [1] r=0 lpr=46 pi=[33,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:33 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 47 pg[5.7( empty local-lis/les=46/47 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=46) [1] r=0 lpr=46 pi=[31,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:33 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 47 pg[4.5( empty local-lis/les=46/47 n=0 ec=31/16 lis/c=31/31 les/c/f=32/32/0 sis=46) [1] r=0 lpr=46 pi=[31,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:33 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 47 pg[5.1( empty local-lis/les=46/47 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=46) [1] r=0 lpr=46 pi=[31,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:33 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 47 pg[4.a( empty local-lis/les=46/47 n=0 ec=31/16 lis/c=31/31 les/c/f=32/32/0 sis=46) [1] r=0 lpr=46 pi=[31,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:33 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 47 pg[4.d( empty local-lis/les=46/47 n=0 ec=31/16 lis/c=31/31 les/c/f=32/32/0 sis=46) [1] r=0 lpr=46 pi=[31,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:33 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 47 pg[3.a( empty local-lis/les=46/47 n=0 ec=29/15 lis/c=31/31 les/c/f=32/32/0 sis=46) [1] r=0 lpr=46 pi=[31,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:33 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 47 pg[3.d( empty local-lis/les=46/47 n=0 ec=29/15 lis/c=31/31 les/c/f=32/32/0 sis=46) [1] r=0 lpr=46 pi=[31,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:33 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 47 pg[6.8( empty local-lis/les=46/47 n=0 ec=33/18 lis/c=33/33 les/c/f=35/35/0 sis=46) [1] r=0 lpr=46 pi=[33,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:33 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 47 pg[6.7( empty local-lis/les=46/47 n=0 ec=33/18 lis/c=33/33 les/c/f=35/35/0 sis=46) [1] r=0 lpr=46 pi=[33,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:33 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 47 pg[3.f( empty local-lis/les=46/47 n=0 ec=29/15 lis/c=31/31 les/c/f=32/32/0 sis=46) [1] r=0 lpr=46 pi=[31,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:33 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 47 pg[5.9( empty local-lis/les=46/47 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=46) [1] r=0 lpr=46 pi=[31,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:33 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 47 pg[6.a( empty local-lis/les=46/47 n=0 ec=33/18 lis/c=33/33 les/c/f=35/35/0 sis=46) [1] r=0 lpr=46 pi=[33,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:33 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 47 pg[5.16( empty local-lis/les=46/47 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=46) [1] r=0 lpr=46 pi=[31,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:33 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 47 pg[5.2( empty local-lis/les=46/47 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=46) [1] r=0 lpr=46 pi=[31,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:33 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 47 pg[6.15( empty local-lis/les=46/47 n=0 ec=33/18 lis/c=33/33 les/c/f=35/35/0 sis=46) [1] r=0 lpr=46 pi=[33,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:33 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 47 pg[3.c( empty local-lis/les=46/47 n=0 ec=29/15 lis/c=31/31 les/c/f=32/32/0 sis=46) [1] r=0 lpr=46 pi=[31,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:33 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 47 pg[3.14( empty local-lis/les=46/47 n=0 ec=29/15 lis/c=31/31 les/c/f=32/32/0 sis=46) [1] r=0 lpr=46 pi=[31,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:33 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 47 pg[3.13( empty local-lis/les=46/47 n=0 ec=29/15 lis/c=31/31 les/c/f=32/32/0 sis=46) [1] r=0 lpr=46 pi=[31,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:33 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 47 pg[4.13( empty local-lis/les=46/47 n=0 ec=31/16 lis/c=31/31 les/c/f=32/32/0 sis=46) [1] r=0 lpr=46 pi=[31,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:33 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 47 pg[3.16( empty local-lis/les=46/47 n=0 ec=29/15 lis/c=31/31 les/c/f=32/32/0 sis=46) [1] r=0 lpr=46 pi=[31,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:33 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 47 pg[5.11( empty local-lis/les=46/47 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=46) [1] r=0 lpr=46 pi=[31,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:33 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 47 pg[5.10( empty local-lis/les=46/47 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=46) [1] r=0 lpr=46 pi=[31,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:33 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 47 pg[6.2( empty local-lis/les=46/47 n=0 ec=33/18 lis/c=33/33 les/c/f=35/35/0 sis=46) [1] r=0 lpr=46 pi=[33,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:33 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 47 pg[5.1f( empty local-lis/les=46/47 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=46) [1] r=0 lpr=46 pi=[31,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:33 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 47 pg[3.10( empty local-lis/les=46/47 n=0 ec=29/15 lis/c=31/31 les/c/f=32/32/0 sis=46) [1] r=0 lpr=46 pi=[31,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:33 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 47 pg[5.15( empty local-lis/les=46/47 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=46) [1] r=0 lpr=46 pi=[31,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:33 np0005604790 podman[91574]: 2026-02-02 09:40:33.132983553 +0000 UTC m=+0.814086641 container remove 99145266abb75261b7084c0798f2622db7cde526524dcd44552de5e97dd3ab8c (image=quay.io/ceph/ceph:v19, name=pensive_khayyam, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb  2 04:40:33 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : mgrmap e16: compute-0.djvyfo(active, since 4s), standbys: compute-1.teascl, compute-2.gzlyac
Feb  2 04:40:33 np0005604790 systemd[1]: libpod-conmon-99145266abb75261b7084c0798f2622db7cde526524dcd44552de5e97dd3ab8c.scope: Deactivated successfully.
Feb  2 04:40:33 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/d241d473-9fcb-5f74-b163-f1ca4454e7f1/config/ceph.client.admin.keyring
Feb  2 04:40:33 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/d241d473-9fcb-5f74-b163-f1ca4454e7f1/config/ceph.client.admin.keyring
Feb  2 04:40:33 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/d241d473-9fcb-5f74-b163-f1ca4454e7f1/config/ceph.client.admin.keyring
Feb  2 04:40:33 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/d241d473-9fcb-5f74-b163-f1ca4454e7f1/config/ceph.client.admin.keyring
Feb  2 04:40:33 np0005604790 python3[92098]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d241d473-9fcb-5f74-b163-f1ca4454e7f1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-grafana-api-url http://192.168.122.100:3100#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:40:33 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/d241d473-9fcb-5f74-b163-f1ca4454e7f1/config/ceph.client.admin.keyring
Feb  2 04:40:33 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/d241d473-9fcb-5f74-b163-f1ca4454e7f1/config/ceph.client.admin.keyring
Feb  2 04:40:33 np0005604790 podman[92171]: 2026-02-02 09:40:33.493409703 +0000 UTC m=+0.062408571 container create d6f8137284e90fe4191faa33aa302181dc58403eb3d4de26179f3cbe2f323749 (image=quay.io/ceph/ceph:v19, name=gallant_wescoff, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 04:40:33 np0005604790 systemd[1]: Started libpod-conmon-d6f8137284e90fe4191faa33aa302181dc58403eb3d4de26179f3cbe2f323749.scope.
Feb  2 04:40:33 np0005604790 podman[92171]: 2026-02-02 09:40:33.459814795 +0000 UTC m=+0.028813673 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:40:33 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:40:33 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9c9ad8a834a3f75673c9fdeaac0bf9c19b22a12b86fb9d4d6fd4ca98d2af797/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:33 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9c9ad8a834a3f75673c9fdeaac0bf9c19b22a12b86fb9d4d6fd4ca98d2af797/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:33 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9c9ad8a834a3f75673c9fdeaac0bf9c19b22a12b86fb9d4d6fd4ca98d2af797/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:33 np0005604790 podman[92171]: 2026-02-02 09:40:33.608226894 +0000 UTC m=+0.177225842 container init d6f8137284e90fe4191faa33aa302181dc58403eb3d4de26179f3cbe2f323749 (image=quay.io/ceph/ceph:v19, name=gallant_wescoff, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Feb  2 04:40:33 np0005604790 podman[92171]: 2026-02-02 09:40:33.613455901 +0000 UTC m=+0.182454739 container start d6f8137284e90fe4191faa33aa302181dc58403eb3d4de26179f3cbe2f323749 (image=quay.io/ceph/ceph:v19, name=gallant_wescoff, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:40:33 np0005604790 podman[92171]: 2026-02-02 09:40:33.617107356 +0000 UTC m=+0.186106304 container attach d6f8137284e90fe4191faa33aa302181dc58403eb3d4de26179f3cbe2f323749 (image=quay.io/ceph/ceph:v19, name=gallant_wescoff, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Feb  2 04:40:33 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Feb  2 04:40:33 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:33 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Feb  2 04:40:33 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:33 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 04:40:33 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:33 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 04:40:33 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 7.1c deep-scrub starts
Feb  2 04:40:33 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 7.1c deep-scrub ok
Feb  2 04:40:33 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:33 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Feb  2 04:40:33 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:33 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Feb  2 04:40:33 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:33 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 04:40:33 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:33 np0005604790 ceph-mgr[74785]: [progress INFO root] update: starting ev cde8a977-2c3a-45d8-a815-c5f294e35b3c (Updating node-exporter deployment (+3 -> 3))
Feb  2 04:40:33 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Deploying daemon node-exporter.compute-0 on compute-0
Feb  2 04:40:33 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Deploying daemon node-exporter.compute-0 on compute-0
Feb  2 04:40:34 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.14469 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "http://192.168.122.100:3100", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 04:40:34 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_URL}] v 0)
Feb  2 04:40:34 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:34 np0005604790 gallant_wescoff[92227]: Option GRAFANA_API_URL updated
Feb  2 04:40:34 np0005604790 systemd[1]: libpod-d6f8137284e90fe4191faa33aa302181dc58403eb3d4de26179f3cbe2f323749.scope: Deactivated successfully.
Feb  2 04:40:34 np0005604790 podman[92171]: 2026-02-02 09:40:34.038018099 +0000 UTC m=+0.607016977 container died d6f8137284e90fe4191faa33aa302181dc58403eb3d4de26179f3cbe2f323749 (image=quay.io/ceph/ceph:v19, name=gallant_wescoff, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:40:34 np0005604790 systemd[1]: var-lib-containers-storage-overlay-d9c9ad8a834a3f75673c9fdeaac0bf9c19b22a12b86fb9d4d6fd4ca98d2af797-merged.mount: Deactivated successfully.
Feb  2 04:40:34 np0005604790 podman[92171]: 2026-02-02 09:40:34.076668269 +0000 UTC m=+0.645667107 container remove d6f8137284e90fe4191faa33aa302181dc58403eb3d4de26179f3cbe2f323749 (image=quay.io/ceph/ceph:v19, name=gallant_wescoff, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 04:40:34 np0005604790 systemd[1]: libpod-conmon-d6f8137284e90fe4191faa33aa302181dc58403eb3d4de26179f3cbe2f323749.scope: Deactivated successfully.
Feb  2 04:40:34 np0005604790 ceph-mon[74489]: Updating compute-1:/var/lib/ceph/d241d473-9fcb-5f74-b163-f1ca4454e7f1/config/ceph.conf
Feb  2 04:40:34 np0005604790 ceph-mon[74489]: Updating compute-2:/var/lib/ceph/d241d473-9fcb-5f74-b163-f1ca4454e7f1/config/ceph.conf
Feb  2 04:40:34 np0005604790 ceph-mon[74489]: Updating compute-0:/var/lib/ceph/d241d473-9fcb-5f74-b163-f1ca4454e7f1/config/ceph.conf
Feb  2 04:40:34 np0005604790 ceph-mon[74489]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Feb  2 04:40:34 np0005604790 ceph-mon[74489]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Feb  2 04:40:34 np0005604790 ceph-mon[74489]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Feb  2 04:40:34 np0005604790 ceph-mon[74489]: from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:34 np0005604790 ceph-mon[74489]: from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:34 np0005604790 ceph-mon[74489]: from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:34 np0005604790 ceph-mon[74489]: from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:34 np0005604790 ceph-mon[74489]: from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:34 np0005604790 ceph-mon[74489]: from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:34 np0005604790 ceph-mon[74489]: from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:34 np0005604790 ceph-mon[74489]: from='mgr.24202 192.168.122.100:0/2430780993' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:34 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:40:34 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:40:34 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:40:34.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:40:34 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:40:34 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:40:34 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:40:34.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:40:34 np0005604790 python3[92448]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d241d473-9fcb-5f74-b163-f1ca4454e7f1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module disable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:40:34 np0005604790 podman[92477]: 2026-02-02 09:40:34.480033253 +0000 UTC m=+0.055306377 container create aa4e983b7f4d71cc3f47fedb7f5a11a05ed9549a6f9fe8f941c48e788549f4ce (image=quay.io/ceph/ceph:v19, name=fervent_noyce, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 04:40:34 np0005604790 systemd[1]: Started libpod-conmon-aa4e983b7f4d71cc3f47fedb7f5a11a05ed9549a6f9fe8f941c48e788549f4ce.scope.
Feb  2 04:40:34 np0005604790 systemd[1]: Reloading.
Feb  2 04:40:34 np0005604790 podman[92477]: 2026-02-02 09:40:34.458404878 +0000 UTC m=+0.033678012 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:40:34 np0005604790 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 04:40:34 np0005604790 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 04:40:34 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:40:34 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a78faba7b1e54f5a3c96e31fe1fe3d91cdf794bd9b38e79ea3796c58c8b001e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:34 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a78faba7b1e54f5a3c96e31fe1fe3d91cdf794bd9b38e79ea3796c58c8b001e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:34 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a78faba7b1e54f5a3c96e31fe1fe3d91cdf794bd9b38e79ea3796c58c8b001e/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:34 np0005604790 podman[92477]: 2026-02-02 09:40:34.7882862 +0000 UTC m=+0.363559354 container init aa4e983b7f4d71cc3f47fedb7f5a11a05ed9549a6f9fe8f941c48e788549f4ce (image=quay.io/ceph/ceph:v19, name=fervent_noyce, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid)
Feb  2 04:40:34 np0005604790 podman[92477]: 2026-02-02 09:40:34.797210694 +0000 UTC m=+0.372483828 container start aa4e983b7f4d71cc3f47fedb7f5a11a05ed9549a6f9fe8f941c48e788549f4ce (image=quay.io/ceph/ceph:v19, name=fervent_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:40:34 np0005604790 podman[92477]: 2026-02-02 09:40:34.802187384 +0000 UTC m=+0.377460528 container attach aa4e983b7f4d71cc3f47fedb7f5a11a05ed9549a6f9fe8f941c48e788549f4ce (image=quay.io/ceph/ceph:v19, name=fervent_noyce, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb  2 04:40:34 np0005604790 systemd[1]: Reloading.
Feb  2 04:40:34 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 7.12 deep-scrub starts
Feb  2 04:40:34 np0005604790 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 04:40:34 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 7.12 deep-scrub ok
Feb  2 04:40:34 np0005604790 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 04:40:34 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v8: 197 pgs: 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 0 B/s wr, 14 op/s
Feb  2 04:40:35 np0005604790 systemd[1]: Starting Ceph node-exporter.compute-0 for d241d473-9fcb-5f74-b163-f1ca4454e7f1...
Feb  2 04:40:35 np0005604790 ceph-mon[74489]: Updating compute-0:/var/lib/ceph/d241d473-9fcb-5f74-b163-f1ca4454e7f1/config/ceph.client.admin.keyring
Feb  2 04:40:35 np0005604790 ceph-mon[74489]: Updating compute-1:/var/lib/ceph/d241d473-9fcb-5f74-b163-f1ca4454e7f1/config/ceph.client.admin.keyring
Feb  2 04:40:35 np0005604790 ceph-mon[74489]: Updating compute-2:/var/lib/ceph/d241d473-9fcb-5f74-b163-f1ca4454e7f1/config/ceph.client.admin.keyring
Feb  2 04:40:35 np0005604790 ceph-mon[74489]: Deploying daemon node-exporter.compute-0 on compute-0
Feb  2 04:40:35 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module disable", "module": "dashboard"} v 0)
Feb  2 04:40:35 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3702593450' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Feb  2 04:40:35 np0005604790 bash[92650]: Trying to pull quay.io/prometheus/node-exporter:v1.7.0...
Feb  2 04:40:35 np0005604790 bash[92650]: Getting image source signatures
Feb  2 04:40:35 np0005604790 bash[92650]: Copying blob sha256:324153f2810a9927fcce320af9e4e291e0b6e805cbdd1f338386c756b9defa24
Feb  2 04:40:35 np0005604790 bash[92650]: Copying blob sha256:455fd88e5221bc1e278ef2d059cd70e4df99a24e5af050ede621534276f6cf9a
Feb  2 04:40:35 np0005604790 bash[92650]: Copying blob sha256:2abcce694348cd2c949c0e98a7400ebdfd8341021bcf6b541bc72033ce982510
Feb  2 04:40:35 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 04:40:35 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Feb  2 04:40:35 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Feb  2 04:40:36 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:40:36 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:40:36 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:40:36.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:40:36 np0005604790 ceph-mon[74489]: from='client.? 192.168.122.100:0/3702593450' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Feb  2 04:40:36 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:40:36 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:40:36 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:40:36.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:40:36 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3702593450' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Feb  2 04:40:36 np0005604790 ceph-mgr[74785]: mgr handle_mgr_map respawning because set of enabled modules changed!
Feb  2 04:40:36 np0005604790 ceph-mgr[74785]: mgr respawn  e: '/usr/bin/ceph-mgr'
Feb  2 04:40:36 np0005604790 ceph-mgr[74785]: mgr respawn  0: '/usr/bin/ceph-mgr'
Feb  2 04:40:36 np0005604790 ceph-mgr[74785]: mgr respawn  1: '-n'
Feb  2 04:40:36 np0005604790 ceph-mgr[74785]: mgr respawn  2: 'mgr.compute-0.djvyfo'
Feb  2 04:40:36 np0005604790 ceph-mgr[74785]: mgr respawn  3: '-f'
Feb  2 04:40:36 np0005604790 ceph-mgr[74785]: mgr respawn  4: '--setuser'
Feb  2 04:40:36 np0005604790 ceph-mgr[74785]: mgr respawn  5: 'ceph'
Feb  2 04:40:36 np0005604790 ceph-mgr[74785]: mgr respawn  6: '--setgroup'
Feb  2 04:40:36 np0005604790 ceph-mgr[74785]: mgr respawn  7: 'ceph'
Feb  2 04:40:36 np0005604790 ceph-mgr[74785]: mgr respawn  8: '--default-log-to-file=false'
Feb  2 04:40:36 np0005604790 ceph-mgr[74785]: mgr respawn  9: '--default-log-to-journald=true'
Feb  2 04:40:36 np0005604790 ceph-mgr[74785]: mgr respawn  10: '--default-log-to-stderr=false'
Feb  2 04:40:36 np0005604790 ceph-mgr[74785]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Feb  2 04:40:36 np0005604790 ceph-mgr[74785]: mgr respawn  exe_path /proc/self/exe
Feb  2 04:40:36 np0005604790 systemd[1]: libpod-aa4e983b7f4d71cc3f47fedb7f5a11a05ed9549a6f9fe8f941c48e788549f4ce.scope: Deactivated successfully.
Feb  2 04:40:36 np0005604790 podman[92477]: 2026-02-02 09:40:36.247158315 +0000 UTC m=+1.822431409 container died aa4e983b7f4d71cc3f47fedb7f5a11a05ed9549a6f9fe8f941c48e788549f4ce (image=quay.io/ceph/ceph:v19, name=fervent_noyce, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Feb  2 04:40:36 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : mgrmap e17: compute-0.djvyfo(active, since 7s), standbys: compute-1.teascl, compute-2.gzlyac
Feb  2 04:40:36 np0005604790 systemd-logind[793]: Session 34 logged out. Waiting for processes to exit.
Feb  2 04:40:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ignoring --setuser ceph since I am not root
Feb  2 04:40:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ignoring --setgroup ceph since I am not root
Feb  2 04:40:36 np0005604790 ceph-mgr[74785]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Feb  2 04:40:36 np0005604790 ceph-mgr[74785]: pidfile_write: ignore empty --pid-file
Feb  2 04:40:36 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'alerts'
Feb  2 04:40:36 np0005604790 systemd[1]: var-lib-containers-storage-overlay-5a78faba7b1e54f5a3c96e31fe1fe3d91cdf794bd9b38e79ea3796c58c8b001e-merged.mount: Deactivated successfully.
Feb  2 04:40:36 np0005604790 bash[92650]: Copying config sha256:72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e
Feb  2 04:40:36 np0005604790 bash[92650]: Writing manifest to image destination
Feb  2 04:40:36 np0005604790 podman[92477]: 2026-02-02 09:40:36.403300876 +0000 UTC m=+1.978573990 container remove aa4e983b7f4d71cc3f47fedb7f5a11a05ed9549a6f9fe8f941c48e788549f4ce (image=quay.io/ceph/ceph:v19, name=fervent_noyce, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Feb  2 04:40:36 np0005604790 systemd[1]: libpod-conmon-aa4e983b7f4d71cc3f47fedb7f5a11a05ed9549a6f9fe8f941c48e788549f4ce.scope: Deactivated successfully.
Feb  2 04:40:36 np0005604790 podman[92650]: 2026-02-02 09:40:36.435107878 +0000 UTC m=+1.191676321 container create 20ba411300d77fc005ed895ddeeef7f002c6ec8f65727ba9d8e9213579ade944 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 04:40:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:40:36.446+0000 7fe9bbe03140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Feb  2 04:40:36 np0005604790 ceph-mgr[74785]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Feb  2 04:40:36 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'balancer'
Feb  2 04:40:36 np0005604790 podman[92650]: 2026-02-02 09:40:36.421086831 +0000 UTC m=+1.177655294 image pull 72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e quay.io/prometheus/node-exporter:v1.7.0
Feb  2 04:40:36 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08ca6510a9460a1e052ad07c489d79b8258b63c4ea4a96f9763769e3f473d4ed/merged/etc/node-exporter supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:36 np0005604790 podman[92650]: 2026-02-02 09:40:36.500435065 +0000 UTC m=+1.257003508 container init 20ba411300d77fc005ed895ddeeef7f002c6ec8f65727ba9d8e9213579ade944 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 04:40:36 np0005604790 podman[92650]: 2026-02-02 09:40:36.50443403 +0000 UTC m=+1.261002473 container start 20ba411300d77fc005ed895ddeeef7f002c6ec8f65727ba9d8e9213579ade944 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 04:40:36 np0005604790 bash[92650]: 20ba411300d77fc005ed895ddeeef7f002c6ec8f65727ba9d8e9213579ade944
Feb  2 04:40:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[92760]: ts=2026-02-02T09:40:36.512Z caller=node_exporter.go:192 level=info msg="Starting node_exporter" version="(version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b)"
Feb  2 04:40:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[92760]: ts=2026-02-02T09:40:36.512Z caller=node_exporter.go:193 level=info msg="Build context" build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)"
Feb  2 04:40:36 np0005604790 systemd[1]: Started Ceph node-exporter.compute-0 for d241d473-9fcb-5f74-b163-f1ca4454e7f1.
Feb  2 04:40:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[92760]: ts=2026-02-02T09:40:36.512Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Feb  2 04:40:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[92760]: ts=2026-02-02T09:40:36.512Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Feb  2 04:40:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[92760]: ts=2026-02-02T09:40:36.513Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Feb  2 04:40:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[92760]: ts=2026-02-02T09:40:36.513Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Feb  2 04:40:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[92760]: ts=2026-02-02T09:40:36.513Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Feb  2 04:40:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[92760]: ts=2026-02-02T09:40:36.513Z caller=node_exporter.go:117 level=info collector=arp
Feb  2 04:40:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[92760]: ts=2026-02-02T09:40:36.513Z caller=node_exporter.go:117 level=info collector=bcache
Feb  2 04:40:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[92760]: ts=2026-02-02T09:40:36.513Z caller=node_exporter.go:117 level=info collector=bonding
Feb  2 04:40:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[92760]: ts=2026-02-02T09:40:36.513Z caller=node_exporter.go:117 level=info collector=btrfs
Feb  2 04:40:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[92760]: ts=2026-02-02T09:40:36.513Z caller=node_exporter.go:117 level=info collector=conntrack
Feb  2 04:40:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[92760]: ts=2026-02-02T09:40:36.513Z caller=node_exporter.go:117 level=info collector=cpu
Feb  2 04:40:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[92760]: ts=2026-02-02T09:40:36.513Z caller=node_exporter.go:117 level=info collector=cpufreq
Feb  2 04:40:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[92760]: ts=2026-02-02T09:40:36.513Z caller=node_exporter.go:117 level=info collector=diskstats
Feb  2 04:40:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[92760]: ts=2026-02-02T09:40:36.513Z caller=node_exporter.go:117 level=info collector=dmi
Feb  2 04:40:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[92760]: ts=2026-02-02T09:40:36.513Z caller=node_exporter.go:117 level=info collector=edac
Feb  2 04:40:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[92760]: ts=2026-02-02T09:40:36.513Z caller=node_exporter.go:117 level=info collector=entropy
Feb  2 04:40:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[92760]: ts=2026-02-02T09:40:36.513Z caller=node_exporter.go:117 level=info collector=fibrechannel
Feb  2 04:40:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[92760]: ts=2026-02-02T09:40:36.513Z caller=node_exporter.go:117 level=info collector=filefd
Feb  2 04:40:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[92760]: ts=2026-02-02T09:40:36.513Z caller=node_exporter.go:117 level=info collector=filesystem
Feb  2 04:40:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[92760]: ts=2026-02-02T09:40:36.513Z caller=node_exporter.go:117 level=info collector=hwmon
Feb  2 04:40:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[92760]: ts=2026-02-02T09:40:36.513Z caller=node_exporter.go:117 level=info collector=infiniband
Feb  2 04:40:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[92760]: ts=2026-02-02T09:40:36.513Z caller=node_exporter.go:117 level=info collector=ipvs
Feb  2 04:40:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[92760]: ts=2026-02-02T09:40:36.513Z caller=node_exporter.go:117 level=info collector=loadavg
Feb  2 04:40:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[92760]: ts=2026-02-02T09:40:36.513Z caller=node_exporter.go:117 level=info collector=mdadm
Feb  2 04:40:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[92760]: ts=2026-02-02T09:40:36.513Z caller=node_exporter.go:117 level=info collector=meminfo
Feb  2 04:40:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[92760]: ts=2026-02-02T09:40:36.513Z caller=node_exporter.go:117 level=info collector=netclass
Feb  2 04:40:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[92760]: ts=2026-02-02T09:40:36.513Z caller=node_exporter.go:117 level=info collector=netdev
Feb  2 04:40:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[92760]: ts=2026-02-02T09:40:36.513Z caller=node_exporter.go:117 level=info collector=netstat
Feb  2 04:40:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[92760]: ts=2026-02-02T09:40:36.513Z caller=node_exporter.go:117 level=info collector=nfs
Feb  2 04:40:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[92760]: ts=2026-02-02T09:40:36.513Z caller=node_exporter.go:117 level=info collector=nfsd
Feb  2 04:40:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[92760]: ts=2026-02-02T09:40:36.513Z caller=node_exporter.go:117 level=info collector=nvme
Feb  2 04:40:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[92760]: ts=2026-02-02T09:40:36.513Z caller=node_exporter.go:117 level=info collector=os
Feb  2 04:40:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[92760]: ts=2026-02-02T09:40:36.513Z caller=node_exporter.go:117 level=info collector=powersupplyclass
Feb  2 04:40:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[92760]: ts=2026-02-02T09:40:36.513Z caller=node_exporter.go:117 level=info collector=pressure
Feb  2 04:40:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[92760]: ts=2026-02-02T09:40:36.513Z caller=node_exporter.go:117 level=info collector=rapl
Feb  2 04:40:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[92760]: ts=2026-02-02T09:40:36.513Z caller=node_exporter.go:117 level=info collector=schedstat
Feb  2 04:40:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[92760]: ts=2026-02-02T09:40:36.513Z caller=node_exporter.go:117 level=info collector=selinux
Feb  2 04:40:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[92760]: ts=2026-02-02T09:40:36.513Z caller=node_exporter.go:117 level=info collector=sockstat
Feb  2 04:40:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[92760]: ts=2026-02-02T09:40:36.513Z caller=node_exporter.go:117 level=info collector=softnet
Feb  2 04:40:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[92760]: ts=2026-02-02T09:40:36.513Z caller=node_exporter.go:117 level=info collector=stat
Feb  2 04:40:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[92760]: ts=2026-02-02T09:40:36.513Z caller=node_exporter.go:117 level=info collector=tapestats
Feb  2 04:40:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[92760]: ts=2026-02-02T09:40:36.513Z caller=node_exporter.go:117 level=info collector=textfile
Feb  2 04:40:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[92760]: ts=2026-02-02T09:40:36.513Z caller=node_exporter.go:117 level=info collector=thermal_zone
Feb  2 04:40:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[92760]: ts=2026-02-02T09:40:36.513Z caller=node_exporter.go:117 level=info collector=time
Feb  2 04:40:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[92760]: ts=2026-02-02T09:40:36.513Z caller=node_exporter.go:117 level=info collector=udp_queues
Feb  2 04:40:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[92760]: ts=2026-02-02T09:40:36.513Z caller=node_exporter.go:117 level=info collector=uname
Feb  2 04:40:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[92760]: ts=2026-02-02T09:40:36.513Z caller=node_exporter.go:117 level=info collector=vmstat
Feb  2 04:40:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[92760]: ts=2026-02-02T09:40:36.513Z caller=node_exporter.go:117 level=info collector=xfs
Feb  2 04:40:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[92760]: ts=2026-02-02T09:40:36.513Z caller=node_exporter.go:117 level=info collector=zfs
Feb  2 04:40:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[92760]: ts=2026-02-02T09:40:36.514Z caller=tls_config.go:274 level=info msg="Listening on" address=[::]:9100
Feb  2 04:40:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[92760]: ts=2026-02-02T09:40:36.514Z caller=tls_config.go:277 level=info msg="TLS is disabled." http2=false address=[::]:9100
Feb  2 04:40:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:40:36.528+0000 7fe9bbe03140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Feb  2 04:40:36 np0005604790 ceph-mgr[74785]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Feb  2 04:40:36 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'cephadm'
Feb  2 04:40:36 np0005604790 systemd[1]: session-34.scope: Deactivated successfully.
Feb  2 04:40:36 np0005604790 systemd[1]: session-34.scope: Consumed 4.894s CPU time.
Feb  2 04:40:36 np0005604790 systemd-logind[793]: Removed session 34.
Feb  2 04:40:36 np0005604790 python3[92794]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d241d473-9fcb-5f74-b163-f1ca4454e7f1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module enable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:40:36 np0005604790 podman[92795]: 2026-02-02 09:40:36.825996995 +0000 UTC m=+0.063193603 container create 084ea717dafbc85454d02705abbe8580d6bc02f58046ab4d6241c2a7327b134f (image=quay.io/ceph/ceph:v19, name=musing_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Feb  2 04:40:36 np0005604790 systemd[1]: Started libpod-conmon-084ea717dafbc85454d02705abbe8580d6bc02f58046ab4d6241c2a7327b134f.scope.
Feb  2 04:40:36 np0005604790 podman[92795]: 2026-02-02 09:40:36.797639484 +0000 UTC m=+0.034836172 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:40:36 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:40:36 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/068a2692ed9160608d3c813d95f2f256050de243ccc32ae8e14600b47c25bd3d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:36 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/068a2692ed9160608d3c813d95f2f256050de243ccc32ae8e14600b47c25bd3d/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:36 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/068a2692ed9160608d3c813d95f2f256050de243ccc32ae8e14600b47c25bd3d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:36 np0005604790 podman[92795]: 2026-02-02 09:40:36.90803763 +0000 UTC m=+0.145234248 container init 084ea717dafbc85454d02705abbe8580d6bc02f58046ab4d6241c2a7327b134f (image=quay.io/ceph/ceph:v19, name=musing_ellis, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 04:40:36 np0005604790 podman[92795]: 2026-02-02 09:40:36.914479088 +0000 UTC m=+0.151675676 container start 084ea717dafbc85454d02705abbe8580d6bc02f58046ab4d6241c2a7327b134f (image=quay.io/ceph/ceph:v19, name=musing_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Feb  2 04:40:36 np0005604790 podman[92795]: 2026-02-02 09:40:36.918163754 +0000 UTC m=+0.155360362 container attach 084ea717dafbc85454d02705abbe8580d6bc02f58046ab4d6241c2a7327b134f (image=quay.io/ceph/ceph:v19, name=musing_ellis, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:40:36 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Feb  2 04:40:36 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Feb  2 04:40:37 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'crash'
Feb  2 04:40:37 np0005604790 ceph-mon[74489]: from='client.? 192.168.122.100:0/3702593450' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Feb  2 04:40:37 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:40:37.268+0000 7fe9bbe03140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Feb  2 04:40:37 np0005604790 ceph-mgr[74785]: mgr[py] Module crash has missing NOTIFY_TYPES member
Feb  2 04:40:37 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'dashboard'
Feb  2 04:40:37 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module enable", "module": "dashboard"} v 0)
Feb  2 04:40:37 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2174886532' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Feb  2 04:40:37 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'devicehealth'
Feb  2 04:40:37 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:40:37.814+0000 7fe9bbe03140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Feb  2 04:40:37 np0005604790 ceph-mgr[74785]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Feb  2 04:40:37 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'diskprediction_local'
Feb  2 04:40:37 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Feb  2 04:40:37 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Feb  2 04:40:37 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Feb  2 04:40:37 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]:  from numpy import show_config as show_numpy_config
Feb  2 04:40:37 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Feb  2 04:40:37 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:40:37.978+0000 7fe9bbe03140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Feb  2 04:40:37 np0005604790 ceph-mgr[74785]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Feb  2 04:40:37 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'influx'
Feb  2 04:40:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:40:38.079+0000 7fe9bbe03140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Feb  2 04:40:38 np0005604790 ceph-mgr[74785]: mgr[py] Module influx has missing NOTIFY_TYPES member
Feb  2 04:40:38 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'insights'
Feb  2 04:40:38 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:40:38 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:40:38 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:40:38.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:40:38 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'iostat'
Feb  2 04:40:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:40:38.211+0000 7fe9bbe03140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Feb  2 04:40:38 np0005604790 ceph-mgr[74785]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Feb  2 04:40:38 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'k8sevents'
Feb  2 04:40:38 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:40:38 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:40:38 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:40:38.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:40:38 np0005604790 ceph-mon[74489]: from='client.? 192.168.122.100:0/2174886532' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Feb  2 04:40:38 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2174886532' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Feb  2 04:40:38 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : mgrmap e18: compute-0.djvyfo(active, since 9s), standbys: compute-1.teascl, compute-2.gzlyac
Feb  2 04:40:38 np0005604790 systemd[1]: libpod-084ea717dafbc85454d02705abbe8580d6bc02f58046ab4d6241c2a7327b134f.scope: Deactivated successfully.
Feb  2 04:40:38 np0005604790 podman[92795]: 2026-02-02 09:40:38.295042994 +0000 UTC m=+1.532239592 container died 084ea717dafbc85454d02705abbe8580d6bc02f58046ab4d6241c2a7327b134f (image=quay.io/ceph/ceph:v19, name=musing_ellis, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 04:40:38 np0005604790 systemd[1]: var-lib-containers-storage-overlay-068a2692ed9160608d3c813d95f2f256050de243ccc32ae8e14600b47c25bd3d-merged.mount: Deactivated successfully.
Feb  2 04:40:38 np0005604790 podman[92795]: 2026-02-02 09:40:38.335777439 +0000 UTC m=+1.572974047 container remove 084ea717dafbc85454d02705abbe8580d6bc02f58046ab4d6241c2a7327b134f (image=quay.io/ceph/ceph:v19, name=musing_ellis, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True)
Feb  2 04:40:38 np0005604790 systemd[1]: libpod-conmon-084ea717dafbc85454d02705abbe8580d6bc02f58046ab4d6241c2a7327b134f.scope: Deactivated successfully.
Feb  2 04:40:38 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'localpool'
Feb  2 04:40:38 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'mds_autoscaler'
Feb  2 04:40:38 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'mirroring'
Feb  2 04:40:38 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'nfs'
Feb  2 04:40:38 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Feb  2 04:40:38 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Feb  2 04:40:39 np0005604790 python3[92932]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  2 04:40:39 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:40:39.132+0000 7fe9bbe03140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Feb  2 04:40:39 np0005604790 ceph-mgr[74785]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Feb  2 04:40:39 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'orchestrator'
Feb  2 04:40:39 np0005604790 ceph-mon[74489]: from='client.? 192.168.122.100:0/2174886532' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Feb  2 04:40:39 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:40:39.323+0000 7fe9bbe03140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Feb  2 04:40:39 np0005604790 ceph-mgr[74785]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Feb  2 04:40:39 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'osd_perf_query'
Feb  2 04:40:39 np0005604790 python3[93003]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1770025238.7786765-37414-156868179420251/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=b1f36629bdb347469f4890c95dfdef5abc68c3ae backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:40:39 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:40:39.392+0000 7fe9bbe03140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Feb  2 04:40:39 np0005604790 ceph-mgr[74785]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Feb  2 04:40:39 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'osd_support'
Feb  2 04:40:39 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:40:39.450+0000 7fe9bbe03140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Feb  2 04:40:39 np0005604790 ceph-mgr[74785]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Feb  2 04:40:39 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'pg_autoscaler'
Feb  2 04:40:39 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:40:39.519+0000 7fe9bbe03140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Feb  2 04:40:39 np0005604790 ceph-mgr[74785]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Feb  2 04:40:39 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'progress'
Feb  2 04:40:39 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:40:39.581+0000 7fe9bbe03140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Feb  2 04:40:39 np0005604790 ceph-mgr[74785]: mgr[py] Module progress has missing NOTIFY_TYPES member
Feb  2 04:40:39 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'prometheus'
Feb  2 04:40:39 np0005604790 python3[93053]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d241d473-9fcb-5f74-b163-f1ca4454e7f1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 compute-1 compute-2 '#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:40:39 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:40:39.918+0000 7fe9bbe03140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Feb  2 04:40:39 np0005604790 ceph-mgr[74785]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Feb  2 04:40:39 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'rbd_support'
Feb  2 04:40:39 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 7.0 scrub starts
Feb  2 04:40:39 np0005604790 podman[93054]: 2026-02-02 09:40:39.971892476 +0000 UTC m=+0.059688751 container create 402036c38f3a53fae7fede1dfb1691cae8c2a161059643c5d57abc925a094a66 (image=quay.io/ceph/ceph:v19, name=inspiring_goodall, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Feb  2 04:40:39 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 7.0 scrub ok
Feb  2 04:40:40 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:40:40.004+0000 7fe9bbe03140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Feb  2 04:40:40 np0005604790 ceph-mgr[74785]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Feb  2 04:40:40 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'restful'
Feb  2 04:40:40 np0005604790 systemd[1]: Started libpod-conmon-402036c38f3a53fae7fede1dfb1691cae8c2a161059643c5d57abc925a094a66.scope.
Feb  2 04:40:40 np0005604790 podman[93054]: 2026-02-02 09:40:39.938225206 +0000 UTC m=+0.026021541 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:40:40 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:40:40 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30f85f1af659ab9c8219ca917ce7b83cdcbbbb7e3c5cf5db3ad2c513c9df1437/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:40 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30f85f1af659ab9c8219ca917ce7b83cdcbbbb7e3c5cf5db3ad2c513c9df1437/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:40 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30f85f1af659ab9c8219ca917ce7b83cdcbbbb7e3c5cf5db3ad2c513c9df1437/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:40 np0005604790 podman[93054]: 2026-02-02 09:40:40.073871932 +0000 UTC m=+0.161668257 container init 402036c38f3a53fae7fede1dfb1691cae8c2a161059643c5d57abc925a094a66 (image=quay.io/ceph/ceph:v19, name=inspiring_goodall, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Feb  2 04:40:40 np0005604790 podman[93054]: 2026-02-02 09:40:40.078825532 +0000 UTC m=+0.166621817 container start 402036c38f3a53fae7fede1dfb1691cae8c2a161059643c5d57abc925a094a66 (image=quay.io/ceph/ceph:v19, name=inspiring_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb  2 04:40:40 np0005604790 podman[93054]: 2026-02-02 09:40:40.086375749 +0000 UTC m=+0.174172074 container attach 402036c38f3a53fae7fede1dfb1691cae8c2a161059643c5d57abc925a094a66 (image=quay.io/ceph/ceph:v19, name=inspiring_goodall, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:40:40 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:40:40 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:40:40 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:40:40.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:40:40 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'rgw'
Feb  2 04:40:40 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:40:40 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:40:40 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:40:40.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:40:40 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:40:40.431+0000 7fe9bbe03140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Feb  2 04:40:40 np0005604790 ceph-mgr[74785]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Feb  2 04:40:40 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'rook'
Feb  2 04:40:40 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 04:40:40 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:40:40.959+0000 7fe9bbe03140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Feb  2 04:40:40 np0005604790 ceph-mgr[74785]: mgr[py] Module rook has missing NOTIFY_TYPES member
Feb  2 04:40:40 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'selftest'
Feb  2 04:40:40 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Feb  2 04:40:41 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Feb  2 04:40:41 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:40:41.029+0000 7fe9bbe03140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Feb  2 04:40:41 np0005604790 ceph-mgr[74785]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Feb  2 04:40:41 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'snap_schedule'
Feb  2 04:40:41 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:40:41.106+0000 7fe9bbe03140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Feb  2 04:40:41 np0005604790 ceph-mgr[74785]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Feb  2 04:40:41 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'stats'
Feb  2 04:40:41 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'status'
Feb  2 04:40:41 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:40:41.245+0000 7fe9bbe03140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Feb  2 04:40:41 np0005604790 ceph-mgr[74785]: mgr[py] Module status has missing NOTIFY_TYPES member
Feb  2 04:40:41 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'telegraf'
Feb  2 04:40:41 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:40:41.310+0000 7fe9bbe03140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Feb  2 04:40:41 np0005604790 ceph-mgr[74785]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Feb  2 04:40:41 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'telemetry'
Feb  2 04:40:41 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:40:41.449+0000 7fe9bbe03140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Feb  2 04:40:41 np0005604790 ceph-mgr[74785]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Feb  2 04:40:41 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'test_orchestrator'
Feb  2 04:40:41 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:40:41.660+0000 7fe9bbe03140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Feb  2 04:40:41 np0005604790 ceph-mgr[74785]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Feb  2 04:40:41 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'volumes'
Feb  2 04:40:41 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.gzlyac restarted
Feb  2 04:40:41 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.gzlyac started
Feb  2 04:40:41 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:40:41.914+0000 7fe9bbe03140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Feb  2 04:40:41 np0005604790 ceph-mgr[74785]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Feb  2 04:40:41 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'zabbix'
Feb  2 04:40:41 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.teascl restarted
Feb  2 04:40:41 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.teascl started
Feb  2 04:40:41 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Feb  2 04:40:41 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:40:41.979+0000 7fe9bbe03140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Feb  2 04:40:41 np0005604790 ceph-mgr[74785]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Feb  2 04:40:41 np0005604790 ceph-mon[74489]: log_channel(cluster) log [INF] : Active manager daemon compute-0.djvyfo restarted
Feb  2 04:40:41 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Feb  2 04:40:41 np0005604790 ceph-mon[74489]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.djvyfo
Feb  2 04:40:41 np0005604790 ceph-mgr[74785]: ms_deliver_dispatch: unhandled message 0x55cf734d3860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Feb  2 04:40:41 np0005604790 ceph-mgr[74785]: mgr handle_mgr_map respawning because set of enabled modules changed!
Feb  2 04:40:41 np0005604790 ceph-mgr[74785]: mgr respawn  e: '/usr/bin/ceph-mgr'
Feb  2 04:40:41 np0005604790 ceph-mgr[74785]: mgr respawn  0: '/usr/bin/ceph-mgr'
Feb  2 04:40:41 np0005604790 ceph-mgr[74785]: mgr respawn  1: '-n'
Feb  2 04:40:41 np0005604790 ceph-mgr[74785]: mgr respawn  2: 'mgr.compute-0.djvyfo'
Feb  2 04:40:41 np0005604790 ceph-mgr[74785]: mgr respawn  3: '-f'
Feb  2 04:40:41 np0005604790 ceph-mgr[74785]: mgr respawn  4: '--setuser'
Feb  2 04:40:41 np0005604790 ceph-mgr[74785]: mgr respawn  5: 'ceph'
Feb  2 04:40:41 np0005604790 ceph-mgr[74785]: mgr respawn  6: '--setgroup'
Feb  2 04:40:41 np0005604790 ceph-mgr[74785]: mgr respawn  7: 'ceph'
Feb  2 04:40:41 np0005604790 ceph-mgr[74785]: mgr respawn  8: '--default-log-to-file=false'
Feb  2 04:40:41 np0005604790 ceph-mgr[74785]: mgr respawn  9: '--default-log-to-journald=true'
Feb  2 04:40:41 np0005604790 ceph-mgr[74785]: mgr respawn  10: '--default-log-to-stderr=false'
Feb  2 04:40:41 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Feb  2 04:40:41 np0005604790 ceph-mgr[74785]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Feb  2 04:40:41 np0005604790 ceph-mgr[74785]: mgr respawn  exe_path /proc/self/exe
Feb  2 04:40:42 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Feb  2 04:40:42 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Feb  2 04:40:42 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : mgrmap e19: compute-0.djvyfo(active, starting, since 0.0330787s), standbys: compute-1.teascl, compute-2.gzlyac
Feb  2 04:40:42 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ignoring --setuser ceph since I am not root
Feb  2 04:40:42 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ignoring --setgroup ceph since I am not root
Feb  2 04:40:42 np0005604790 ceph-mgr[74785]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Feb  2 04:40:42 np0005604790 ceph-mgr[74785]: pidfile_write: ignore empty --pid-file
Feb  2 04:40:42 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'alerts'
Feb  2 04:40:42 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:40:42 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:40:42 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:40:42.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:40:42 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:40:42.173+0000 7f1f376af140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Feb  2 04:40:42 np0005604790 ceph-mgr[74785]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Feb  2 04:40:42 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'balancer'
Feb  2 04:40:42 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:40:42 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:40:42 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:40:42.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:40:42 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:40:42.283+0000 7f1f376af140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Feb  2 04:40:42 np0005604790 ceph-mgr[74785]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Feb  2 04:40:42 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'cephadm'
Feb  2 04:40:42 np0005604790 ceph-mon[74489]: Active manager daemon compute-0.djvyfo restarted
Feb  2 04:40:42 np0005604790 ceph-mon[74489]: Activating manager daemon compute-0.djvyfo
Feb  2 04:40:42 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'crash'
Feb  2 04:40:42 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:40:42.977+0000 7f1f376af140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Feb  2 04:40:42 np0005604790 ceph-mgr[74785]: mgr[py] Module crash has missing NOTIFY_TYPES member
Feb  2 04:40:42 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'dashboard'
Feb  2 04:40:43 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 7.1 deep-scrub starts
Feb  2 04:40:43 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 7.1 deep-scrub ok
Feb  2 04:40:43 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'devicehealth'
Feb  2 04:40:43 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:40:43.530+0000 7f1f376af140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Feb  2 04:40:43 np0005604790 ceph-mgr[74785]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Feb  2 04:40:43 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'diskprediction_local'
Feb  2 04:40:43 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Feb  2 04:40:43 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Feb  2 04:40:43 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]:  from numpy import show_config as show_numpy_config
Feb  2 04:40:43 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:40:43.668+0000 7f1f376af140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Feb  2 04:40:43 np0005604790 ceph-mgr[74785]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Feb  2 04:40:43 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'influx'
Feb  2 04:40:43 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:40:43.730+0000 7f1f376af140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Feb  2 04:40:43 np0005604790 ceph-mgr[74785]: mgr[py] Module influx has missing NOTIFY_TYPES member
Feb  2 04:40:43 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'insights'
Feb  2 04:40:43 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'iostat'
Feb  2 04:40:43 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:40:43.850+0000 7f1f376af140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Feb  2 04:40:43 np0005604790 ceph-mgr[74785]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Feb  2 04:40:43 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'k8sevents'
Feb  2 04:40:44 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 7.d scrub starts
Feb  2 04:40:44 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 7.d scrub ok
Feb  2 04:40:44 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:40:44 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:40:44 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:40:44.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 04:40:44 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'localpool'
Feb  2 04:40:44 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'mds_autoscaler'
Feb  2 04:40:44 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:40:44 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:40:44 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:40:44.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:40:44 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'mirroring'
Feb  2 04:40:44 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'nfs'
Feb  2 04:40:44 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:40:44.718+0000 7f1f376af140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Feb  2 04:40:44 np0005604790 ceph-mgr[74785]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Feb  2 04:40:44 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'orchestrator'
Feb  2 04:40:44 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:40:44.936+0000 7f1f376af140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Feb  2 04:40:44 np0005604790 ceph-mgr[74785]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Feb  2 04:40:44 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'osd_perf_query'
Feb  2 04:40:44 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 7.c scrub starts
Feb  2 04:40:45 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 7.c scrub ok
Feb  2 04:40:45 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:40:45.010+0000 7f1f376af140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Feb  2 04:40:45 np0005604790 ceph-mgr[74785]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Feb  2 04:40:45 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'osd_support'
Feb  2 04:40:45 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:40:45.075+0000 7f1f376af140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Feb  2 04:40:45 np0005604790 ceph-mgr[74785]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Feb  2 04:40:45 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'pg_autoscaler'
Feb  2 04:40:45 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:40:45.142+0000 7f1f376af140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Feb  2 04:40:45 np0005604790 ceph-mgr[74785]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Feb  2 04:40:45 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'progress'
Feb  2 04:40:45 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:40:45.205+0000 7f1f376af140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Feb  2 04:40:45 np0005604790 ceph-mgr[74785]: mgr[py] Module progress has missing NOTIFY_TYPES member
Feb  2 04:40:45 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'prometheus'
Feb  2 04:40:45 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:40:45.517+0000 7f1f376af140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Feb  2 04:40:45 np0005604790 ceph-mgr[74785]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Feb  2 04:40:45 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'rbd_support'
Feb  2 04:40:45 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:40:45.612+0000 7f1f376af140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Feb  2 04:40:45 np0005604790 ceph-mgr[74785]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Feb  2 04:40:45 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'restful'
Feb  2 04:40:45 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'rgw'
Feb  2 04:40:45 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e48 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 04:40:46 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Feb  2 04:40:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:40:46.011+0000 7f1f376af140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Feb  2 04:40:46 np0005604790 ceph-mgr[74785]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Feb  2 04:40:46 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'rook'
Feb  2 04:40:46 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Feb  2 04:40:46 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:40:46 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:40:46 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:40:46.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 04:40:46 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:40:46 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:40:46 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:40:46.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:40:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:40:46.526+0000 7f1f376af140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Feb  2 04:40:46 np0005604790 ceph-mgr[74785]: mgr[py] Module rook has missing NOTIFY_TYPES member
Feb  2 04:40:46 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'selftest'
Feb  2 04:40:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:40:46.593+0000 7f1f376af140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Feb  2 04:40:46 np0005604790 ceph-mgr[74785]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Feb  2 04:40:46 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'snap_schedule'
Feb  2 04:40:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:40:46.666+0000 7f1f376af140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Feb  2 04:40:46 np0005604790 ceph-mgr[74785]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Feb  2 04:40:46 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'stats'
Feb  2 04:40:46 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'status'
Feb  2 04:40:46 np0005604790 systemd[1]: Stopping User Manager for UID 42477...
Feb  2 04:40:46 np0005604790 systemd[75816]: Activating special unit Exit the Session...
Feb  2 04:40:46 np0005604790 systemd[75816]: Stopped target Main User Target.
Feb  2 04:40:46 np0005604790 systemd[75816]: Stopped target Basic System.
Feb  2 04:40:46 np0005604790 systemd[75816]: Stopped target Paths.
Feb  2 04:40:46 np0005604790 systemd[75816]: Stopped target Sockets.
Feb  2 04:40:46 np0005604790 systemd[75816]: Stopped target Timers.
Feb  2 04:40:46 np0005604790 systemd[75816]: Stopped Mark boot as successful after the user session has run 2 minutes.
Feb  2 04:40:46 np0005604790 systemd[75816]: Stopped Daily Cleanup of User's Temporary Directories.
Feb  2 04:40:46 np0005604790 systemd[75816]: Closed D-Bus User Message Bus Socket.
Feb  2 04:40:46 np0005604790 systemd[75816]: Stopped Create User's Volatile Files and Directories.
Feb  2 04:40:46 np0005604790 systemd[75816]: Removed slice User Application Slice.
Feb  2 04:40:46 np0005604790 systemd[75816]: Reached target Shutdown.
Feb  2 04:40:46 np0005604790 systemd[75816]: Finished Exit the Session.
Feb  2 04:40:46 np0005604790 systemd[75816]: Reached target Exit the Session.
Feb  2 04:40:46 np0005604790 systemd[1]: user@42477.service: Deactivated successfully.
Feb  2 04:40:46 np0005604790 systemd[1]: Stopped User Manager for UID 42477.
Feb  2 04:40:46 np0005604790 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Feb  2 04:40:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:40:46.819+0000 7f1f376af140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Feb  2 04:40:46 np0005604790 ceph-mgr[74785]: mgr[py] Module status has missing NOTIFY_TYPES member
Feb  2 04:40:46 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'telegraf'
Feb  2 04:40:46 np0005604790 systemd[1]: run-user-42477.mount: Deactivated successfully.
Feb  2 04:40:46 np0005604790 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Feb  2 04:40:46 np0005604790 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Feb  2 04:40:46 np0005604790 systemd[1]: Removed slice User Slice of UID 42477.
Feb  2 04:40:46 np0005604790 systemd[1]: user-42477.slice: Consumed 35.316s CPU time.
Feb  2 04:40:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:40:46.885+0000 7f1f376af140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Feb  2 04:40:46 np0005604790 ceph-mgr[74785]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Feb  2 04:40:46 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'telemetry'
Feb  2 04:40:46 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Feb  2 04:40:46 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Feb  2 04:40:47 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:40:47.031+0000 7f1f376af140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Feb  2 04:40:47 np0005604790 ceph-mgr[74785]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Feb  2 04:40:47 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'test_orchestrator'
Feb  2 04:40:47 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:40:47.244+0000 7f1f376af140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Feb  2 04:40:47 np0005604790 ceph-mgr[74785]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Feb  2 04:40:47 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'volumes'
Feb  2 04:40:47 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.gzlyac restarted
Feb  2 04:40:47 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.gzlyac started
Feb  2 04:40:47 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:40:47.508+0000 7f1f376af140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Feb  2 04:40:47 np0005604790 ceph-mgr[74785]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Feb  2 04:40:47 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'zabbix'
Feb  2 04:40:47 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:40:47.571+0000 7f1f376af140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Feb  2 04:40:47 np0005604790 ceph-mgr[74785]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
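The repeated "Module <name> has missing NOTIFY_TYPES member" lines come from ceph-mgr loading its bundled Python modules: recent Ceph releases expect each module to declare which cluster notifications it consumes and warn when the attribute is absent. A minimal sketch of such a declaration, assuming the mgr_module API that ceph-mgr injects at load time (it only runs inside a mgr daemon, not standalone):

    # Sketch of a ceph-mgr Python module declaring NOTIFY_TYPES; the
    # attribute's absence is what triggers the warnings logged above.
    from mgr_module import MgrModule, NotifyType

    class Module(MgrModule):
        # Subscribe only to the notifications this module actually handles.
        NOTIFY_TYPES = [NotifyType.mon_map, NotifyType.osd_map]

        def notify(self, notify_type: NotifyType, notify_id: str) -> None:
            # ceph-mgr calls this once per subscribed notification.
            self.log.info("notification %s (%s)", notify_type, notify_id)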
Feb  2 04:40:47 np0005604790 ceph-mon[74489]: log_channel(cluster) log [INF] : Active manager daemon compute-0.djvyfo restarted
Feb  2 04:40:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Feb  2 04:40:47 np0005604790 ceph-mgr[74785]: ms_deliver_dispatch: unhandled message 0x55b0e6cdf860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Feb  2 04:40:47 np0005604790 ceph-mon[74489]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.djvyfo
Feb  2 04:40:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Feb  2 04:40:47 np0005604790 ceph-mgr[74785]: mgr handle_mgr_map Activating!
Feb  2 04:40:47 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Feb  2 04:40:47 np0005604790 ceph-mgr[74785]: mgr handle_mgr_map I am now activating
Feb  2 04:40:47 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : mgrmap e20: compute-0.djvyfo(active, starting, since 0.0410296s), standbys: compute-1.teascl, compute-2.gzlyac
Feb  2 04:40:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Feb  2 04:40:47 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Feb  2 04:40:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Feb  2 04:40:47 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Feb  2 04:40:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Feb  2 04:40:47 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Feb  2 04:40:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.djvyfo", "id": "compute-0.djvyfo"} v 0)
Feb  2 04:40:47 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "mgr metadata", "who": "compute-0.djvyfo", "id": "compute-0.djvyfo"}]: dispatch
Feb  2 04:40:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.teascl", "id": "compute-1.teascl"} v 0)
Feb  2 04:40:47 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "mgr metadata", "who": "compute-1.teascl", "id": "compute-1.teascl"}]: dispatch
Feb  2 04:40:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.gzlyac", "id": "compute-2.gzlyac"} v 0)
Feb  2 04:40:47 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "mgr metadata", "who": "compute-2.gzlyac", "id": "compute-2.gzlyac"}]: dispatch
Feb  2 04:40:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb  2 04:40:47 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Feb  2 04:40:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb  2 04:40:47 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Feb  2 04:40:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb  2 04:40:47 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Feb  2 04:40:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata"} v 0)
Feb  2 04:40:47 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "mds metadata"}]: dispatch
Feb  2 04:40:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).mds e1 all = 1
Feb  2 04:40:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Feb  2 04:40:47 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd metadata"}]: dispatch
Feb  2 04:40:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata"} v 0)
Feb  2 04:40:47 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "mon metadata"}]: dispatch
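The audit burst above is the freshly activated mgr pulling mon, mgr, osd, and mds metadata through JSON-encoded mon commands. The same call can be reproduced with the librados Python binding; a sketch, where the ceph.conf and admin keyring paths are assumptions taken from the podman invocations further down in this log:

    # Issue one of the metadata mon commands dispatched above via
    # python3-rados.
    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf",
                          conf={"keyring": "/etc/ceph/ceph.client.admin.keyring"})
    cluster.connect()
    try:
        cmd = json.dumps({"prefix": "osd metadata", "id": 0, "format": "json"})
        ret, outbuf, outs = cluster.mon_command(cmd, b"")
        if ret != 0:
            raise RuntimeError(f"mon_command failed: {ret} {outs}")
        print(json.loads(outbuf))  # same per-OSD metadata payload the mgr collects
    finally:
        cluster.shutdown()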
Feb  2 04:40:47 np0005604790 ceph-mgr[74785]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 04:40:47 np0005604790 ceph-mgr[74785]: mgr load Constructed class from module: balancer
Feb  2 04:40:47 np0005604790 ceph-mgr[74785]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 04:40:47 np0005604790 ceph-mon[74489]: log_channel(cluster) log [INF] : Manager daemon compute-0.djvyfo is now available
Feb  2 04:40:47 np0005604790 ceph-mgr[74785]: [balancer INFO root] Starting
Feb  2 04:40:47 np0005604790 ceph-mgr[74785]: [balancer INFO root] Optimize plan auto_2026-02-02_09:40:47
Feb  2 04:40:47 np0005604790 ceph-mgr[74785]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 04:40:47 np0005604790 ceph-mgr[74785]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
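The balancer lines record its configuration (upmap mode, 5% max misplaced) and an early abort because all PGs were still unknown right after the mgr restart. A sketch for reading that state back once the cluster settles, shelling out to the ceph CLI:

    # Read back the balancer state logged above; "ceph balancer status"
    # reports the mode and whether the module is active.
    import json
    import subprocess

    out = subprocess.run(["ceph", "balancer", "status", "--format", "json"],
                         capture_output=True, text=True, check=True).stdout
    status = json.loads(out)
    print(status["mode"], status["active"])  # expect "upmap" per the log above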
Feb  2 04:40:47 np0005604790 ceph-mgr[74785]: mgr load Constructed class from module: cephadm
Feb  2 04:40:47 np0005604790 ceph-mgr[74785]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 04:40:47 np0005604790 ceph-mgr[74785]: mgr load Constructed class from module: crash
Feb  2 04:40:47 np0005604790 ceph-mgr[74785]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 04:40:47 np0005604790 ceph-mgr[74785]: mgr load Constructed class from module: dashboard
Feb  2 04:40:47 np0005604790 ceph-mgr[74785]: [dashboard INFO access_control] Loading user roles DB version=2
Feb  2 04:40:47 np0005604790 ceph-mgr[74785]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 04:40:47 np0005604790 ceph-mgr[74785]: mgr load Constructed class from module: devicehealth
Feb  2 04:40:47 np0005604790 ceph-mgr[74785]: [devicehealth INFO root] Starting
Feb  2 04:40:47 np0005604790 ceph-mgr[74785]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 04:40:47 np0005604790 ceph-mgr[74785]: mgr load Constructed class from module: iostat
Feb  2 04:40:47 np0005604790 ceph-mgr[74785]: [dashboard INFO sso] Loading SSO DB version=1
Feb  2 04:40:47 np0005604790 ceph-mgr[74785]: [dashboard INFO root] server: ssl=no host=192.168.122.100 port=8443
Feb  2 04:40:47 np0005604790 ceph-mgr[74785]: [dashboard INFO root] Configured CherryPy, starting engine...
Feb  2 04:40:47 np0005604790 ceph-mgr[74785]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 04:40:47 np0005604790 ceph-mgr[74785]: mgr load Constructed class from module: nfs
Feb  2 04:40:47 np0005604790 ceph-mgr[74785]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 04:40:47 np0005604790 ceph-mgr[74785]: mgr load Constructed class from module: orchestrator
Feb  2 04:40:47 np0005604790 ceph-mgr[74785]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 04:40:47 np0005604790 ceph-mgr[74785]: mgr load Constructed class from module: pg_autoscaler
Feb  2 04:40:47 np0005604790 ceph-mgr[74785]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 04:40:47 np0005604790 ceph-mgr[74785]: mgr load Constructed class from module: progress
Feb  2 04:40:47 np0005604790 ceph-mgr[74785]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 04:40:47 np0005604790 ceph-mgr[74785]: [progress INFO root] Loading...
Feb  2 04:40:47 np0005604790 ceph-mgr[74785]: [progress INFO root] Loaded [<progress.module.GhostEvent object at 0x7f1ebda74ee0>, <progress.module.GhostEvent object at 0x7f1ebda74f10>, <progress.module.GhostEvent object at 0x7f1ebda74f40>, <progress.module.GhostEvent object at 0x7f1ebda74f70>, <progress.module.GhostEvent object at 0x7f1ebda74fa0>, <progress.module.GhostEvent object at 0x7f1ebda74fd0>, <progress.module.GhostEvent object at 0x7f1eb51f2040>, <progress.module.GhostEvent object at 0x7f1eb51f2070>, <progress.module.GhostEvent object at 0x7f1eb51f20a0>, <progress.module.GhostEvent object at 0x7f1eb51f20d0>, <progress.module.GhostEvent object at 0x7f1eb51f2100>, <progress.module.GhostEvent object at 0x7f1eb51f2130>] historic events
Feb  2 04:40:47 np0005604790 ceph-mgr[74785]: [progress INFO root] Loaded OSDMap, ready.
Feb  2 04:40:47 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 04:40:47 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] recovery thread starting
Feb  2 04:40:47 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] starting setup
Feb  2 04:40:47 np0005604790 ceph-mgr[74785]: mgr load Constructed class from module: rbd_support
Feb  2 04:40:47 np0005604790 ceph-mgr[74785]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 04:40:47 np0005604790 ceph-mgr[74785]: mgr load Constructed class from module: restful
Feb  2 04:40:47 np0005604790 ceph-mgr[74785]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 04:40:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.djvyfo/mirror_snapshot_schedule"} v 0)
Feb  2 04:40:47 np0005604790 ceph-mgr[74785]: mgr load Constructed class from module: status
Feb  2 04:40:47 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.djvyfo/mirror_snapshot_schedule"}]: dispatch
Feb  2 04:40:47 np0005604790 ceph-mgr[74785]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 04:40:47 np0005604790 ceph-mgr[74785]: mgr load Constructed class from module: telemetry
Feb  2 04:40:47 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 04:40:47 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 04:40:47 np0005604790 ceph-mgr[74785]: [restful INFO root] server_addr: :: server_port: 8003
Feb  2 04:40:47 np0005604790 ceph-mgr[74785]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 04:40:47 np0005604790 ceph-mgr[74785]: [restful WARNING root] server not running: no certificate configured
Feb  2 04:40:47 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 04:40:47 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 04:40:47 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 04:40:47 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Feb  2 04:40:47 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] PerfHandler: starting
Feb  2 04:40:47 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_task_task: vms, start_after=
Feb  2 04:40:47 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_task_task: volumes, start_after=
Feb  2 04:40:47 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_task_task: backups, start_after=
Feb  2 04:40:47 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_task_task: images, start_after=
Feb  2 04:40:47 np0005604790 ceph-mgr[74785]: mgr load Constructed class from module: volumes
Feb  2 04:40:47 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] TaskHandler: starting
Feb  2 04:40:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.djvyfo/trash_purge_schedule"} v 0)
Feb  2 04:40:47 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.djvyfo/trash_purge_schedule"}]: dispatch
Feb  2 04:40:47 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 04:40:47 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 04:40:47 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 04:40:47 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 04:40:47 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 04:40:47 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Feb  2 04:40:47 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] setup complete
Feb  2 04:40:47 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 6.19 deep-scrub starts
Feb  2 04:40:47 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 6.19 deep-scrub ok
Feb  2 04:40:47 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.teascl restarted
Feb  2 04:40:48 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.teascl started
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFS -> /api/cephfs
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsUi -> /ui-api/cephfs
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolume -> /api/cephfs/subvolume
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeGroups -> /api/cephfs/subvolume/group
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeSnapshots -> /api/cephfs/subvolume/snapshot
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsSnapshotClone -> /api/cephfs/subvolume/snapshot/clone
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSnapshotSchedule -> /api/cephfs/snapshot/schedule
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiUi -> /ui-api/iscsi
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Iscsi -> /api/iscsi
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiTarget -> /api/iscsi/target
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaCluster -> /api/nfs-ganesha/cluster
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaExports -> /api/nfs-ganesha/export
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaUi -> /ui-api/nfs-ganesha
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Orchestrator -> /ui-api/orchestrator
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Service -> /api/service
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroring -> /api/block/mirroring
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringSummary -> /api/block/mirroring/summary
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolMode -> /api/block/mirroring/pool
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolBootstrap -> /api/block/mirroring/pool/{pool_name}/bootstrap
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolPeer -> /api/block/mirroring/pool/{pool_name}/peer
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringStatus -> /ui-api/block/mirroring
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Pool -> /api/pool
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PoolUi -> /ui-api/pool
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RBDPool -> /api/pool
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rbd -> /api/block/image
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdStatus -> /ui-api/block/rbd
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdSnapshot -> /api/block/image/{image_spec}/snap
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdTrash -> /api/block/image/trash
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdNamespace -> /api/block/pool/{pool_name}/namespace
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rgw -> /ui-api/rgw
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteStatus -> /ui-api/rgw/multisite
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteController -> /api/rgw/multisite
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwDaemon -> /api/rgw/daemon
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwSite -> /api/rgw/site
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucket -> /api/rgw/bucket
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucketUi -> /ui-api/rgw/bucket
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwUser -> /api/rgw/user
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClass -> /api/rgw/roles
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClassMetadata -> /ui-api/rgw/roles
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwRealm -> /api/rgw/realm
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZonegroup -> /api/rgw/zonegroup
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZone -> /api/rgw/zone
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Auth -> /api/auth
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClass -> /api/cluster/user
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClassMetadata -> /ui-api/cluster/user
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Cluster -> /api/cluster
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterUpgrade -> /api/cluster/upgrade
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterConfiguration -> /api/cluster_conf
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRule -> /api/crush_rule
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRuleUi -> /ui-api/crush_rule
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Daemon -> /api/daemon
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Docs -> /docs
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfile -> /api/erasure_code_profile
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfileUi -> /ui-api/erasure_code_profile
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackController -> /api/feedback
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackApiController -> /api/feedback/api_key
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackUiController -> /ui-api/feedback/api_key
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FrontendLogging -> /ui-api/logging
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Grafana -> /api/grafana
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Host -> /api/host
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HostUi -> /ui-api/host
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Health -> /api/health
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HomeController -> /
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LangsController -> /ui-api/langs
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LoginController -> /ui-api/login
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Logs -> /api/logs
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrModules -> /api/mgr/module
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Monitor -> /api/monitor
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFGateway -> /api/nvmeof/gateway
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSpdk -> /api/nvmeof/spdk
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSubsystem -> /api/nvmeof/subsystem
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFListener -> /api/nvmeof/subsystem/{nqn}/listener
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFNamespace -> /api/nvmeof/subsystem/{nqn}/namespace
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFHost -> /api/nvmeof/subsystem/{nqn}/host
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFConnection -> /api/nvmeof/subsystem/{nqn}/connection
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFTcpUI -> /ui-api/nvmeof
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Osd -> /api/osd
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdUi -> /ui-api/osd
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdFlagsController -> /api/osd/flags
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MdsPerfCounter -> /api/perf_counters/mds
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MonPerfCounter -> /api/perf_counters/mon
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdPerfCounter -> /api/perf_counters/osd
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwPerfCounter -> /api/perf_counters/rgw
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirrorPerfCounter -> /api/perf_counters/rbd-mirror
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrPerfCounter -> /api/perf_counters/mgr
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: TcmuRunnerPerfCounter -> /api/perf_counters/tcmu-runner
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PerfCounters -> /api/perf_counters
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusReceiver -> /api/prometheus_receiver
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Prometheus -> /api/prometheus
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusNotifications -> /api/prometheus/notifications
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusSettings -> /ui-api/prometheus
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Role -> /api/role
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Scope -> /ui-api/scope
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Saml2 -> /auth/saml2
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Settings -> /api/settings
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: StandardSettings -> /ui-api/standard_settings
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Summary -> /api/summary
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Task -> /api/task
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Telemetry -> /api/telemetry
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: User -> /api/user
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserPasswordPolicy -> /api/user
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserChangePassword -> /api/user/{username}
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeatureTogglesEndpoint -> /api/feature_toggles
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MessageOfTheDay -> /ui-api/motd
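The controller initialization lines map dashboard classes to the REST routes served by the CherryPy engine configured earlier (ssl=no, 192.168.122.100:8443). Most routes require a token from the Auth controller first; a sketch, with placeholder credentials that are not taken from this log:

    # Probe two of the routes registered above. Host and port come from
    # the "server: ssl=no host=192.168.122.100 port=8443" line; the
    # credentials are placeholders.
    import requests

    BASE = "http://192.168.122.100:8443"
    HDRS = {"Accept": "application/vnd.ceph.api.v1.0+json"}  # dashboard API versioning

    token = requests.post(f"{BASE}/api/auth",
                          json={"username": "admin", "password": "secret"},
                          headers=HDRS).json()["token"]
    health = requests.get(f"{BASE}/api/health/minimal",
                          headers={**HDRS, "Authorization": f"Bearer {token}"})
    print(health.json())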
Feb  2 04:40:48 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:40:48 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:40:48 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:40:48.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:40:48 np0005604790 systemd[1]: Created slice User Slice of UID 42477.
Feb  2 04:40:48 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:40:48 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:40:48 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:40:48.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
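The beast access-log lines are anonymous HEAD / probes, the pattern haproxy uses for httpchk health checks against radosgw, each answered 200 with near-zero latency. A sketch of the same probe; the port is an assumption, since the beast frontend's bind port is not recorded in these lines:

    # Reproduce the anonymous "HEAD / HTTP/1.0" probe logged above.
    # Port 8080 is an assumption; the log does not record the bind port.
    import requests

    resp = requests.head("http://192.168.122.100:8080/", timeout=5)
    print(resp.status_code)  # 200 matches http_status=200 in the log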
Feb  2 04:40:48 np0005604790 systemd[1]: Starting User Runtime Directory /run/user/42477...
Feb  2 04:40:48 np0005604790 systemd-logind[793]: New session 35 of user ceph-admin.
Feb  2 04:40:48 np0005604790 systemd[1]: Finished User Runtime Directory /run/user/42477.
Feb  2 04:40:48 np0005604790 systemd[1]: Starting User Manager for UID 42477...
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.module] Engine started.
Feb  2 04:40:48 np0005604790 ceph-mon[74489]: Active manager daemon compute-0.djvyfo restarted
Feb  2 04:40:48 np0005604790 ceph-mon[74489]: Activating manager daemon compute-0.djvyfo
Feb  2 04:40:48 np0005604790 ceph-mon[74489]: Manager daemon compute-0.djvyfo is now available
Feb  2 04:40:48 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.djvyfo/mirror_snapshot_schedule"}]: dispatch
Feb  2 04:40:48 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.djvyfo/trash_purge_schedule"}]: dispatch
Feb  2 04:40:48 np0005604790 systemd[93258]: Queued start job for default target Main User Target.
Feb  2 04:40:48 np0005604790 systemd[93258]: Created slice User Application Slice.
Feb  2 04:40:48 np0005604790 systemd[93258]: Started Mark boot as successful after the user session has run 2 minutes.
Feb  2 04:40:48 np0005604790 systemd[93258]: Started Daily Cleanup of User's Temporary Directories.
Feb  2 04:40:48 np0005604790 systemd[93258]: Reached target Paths.
Feb  2 04:40:48 np0005604790 systemd[93258]: Reached target Timers.
Feb  2 04:40:48 np0005604790 systemd[93258]: Starting D-Bus User Message Bus Socket...
Feb  2 04:40:48 np0005604790 systemd[93258]: Starting Create User's Volatile Files and Directories...
Feb  2 04:40:48 np0005604790 systemd[93258]: Listening on D-Bus User Message Bus Socket.
Feb  2 04:40:48 np0005604790 systemd[93258]: Reached target Sockets.
Feb  2 04:40:48 np0005604790 systemd[93258]: Finished Create User's Volatile Files and Directories.
Feb  2 04:40:48 np0005604790 systemd[93258]: Reached target Basic System.
Feb  2 04:40:48 np0005604790 systemd[93258]: Reached target Main User Target.
Feb  2 04:40:48 np0005604790 systemd[93258]: Startup finished in 163ms.
Feb  2 04:40:48 np0005604790 systemd[1]: Started User Manager for UID 42477.
Feb  2 04:40:48 np0005604790 systemd[1]: Started Session 35 of User ceph-admin.
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.14502 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 compute-1 compute-2 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v3: 197 pgs: 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Feb  2 04:40:48 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : mgrmap e21: compute-0.djvyfo(active, since 1.07277s), standbys: compute-1.teascl, compute-2.gzlyac
Feb  2 04:40:48 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0)
Feb  2 04:40:48 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Feb  2 04:40:48 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0)
Feb  2 04:40:48 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Feb  2 04:40:48 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0)
Feb  2 04:40:48 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Feb  2 04:40:48 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Feb  2 04:40:48 np0005604790 ceph-mon[74489]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Feb  2 04:40:48 np0005604790 ceph-mon[74489]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Feb  2 04:40:48 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mon-compute-0[74485]: 2026-02-02T09:40:48.655+0000 7fda4b3dd640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Feb  2 04:40:48 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Feb  2 04:40:48 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).mds e2 new map
Feb  2 04:40:48 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).mds e2 print_map
e2
btime 2026-02-02T09:40:48.656641+0000
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
legacy client fscid: 1

Filesystem 'cephfs' (1)
fs_name  cephfs
epoch  2
flags  12 joinable allow_snaps allow_multimds_snaps
created  2026-02-02T09:40:48.656583+0000
modified  2026-02-02T09:40:48.656583+0000
tableserver  0
root  0
session_timeout  60
session_autoclose  300
max_file_size  1099511627776
max_xattr_size  65536
required_client_features  {}
last_failure  0
last_failure_osd_epoch  0
compat  compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
max_mds  1
in
up  {}
failed
damaged
stopped
data_pools  [7]
metadata_pool  6
inline_data  disabled
balancer
bal_rank_mask  -1
standby_count_wanted  0
qdb_cluster  leader: 0 members:
Feb  2 04:40:48 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Feb  2 04:40:48 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Feb  2 04:40:48 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Feb  2 04:40:48 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Feb  2 04:40:48 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:48 np0005604790 ceph-mgr[74785]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 compute-1 compute-2 , prefix:fs volume create, target:['mon-mgr', '']) < ""
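Everything from the pool creates through the mds.cephfs spec save is fan-out from the single "fs volume create" command dispatched by client.admin above. The equivalent top-level call as a subprocess sketch, using the same volume name and placement recorded in the audit line:

    # The one CLI call whose side effects (pool creates, "fs new",
    # mds.cephfs spec save) are logged above.
    import subprocess

    subprocess.run(["ceph", "fs", "volume", "create", "cephfs",
                    "--placement=compute-0 compute-1 compute-2"],
                   check=True)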
Feb  2 04:40:48 np0005604790 systemd[1]: libpod-402036c38f3a53fae7fede1dfb1691cae8c2a161059643c5d57abc925a094a66.scope: Deactivated successfully.
Feb  2 04:40:48 np0005604790 podman[93054]: 2026-02-02 09:40:48.721674318 +0000 UTC m=+8.809470663 container died 402036c38f3a53fae7fede1dfb1691cae8c2a161059643c5d57abc925a094a66 (image=quay.io/ceph/ceph:v19, name=inspiring_goodall, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 04:40:48 np0005604790 systemd[1]: var-lib-containers-storage-overlay-30f85f1af659ab9c8219ca917ce7b83cdcbbbb7e3c5cf5db3ad2c513c9df1437-merged.mount: Deactivated successfully.
Feb  2 04:40:48 np0005604790 podman[93054]: 2026-02-02 09:40:48.771265265 +0000 UTC m=+8.859061540 container remove 402036c38f3a53fae7fede1dfb1691cae8c2a161059643c5d57abc925a094a66 (image=quay.io/ceph/ceph:v19, name=inspiring_goodall, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:40:48 np0005604790 systemd[1]: libpod-conmon-402036c38f3a53fae7fede1dfb1691cae8c2a161059643c5d57abc925a094a66.scope: Deactivated successfully.
Feb  2 04:40:49 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Feb  2 04:40:49 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Feb  2 04:40:49 np0005604790 python3[93386]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d241d473-9fcb-5f74-b163-f1ca4454e7f1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
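The ansible task wraps "ceph orch apply --in-file /home/ceph_spec.yaml" in a throwaway ceph container. The spec file's contents are not in the log; a minimal mds service spec consistent with the "Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2" lines would look like this hypothetical reconstruction:

    # Hypothetical reconstruction of /tmp/ceph_mds.yml based on the
    # placement logged above, applied the same way the ansible task does.
    import subprocess

    SPEC = """\
    service_type: mds
    service_id: cephfs
    placement:
      hosts:
        - compute-0
        - compute-1
        - compute-2
    """

    with open("/tmp/ceph_mds.yml", "w") as f:
        f.write(SPEC)

    subprocess.run(["ceph", "orch", "apply",
                    "--in-file", "/tmp/ceph_mds.yml"], check=True)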
Feb  2 04:40:49 np0005604790 podman[93406]: 2026-02-02 09:40:49.15784438 +0000 UTC m=+0.045757187 container create 58f843fcbc287e9c6ad8f3beb71ac36b98640c9abece268f5a2042b34c4b4216 (image=quay.io/ceph/ceph:v19, name=sad_dijkstra, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Feb  2 04:40:49 np0005604790 systemd[1]: Started libpod-conmon-58f843fcbc287e9c6ad8f3beb71ac36b98640c9abece268f5a2042b34c4b4216.scope.
Feb  2 04:40:49 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:40:49 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acb6a16d0cd2c2d8050e4d2f3f29f115253d1b027ae0bd8c873013716cb3ac2b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:49 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acb6a16d0cd2c2d8050e4d2f3f29f115253d1b027ae0bd8c873013716cb3ac2b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:49 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acb6a16d0cd2c2d8050e4d2f3f29f115253d1b027ae0bd8c873013716cb3ac2b/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:49 np0005604790 podman[93406]: 2026-02-02 09:40:49.138367171 +0000 UTC m=+0.026279998 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:40:49 np0005604790 podman[93406]: 2026-02-02 09:40:49.237569884 +0000 UTC m=+0.125482731 container init 58f843fcbc287e9c6ad8f3beb71ac36b98640c9abece268f5a2042b34c4b4216 (image=quay.io/ceph/ceph:v19, name=sad_dijkstra, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid)
Feb  2 04:40:49 np0005604790 podman[93406]: 2026-02-02 09:40:49.244835514 +0000 UTC m=+0.132748321 container start 58f843fcbc287e9c6ad8f3beb71ac36b98640c9abece268f5a2042b34c4b4216 (image=quay.io/ceph/ceph:v19, name=sad_dijkstra, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 04:40:49 np0005604790 podman[93406]: 2026-02-02 09:40:49.250011929 +0000 UTC m=+0.137924776 container attach 58f843fcbc287e9c6ad8f3beb71ac36b98640c9abece268f5a2042b34c4b4216 (image=quay.io/ceph/ceph:v19, name=sad_dijkstra, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 04:40:49 np0005604790 podman[93452]: 2026-02-02 09:40:49.25158942 +0000 UTC m=+0.053445298 container exec 79ef7165b184aa21ab9e464efe33891b6304e1ba848414549f299d1b301d6783 (image=quay.io/ceph/ceph:v19, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mon-compute-0, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 04:40:49 np0005604790 podman[93452]: 2026-02-02 09:40:49.363868275 +0000 UTC m=+0.165724183 container exec_died 79ef7165b184aa21ab9e464efe33891b6304e1ba848414549f299d1b301d6783 (image=quay.io/ceph/ceph:v19, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mon-compute-0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:40:49 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Feb  2 04:40:49 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Feb  2 04:40:49 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Feb  2 04:40:49 np0005604790 ceph-mon[74489]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Feb  2 04:40:49 np0005604790 ceph-mon[74489]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Feb  2 04:40:49 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Feb  2 04:40:49 np0005604790 ceph-mon[74489]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Feb  2 04:40:49 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:49 np0005604790 ceph-mgr[74785]: [cephadm INFO cherrypy.error] [02/Feb/2026:09:40:49] ENGINE Bus STARTING
Feb  2 04:40:49 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : [02/Feb/2026:09:40:49] ENGINE Bus STARTING
Feb  2 04:40:49 np0005604790 ceph-mgr[74785]: [cephadm INFO cherrypy.error] [02/Feb/2026:09:40:49] ENGINE Serving on https://192.168.122.100:7150
Feb  2 04:40:49 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : [02/Feb/2026:09:40:49] ENGINE Serving on https://192.168.122.100:7150
Feb  2 04:40:49 np0005604790 ceph-mgr[74785]: [cephadm INFO cherrypy.error] [02/Feb/2026:09:40:49] ENGINE Client ('192.168.122.100', 55968) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Feb  2 04:40:49 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : [02/Feb/2026:09:40:49] ENGINE Client ('192.168.122.100', 55968) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Feb  2 04:40:49 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.14538 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 04:40:49 np0005604790 ceph-mgr[74785]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Feb  2 04:40:49 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Feb  2 04:40:49 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Feb  2 04:40:49 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v5: 197 pgs: 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Feb  2 04:40:49 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:49 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Feb  2 04:40:49 np0005604790 sad_dijkstra[93450]: Scheduled mds.cephfs update...
Feb  2 04:40:49 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:49 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Feb  2 04:40:49 np0005604790 systemd[1]: libpod-58f843fcbc287e9c6ad8f3beb71ac36b98640c9abece268f5a2042b34c4b4216.scope: Deactivated successfully.
Feb  2 04:40:49 np0005604790 podman[93406]: 2026-02-02 09:40:49.666793223 +0000 UTC m=+0.554706040 container died 58f843fcbc287e9c6ad8f3beb71ac36b98640c9abece268f5a2042b34c4b4216 (image=quay.io/ceph/ceph:v19, name=sad_dijkstra, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid)
Feb  2 04:40:49 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:49 np0005604790 systemd[1]: var-lib-containers-storage-overlay-acb6a16d0cd2c2d8050e4d2f3f29f115253d1b027ae0bd8c873013716cb3ac2b-merged.mount: Deactivated successfully.
Feb  2 04:40:49 np0005604790 ceph-mgr[74785]: [cephadm INFO cherrypy.error] [02/Feb/2026:09:40:49] ENGINE Serving on http://192.168.122.100:8765
Feb  2 04:40:49 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : [02/Feb/2026:09:40:49] ENGINE Serving on http://192.168.122.100:8765
Feb  2 04:40:49 np0005604790 ceph-mgr[74785]: [cephadm INFO cherrypy.error] [02/Feb/2026:09:40:49] ENGINE Bus STARTED
Feb  2 04:40:49 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : [02/Feb/2026:09:40:49] ENGINE Bus STARTED
Feb  2 04:40:49 np0005604790 podman[93406]: 2026-02-02 09:40:49.717374676 +0000 UTC m=+0.605287483 container remove 58f843fcbc287e9c6ad8f3beb71ac36b98640c9abece268f5a2042b34c4b4216 (image=quay.io/ceph/ceph:v19, name=sad_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb  2 04:40:49 np0005604790 systemd[1]: libpod-conmon-58f843fcbc287e9c6ad8f3beb71ac36b98640c9abece268f5a2042b34c4b4216.scope: Deactivated successfully.
Feb  2 04:40:49 np0005604790 ceph-mgr[74785]: [devicehealth INFO root] Check health
Feb  2 04:40:49 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Feb  2 04:40:49 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:49 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Feb  2 04:40:49 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:49 np0005604790 podman[93680]: 2026-02-02 09:40:49.985628358 +0000 UTC m=+0.053179222 container exec 19feecaa7fcd517b2bfb973fc8fcf623ad4b0956f16c53292d24fe12f4780190 (image=quay.io/ceph/haproxy:2.3, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-rgw-default-compute-0-avekxu)
Feb  2 04:40:49 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Feb  2 04:40:49 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Feb  2 04:40:50 np0005604790 podman[93680]: 2026-02-02 09:40:50.020783197 +0000 UTC m=+0.088334031 container exec_died 19feecaa7fcd517b2bfb973fc8fcf623ad4b0956f16c53292d24fe12f4780190 (image=quay.io/ceph/haproxy:2.3, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-rgw-default-compute-0-avekxu)
Feb  2 04:40:50 np0005604790 python3[93679]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d241d473-9fcb-5f74-b163-f1ca4454e7f1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   nfs cluster create cephfs --ingress --virtual-ip=192.168.122.2/24 --ingress-mode=haproxy-protocol '--placement=compute-0 compute-1 compute-2 '#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:40:50 np0005604790 podman[93716]: 2026-02-02 09:40:50.094879433 +0000 UTC m=+0.044445262 container create a095b6fbbca25f52a75cd7733ee57c34f3c60999f0a56e1f52f03bf6e90625e4 (image=quay.io/ceph/ceph:v19, name=goofy_tu, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Feb  2 04:40:50 np0005604790 systemd[1]: Started libpod-conmon-a095b6fbbca25f52a75cd7733ee57c34f3c60999f0a56e1f52f03bf6e90625e4.scope.
Feb  2 04:40:50 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:40:50 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:40:50 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:40:50.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:40:50 np0005604790 podman[93716]: 2026-02-02 09:40:50.072270772 +0000 UTC m=+0.021836591 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:40:50 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:40:50 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e16076f40e908266b1232af86d568a9fd031da6dcf6bde58633816cd908df352/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:50 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e16076f40e908266b1232af86d568a9fd031da6dcf6bde58633816cd908df352/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:50 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e16076f40e908266b1232af86d568a9fd031da6dcf6bde58633816cd908df352/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:50 np0005604790 podman[93716]: 2026-02-02 09:40:50.207878467 +0000 UTC m=+0.157444336 container init a095b6fbbca25f52a75cd7733ee57c34f3c60999f0a56e1f52f03bf6e90625e4 (image=quay.io/ceph/ceph:v19, name=goofy_tu, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 04:40:50 np0005604790 podman[93716]: 2026-02-02 09:40:50.216006189 +0000 UTC m=+0.165571999 container start a095b6fbbca25f52a75cd7733ee57c34f3c60999f0a56e1f52f03bf6e90625e4 (image=quay.io/ceph/ceph:v19, name=goofy_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 04:40:50 np0005604790 podman[93716]: 2026-02-02 09:40:50.220340933 +0000 UTC m=+0.169906752 container attach a095b6fbbca25f52a75cd7733ee57c34f3c60999f0a56e1f52f03bf6e90625e4 (image=quay.io/ceph/ceph:v19, name=goofy_tu, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb  2 04:40:50 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:40:50 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:40:50 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:40:50.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 04:40:50 np0005604790 podman[93761]: 2026-02-02 09:40:50.27302549 +0000 UTC m=+0.076547382 container exec 20ba411300d77fc005ed895ddeeef7f002c6ec8f65727ba9d8e9213579ade944 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 04:40:50 np0005604790 podman[93761]: 2026-02-02 09:40:50.285340052 +0000 UTC m=+0.088862014 container exec_died 20ba411300d77fc005ed895ddeeef7f002c6ec8f65727ba9d8e9213579ade944 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 04:40:50 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 04:40:50 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:50 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 04:40:50 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:50 np0005604790 ceph-mon[74489]: [02/Feb/2026:09:40:49] ENGINE Bus STARTING
Feb  2 04:40:50 np0005604790 ceph-mon[74489]: [02/Feb/2026:09:40:49] ENGINE Serving on https://192.168.122.100:7150
Feb  2 04:40:50 np0005604790 ceph-mon[74489]: [02/Feb/2026:09:40:49] ENGINE Client ('192.168.122.100', 55968) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Feb  2 04:40:50 np0005604790 ceph-mon[74489]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Feb  2 04:40:50 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:50 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:50 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:50 np0005604790 ceph-mon[74489]: [02/Feb/2026:09:40:49] ENGINE Serving on http://192.168.122.100:8765
Feb  2 04:40:50 np0005604790 ceph-mon[74489]: [02/Feb/2026:09:40:49] ENGINE Bus STARTED
Feb  2 04:40:50 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:50 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:50 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:50 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : mgrmap e22: compute-0.djvyfo(active, since 2s), standbys: compute-1.teascl, compute-2.gzlyac
Feb  2 04:40:50 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.14547 -' entity='client.admin' cmd=[{"prefix": "nfs cluster create", "cluster_id": "cephfs", "ingress": true, "virtual_ip": "192.168.122.2/24", "ingress_mode": "haproxy-protocol", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 04:40:50 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true} v 0)
Feb  2 04:40:50 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]: dispatch
Feb  2 04:40:50 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Feb  2 04:40:50 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:50 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Feb  2 04:40:50 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:50 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Feb  2 04:40:50 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Feb  2 04:40:50 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e50 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 04:40:50 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Feb  2 04:40:50 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:50 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Feb  2 04:40:50 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:50 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Feb  2 04:40:50 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Feb  2 04:40:51 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 5.18 deep-scrub starts
Feb  2 04:40:51 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 5.18 deep-scrub ok
Feb  2 04:40:51 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:51 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]: dispatch
Feb  2 04:40:51 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:51 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:51 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Feb  2 04:40:51 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:51 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:51 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Feb  2 04:40:51 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 04:40:51 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:51 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 04:40:51 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:51 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Feb  2 04:40:51 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Feb  2 04:40:51 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 04:40:51 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 04:40:51 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 04:40:51 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb  2 04:40:51 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Feb  2 04:40:51 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Feb  2 04:40:51 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Feb  2 04:40:51 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Feb  2 04:40:51 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Feb  2 04:40:51 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Feb  2 04:40:51 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v6: 197 pgs: 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Feb  2 04:40:51 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Feb  2 04:40:51 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]': finished
Feb  2 04:40:51 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Feb  2 04:40:51 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Feb  2 04:40:51 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"} v 0)
Feb  2 04:40:51 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]: dispatch
Feb  2 04:40:51 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 51 pg[12.0( empty local-lis/les=0/0 n=0 ec=51/51 lis/c=0/0 les/c/f=0/0/0 sis=51) [1] r=0 lpr=51 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:40:51 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/d241d473-9fcb-5f74-b163-f1ca4454e7f1/config/ceph.conf
Feb  2 04:40:51 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/d241d473-9fcb-5f74-b163-f1ca4454e7f1/config/ceph.conf
Feb  2 04:40:52 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Feb  2 04:40:52 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Feb  2 04:40:52 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/d241d473-9fcb-5f74-b163-f1ca4454e7f1/config/ceph.conf
Feb  2 04:40:52 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/d241d473-9fcb-5f74-b163-f1ca4454e7f1/config/ceph.conf
Feb  2 04:40:52 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/d241d473-9fcb-5f74-b163-f1ca4454e7f1/config/ceph.conf
Feb  2 04:40:52 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/d241d473-9fcb-5f74-b163-f1ca4454e7f1/config/ceph.conf
Feb  2 04:40:52 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:40:52 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:40:52 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:40:52.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:40:52 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:40:52 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:40:52 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:40:52.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:40:52 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Feb  2 04:40:52 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Feb  2 04:40:52 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:52 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:52 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Feb  2 04:40:52 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb  2 04:40:52 np0005604790 ceph-mon[74489]: Updating compute-0:/etc/ceph/ceph.conf
Feb  2 04:40:52 np0005604790 ceph-mon[74489]: Updating compute-1:/etc/ceph/ceph.conf
Feb  2 04:40:52 np0005604790 ceph-mon[74489]: Updating compute-2:/etc/ceph/ceph.conf
Feb  2 04:40:52 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]': finished
Feb  2 04:40:52 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]: dispatch
Feb  2 04:40:52 np0005604790 ceph-mon[74489]: Updating compute-2:/var/lib/ceph/d241d473-9fcb-5f74-b163-f1ca4454e7f1/config/ceph.conf
Feb  2 04:40:52 np0005604790 ceph-mon[74489]: Updating compute-1:/var/lib/ceph/d241d473-9fcb-5f74-b163-f1ca4454e7f1/config/ceph.conf
Feb  2 04:40:52 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : mgrmap e23: compute-0.djvyfo(active, since 4s), standbys: compute-1.teascl, compute-2.gzlyac
Feb  2 04:40:52 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Feb  2 04:40:52 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Feb  2 04:40:52 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Feb  2 04:40:52 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Feb  2 04:40:52 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Feb  2 04:40:52 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]': finished
Feb  2 04:40:52 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Feb  2 04:40:52 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Feb  2 04:40:52 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 52 pg[12.0( empty local-lis/les=51/52 n=0 ec=51/51 lis/c=0/0 les/c/f=0/0/0 sis=51) [1] r=0 lpr=51 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:40:52 np0005604790 ceph-mgr[74785]: [nfs INFO nfs.cluster] Created empty object:conf-nfs.cephfs
Feb  2 04:40:52 np0005604790 ceph-mgr[74785]: [cephadm INFO root] Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Feb  2 04:40:52 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Feb  2 04:40:52 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Feb  2 04:40:52 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:52 np0005604790 ceph-mgr[74785]: [cephadm INFO root] Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Feb  2 04:40:52 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Feb  2 04:40:52 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Feb  2 04:40:52 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:52 np0005604790 systemd[1]: libpod-a095b6fbbca25f52a75cd7733ee57c34f3c60999f0a56e1f52f03bf6e90625e4.scope: Deactivated successfully.
Feb  2 04:40:52 np0005604790 podman[93716]: 2026-02-02 09:40:52.751121325 +0000 UTC m=+2.700687154 container died a095b6fbbca25f52a75cd7733ee57c34f3c60999f0a56e1f52f03bf6e90625e4 (image=quay.io/ceph/ceph:v19, name=goofy_tu, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 04:40:52 np0005604790 systemd[1]: var-lib-containers-storage-overlay-e16076f40e908266b1232af86d568a9fd031da6dcf6bde58633816cd908df352-merged.mount: Deactivated successfully.
Feb  2 04:40:52 np0005604790 podman[93716]: 2026-02-02 09:40:52.790019892 +0000 UTC m=+2.739585681 container remove a095b6fbbca25f52a75cd7733ee57c34f3c60999f0a56e1f52f03bf6e90625e4 (image=quay.io/ceph/ceph:v19, name=goofy_tu, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Feb  2 04:40:52 np0005604790 systemd[1]: libpod-conmon-a095b6fbbca25f52a75cd7733ee57c34f3c60999f0a56e1f52f03bf6e90625e4.scope: Deactivated successfully.
Feb  2 04:40:52 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/d241d473-9fcb-5f74-b163-f1ca4454e7f1/config/ceph.client.admin.keyring
Feb  2 04:40:52 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/d241d473-9fcb-5f74-b163-f1ca4454e7f1/config/ceph.client.admin.keyring
Feb  2 04:40:53 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Feb  2 04:40:53 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Feb  2 04:40:53 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/d241d473-9fcb-5f74-b163-f1ca4454e7f1/config/ceph.client.admin.keyring
Feb  2 04:40:53 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/d241d473-9fcb-5f74-b163-f1ca4454e7f1/config/ceph.client.admin.keyring
Feb  2 04:40:53 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/d241d473-9fcb-5f74-b163-f1ca4454e7f1/config/ceph.client.admin.keyring
Feb  2 04:40:53 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/d241d473-9fcb-5f74-b163-f1ca4454e7f1/config/ceph.client.admin.keyring
Feb  2 04:40:53 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Feb  2 04:40:53 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:53 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Feb  2 04:40:53 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:53 np0005604790 ceph-mon[74489]: Updating compute-0:/var/lib/ceph/d241d473-9fcb-5f74-b163-f1ca4454e7f1/config/ceph.conf
Feb  2 04:40:53 np0005604790 ceph-mon[74489]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Feb  2 04:40:53 np0005604790 ceph-mon[74489]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Feb  2 04:40:53 np0005604790 ceph-mon[74489]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Feb  2 04:40:53 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]': finished
Feb  2 04:40:53 np0005604790 ceph-mon[74489]: Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Feb  2 04:40:53 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:53 np0005604790 ceph-mon[74489]: Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Feb  2 04:40:53 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:53 np0005604790 ceph-mon[74489]: Updating compute-2:/var/lib/ceph/d241d473-9fcb-5f74-b163-f1ca4454e7f1/config/ceph.client.admin.keyring
Feb  2 04:40:53 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:53 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:53 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Feb  2 04:40:53 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v9: 198 pgs: 1 unknown, 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Feb  2 04:40:53 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:53 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Feb  2 04:40:53 np0005604790 python3[94860]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb  2 04:40:53 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:53 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Feb  2 04:40:53 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Feb  2 04:40:53 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Feb  2 04:40:53 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 04:40:53 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:53 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 04:40:53 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:53 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 04:40:53 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:53 np0005604790 ceph-mgr[74785]: [progress INFO root] update: starting ev 926fc145-4d1b-48ff-9623-e4e666e20774 (Updating node-exporter deployment (+2 -> 3))
Feb  2 04:40:53 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Deploying daemon node-exporter.compute-1 on compute-1
Feb  2 04:40:53 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Deploying daemon node-exporter.compute-1 on compute-1
Feb  2 04:40:53 np0005604790 python3[95031]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1770025253.3929274-37463-136296282178584/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=b59eb4ee1ef760db0b0353d13f50139cad503c44 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:40:54 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 4.c scrub starts
Feb  2 04:40:54 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 4.c scrub ok
Feb  2 04:40:54 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:40:54 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:40:54 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:40:54.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:40:54 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:40:54 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:40:54 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:40:54.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 04:40:54 np0005604790 python3[95081]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d241d473-9fcb-5f74-b163-f1ca4454e7f1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:40:54 np0005604790 podman[95082]: 2026-02-02 09:40:54.522257062 +0000 UTC m=+0.061599882 container create 4c9ef46c510bb7536d0015652873bd8bc4e54f1341ee82582c033c4b23d55d6c (image=quay.io/ceph/ceph:v19, name=silly_chatterjee, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 04:40:54 np0005604790 ceph-mon[74489]: Updating compute-1:/var/lib/ceph/d241d473-9fcb-5f74-b163-f1ca4454e7f1/config/ceph.client.admin.keyring
Feb  2 04:40:54 np0005604790 ceph-mon[74489]: Updating compute-0:/var/lib/ceph/d241d473-9fcb-5f74-b163-f1ca4454e7f1/config/ceph.client.admin.keyring
Feb  2 04:40:54 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:54 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:54 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:54 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:54 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:54 np0005604790 ceph-mon[74489]: Deploying daemon node-exporter.compute-1 on compute-1
Feb  2 04:40:54 np0005604790 systemd[1]: Started libpod-conmon-4c9ef46c510bb7536d0015652873bd8bc4e54f1341ee82582c033c4b23d55d6c.scope.
Feb  2 04:40:54 np0005604790 podman[95082]: 2026-02-02 09:40:54.497600247 +0000 UTC m=+0.036943157 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:40:54 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : mgrmap e24: compute-0.djvyfo(active, since 7s), standbys: compute-1.teascl, compute-2.gzlyac
Feb  2 04:40:54 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:40:54 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe70bddb42835f2462519d423a0b95e74f9575702b462cf68998cb56587352d1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:54 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe70bddb42835f2462519d423a0b95e74f9575702b462cf68998cb56587352d1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:54 np0005604790 podman[95082]: 2026-02-02 09:40:54.630963003 +0000 UTC m=+0.170305853 container init 4c9ef46c510bb7536d0015652873bd8bc4e54f1341ee82582c033c4b23d55d6c (image=quay.io/ceph/ceph:v19, name=silly_chatterjee, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb  2 04:40:54 np0005604790 podman[95082]: 2026-02-02 09:40:54.636922099 +0000 UTC m=+0.176264929 container start 4c9ef46c510bb7536d0015652873bd8bc4e54f1341ee82582c033c4b23d55d6c (image=quay.io/ceph/ceph:v19, name=silly_chatterjee, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb  2 04:40:54 np0005604790 podman[95082]: 2026-02-02 09:40:54.641374405 +0000 UTC m=+0.180717245 container attach 4c9ef46c510bb7536d0015652873bd8bc4e54f1341ee82582c033c4b23d55d6c (image=quay.io/ceph/ceph:v19, name=silly_chatterjee, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb  2 04:40:55 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth import"} v 0)
Feb  2 04:40:55 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1616834281' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Feb  2 04:40:55 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 5.f deep-scrub starts
Feb  2 04:40:55 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1616834281' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Feb  2 04:40:55 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 5.f deep-scrub ok
Feb  2 04:40:55 np0005604790 systemd[1]: libpod-4c9ef46c510bb7536d0015652873bd8bc4e54f1341ee82582c033c4b23d55d6c.scope: Deactivated successfully.
Feb  2 04:40:55 np0005604790 podman[95082]: 2026-02-02 09:40:55.113139246 +0000 UTC m=+0.652482056 container died 4c9ef46c510bb7536d0015652873bd8bc4e54f1341ee82582c033c4b23d55d6c (image=quay.io/ceph/ceph:v19, name=silly_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb  2 04:40:55 np0005604790 systemd[1]: var-lib-containers-storage-overlay-fe70bddb42835f2462519d423a0b95e74f9575702b462cf68998cb56587352d1-merged.mount: Deactivated successfully.
Feb  2 04:40:55 np0005604790 podman[95082]: 2026-02-02 09:40:55.153439859 +0000 UTC m=+0.692782679 container remove 4c9ef46c510bb7536d0015652873bd8bc4e54f1341ee82582c033c4b23d55d6c (image=quay.io/ceph/ceph:v19, name=silly_chatterjee, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Feb  2 04:40:55 np0005604790 systemd[1]: libpod-conmon-4c9ef46c510bb7536d0015652873bd8bc4e54f1341ee82582c033c4b23d55d6c.scope: Deactivated successfully.
Feb  2 04:40:55 np0005604790 ceph-mon[74489]: from='client.? 192.168.122.100:0/1616834281' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Feb  2 04:40:55 np0005604790 ceph-mon[74489]: from='client.? 192.168.122.100:0/1616834281' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Feb  2 04:40:55 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v11: 198 pgs: 198 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 12 op/s
Feb  2 04:40:55 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 04:40:55 np0005604790 python3[95159]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d241d473-9fcb-5f74-b163-f1ca4454e7f1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:40:55 np0005604790 podman[95161]: 2026-02-02 09:40:55.98093852 +0000 UTC m=+0.047918243 container create 1dcf459a1666d6d6cd7c55476a103308245c06f0bcc1cc38791a10a43649c6d8 (image=quay.io/ceph/ceph:v19, name=quirky_shirley, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb  2 04:40:56 np0005604790 systemd[1]: Started libpod-conmon-1dcf459a1666d6d6cd7c55476a103308245c06f0bcc1cc38791a10a43649c6d8.scope.
Feb  2 04:40:56 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Feb  2 04:40:56 np0005604790 podman[95161]: 2026-02-02 09:40:55.956018539 +0000 UTC m=+0.022998312 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:40:56 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:40:56 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c32878f92081bb7ff926751c22d86e4e6e8dc1f6fc446523b2cf17b4549ec766/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:56 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c32878f92081bb7ff926751c22d86e4e6e8dc1f6fc446523b2cf17b4549ec766/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:56 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Feb  2 04:40:56 np0005604790 podman[95161]: 2026-02-02 09:40:56.071327423 +0000 UTC m=+0.138307156 container init 1dcf459a1666d6d6cd7c55476a103308245c06f0bcc1cc38791a10a43649c6d8 (image=quay.io/ceph/ceph:v19, name=quirky_shirley, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb  2 04:40:56 np0005604790 podman[95161]: 2026-02-02 09:40:56.078562872 +0000 UTC m=+0.145542595 container start 1dcf459a1666d6d6cd7c55476a103308245c06f0bcc1cc38791a10a43649c6d8 (image=quay.io/ceph/ceph:v19, name=quirky_shirley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 04:40:56 np0005604790 podman[95161]: 2026-02-02 09:40:56.082132325 +0000 UTC m=+0.149112058 container attach 1dcf459a1666d6d6cd7c55476a103308245c06f0bcc1cc38791a10a43649c6d8 (image=quay.io/ceph/ceph:v19, name=quirky_shirley, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True)
Feb  2 04:40:56 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Feb  2 04:40:56 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:56 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Feb  2 04:40:56 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:56 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Feb  2 04:40:56 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:56 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Deploying daemon node-exporter.compute-2 on compute-2
Feb  2 04:40:56 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Deploying daemon node-exporter.compute-2 on compute-2
Feb  2 04:40:56 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:40:56 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:40:56 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:40:56.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:40:56 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:40:56 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:40:56 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:40:56.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:40:56 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Feb  2 04:40:56 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3738810055' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Feb  2 04:40:56 np0005604790 quirky_shirley[95178]: 
Feb  2 04:40:56 np0005604790 quirky_shirley[95178]: {"fsid":"d241d473-9fcb-5f74-b163-f1ca4454e7f1","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":14,"quorum":[0,1,2],"quorum_names":["compute-0","compute-2","compute-1"],"quorum_age":72,"monmap":{"epoch":3,"min_mon_release_name":"squid","num_mons":3},"osdmap":{"epoch":53,"num_osds":3,"num_up_osds":3,"osd_up_since":1770025207,"num_in_osds":3,"osd_in_since":1770025189,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":197},{"state_name":"unknown","count":1}],"num_pgs":198,"num_pools":12,"num_objects":194,"data_bytes":464595,"bytes_used":88948736,"bytes_avail":64322977792,"bytes_total":64411926528,"unknown_pgs_ratio":0.0050505050458014011},"fsmap":{"epoch":2,"btime":"2026-02-02T09:40:48:656641+0000","id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":2,"modules":["cephadm","dashboard","iostat","nfs","restful"],"services":{"dashboard":"http://192.168.122.100:8443/"}},"servicemap":{"epoch":5,"modified":"2026-02-02T09:40:29.999802+0000","services":{"mgr":{"daemons":{"summary":"","compute-0.djvyfo":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-1.teascl":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2.gzlyac":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"mon":{"daemons":{"summary":"","compute-0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"rgw":{"daemons":{"summary":"","14388":{"start_epoch":5,"start_stamp":"2026-02-02T09:40:29.993726+0000","gid":14388,"addr":"192.168.122.100:0/2805705687","metadata":{"arch":"x86_64","ceph_release":"squid","ceph_version":"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)","ceph_version_short":"19.2.3","container_hostname":"compute-0","container_image":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","cpu":"AMD EPYC-Rome Processor","distro":"centos","distro_description":"CentOS Stream 9","distro_version":"9","frontend_config#0":"beast endpoint=192.168.122.100:8082","frontend_type#0":"beast","hostname":"compute-0","id":"rgw.compute-0.vltabo","kernel_description":"#1 SMP PREEMPT_DYNAMIC Thu Jan 22 12:30:22 UTC 
2026","kernel_version":"5.14.0-665.el9.x86_64","mem_swap_kb":"1048572","mem_total_kb":"7864292","num_handles":"1","os":"Linux","pid":"2","realm_id":"","realm_name":"","zone_id":"d5604b0e-c827-4596-94de-7709c44354e7","zone_name":"default","zonegroup_id":"d74d963d-58da-4c60-ad13-18a6b0033c09","zonegroup_name":"default"},"task_status":{}},"24170":{"start_epoch":5,"start_stamp":"2026-02-02T09:40:29.998087+0000","gid":24170,"addr":"192.168.122.101:0/1861488831","metadata":{"arch":"x86_64","ceph_release":"squid","ceph_version":"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)","ceph_version_short":"19.2.3","container_hostname":"compute-1","container_image":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","cpu":"AMD EPYC-Rome Processor","distro":"centos","distro_description":"CentOS Stream 9","distro_version":"9","frontend_config#0":"beast endpoint=192.168.122.101:8082","frontend_type#0":"beast","hostname":"compute-1","id":"rgw.compute-1.ezjvcf","kernel_description":"#1 SMP PREEMPT_DYNAMIC Thu Jan 22 12:30:22 UTC 2026","kernel_version":"5.14.0-665.el9.x86_64","mem_swap_kb":"1048572","mem_total_kb":"7864292","num_handles":"1","os":"Linux","pid":"2","realm_id":"","realm_name":"","zone_id":"d5604b0e-c827-4596-94de-7709c44354e7","zone_name":"default","zonegroup_id":"d74d963d-58da-4c60-ad13-18a6b0033c09","zonegroup_name":"default"},"task_status":{}},"24175":{"start_epoch":5,"start_stamp":"2026-02-02T09:40:29.999366+0000","gid":24175,"addr":"192.168.122.102:0/1995934692","metadata":{"arch":"x86_64","ceph_release":"squid","ceph_version":"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)","ceph_version_short":"19.2.3","container_hostname":"compute-2","container_image":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","cpu":"AMD EPYC-Rome Processor","distro":"centos","distro_description":"CentOS Stream 9","distro_version":"9","frontend_config#0":"beast endpoint=192.168.122.102:8082","frontend_type#0":"beast","hostname":"compute-2","id":"rgw.compute-2.zjyufj","kernel_description":"#1 SMP PREEMPT_DYNAMIC Thu Jan 22 12:30:22 UTC 2026","kernel_version":"5.14.0-665.el9.x86_64","mem_swap_kb":"1048572","mem_total_kb":"7864300","num_handles":"1","os":"Linux","pid":"2","realm_id":"","realm_name":"","zone_id":"d5604b0e-c827-4596-94de-7709c44354e7","zone_name":"default","zonegroup_id":"d74d963d-58da-4c60-ad13-18a6b0033c09","zonegroup_name":"default"},"task_status":{}}}}}},"progress_events":{}}
Feb  2 04:40:56 np0005604790 systemd[1]: libpod-1dcf459a1666d6d6cd7c55476a103308245c06f0bcc1cc38791a10a43649c6d8.scope: Deactivated successfully.
Feb  2 04:40:56 np0005604790 podman[95161]: 2026-02-02 09:40:56.532636991 +0000 UTC m=+0.599616684 container died 1dcf459a1666d6d6cd7c55476a103308245c06f0bcc1cc38791a10a43649c6d8 (image=quay.io/ceph/ceph:v19, name=quirky_shirley, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:40:56 np0005604790 systemd[1]: var-lib-containers-storage-overlay-c32878f92081bb7ff926751c22d86e4e6e8dc1f6fc446523b2cf17b4549ec766-merged.mount: Deactivated successfully.
Feb  2 04:40:56 np0005604790 podman[95161]: 2026-02-02 09:40:56.569737321 +0000 UTC m=+0.636717004 container remove 1dcf459a1666d6d6cd7c55476a103308245c06f0bcc1cc38791a10a43649c6d8 (image=quay.io/ceph/ceph:v19, name=quirky_shirley, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb  2 04:40:56 np0005604790 systemd[1]: libpod-conmon-1dcf459a1666d6d6cd7c55476a103308245c06f0bcc1cc38791a10a43649c6d8.scope: Deactivated successfully.
Feb  2 04:40:56 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:56 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:56 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:56 np0005604790 python3[95239]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d241d473-9fcb-5f74-b163-f1ca4454e7f1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
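[editor's note] The Ansible task above shells out to podman to run `ceph mon dump --format json` inside the quay.io/ceph/ceph:v19 image, passing the cluster FSID and the admin keyring. A minimal Python sketch of the same invocation via subprocess, using only the image, fsid, and paths already visible in the logged command line (error handling elided):

```python
# Minimal sketch of the containerized 'ceph mon dump' the task above runs;
# image, fsid, and paths are copied from the logged command line.
import json
import subprocess

cmd = [
    "podman", "run", "--rm", "--net=host", "--ipc=host",
    "--volume", "/etc/ceph:/etc/ceph:z",
    "--entrypoint", "ceph", "quay.io/ceph/ceph:v19",
    "--fsid", "d241d473-9fcb-5f74-b163-f1ca4454e7f1",
    "-c", "/etc/ceph/ceph.conf",
    "-k", "/etc/ceph/ceph.client.admin.keyring",
    "mon", "dump", "--format", "json",
]
out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
monmap = json.loads(out)  # status chatter goes to stderr; the JSON is on stdout
print(monmap["epoch"], monmap["min_mon_release_name"])
```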
Feb  2 04:40:56 np0005604790 podman[95240]: 2026-02-02 09:40:56.945561345 +0000 UTC m=+0.054653720 container create 2b90f8f17322e61125f02afc0860d1f0311f8d44d4cd539fc45d38f10c343771 (image=quay.io/ceph/ceph:v19, name=angry_albattani, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Feb  2 04:40:56 np0005604790 systemd[1]: Started libpod-conmon-2b90f8f17322e61125f02afc0860d1f0311f8d44d4cd539fc45d38f10c343771.scope.
Feb  2 04:40:56 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:40:56 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/319d1ab9bb62b0fd0174706c8751caba1ef5479ad0fa57f1f7b93725a0d2c270/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:56 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/319d1ab9bb62b0fd0174706c8751caba1ef5479ad0fa57f1f7b93725a0d2c270/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:57 np0005604790 podman[95240]: 2026-02-02 09:40:57.013638675 +0000 UTC m=+0.122731070 container init 2b90f8f17322e61125f02afc0860d1f0311f8d44d4cd539fc45d38f10c343771 (image=quay.io/ceph/ceph:v19, name=angry_albattani, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 04:40:57 np0005604790 podman[95240]: 2026-02-02 09:40:57.017590638 +0000 UTC m=+0.126683053 container start 2b90f8f17322e61125f02afc0860d1f0311f8d44d4cd539fc45d38f10c343771 (image=quay.io/ceph/ceph:v19, name=angry_albattani, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 04:40:57 np0005604790 podman[95240]: 2026-02-02 09:40:56.926941328 +0000 UTC m=+0.036033713 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:40:57 np0005604790 podman[95240]: 2026-02-02 09:40:57.021149751 +0000 UTC m=+0.130242176 container attach 2b90f8f17322e61125f02afc0860d1f0311f8d44d4cd539fc45d38f10c343771 (image=quay.io/ceph/ceph:v19, name=angry_albattani, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:40:57 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 5.1c deep-scrub starts
Feb  2 04:40:57 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 5.1c deep-scrub ok
Feb  2 04:40:57 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 04:40:57 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4097333223' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Feb  2 04:40:57 np0005604790 angry_albattani[95256]: 
Feb  2 04:40:57 np0005604790 angry_albattani[95256]: {"epoch":3,"fsid":"d241d473-9fcb-5f74-b163-f1ca4454e7f1","modified":"2026-02-02T09:39:39.266649Z","created":"2026-02-02T09:37:41.899871Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"compute-2","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.102:3300","nonce":0},{"type":"v1","addr":"192.168.122.102:6789","nonce":0}]},"addr":"192.168.122.102:6789/0","public_addr":"192.168.122.102:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":2,"name":"compute-1","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.101:3300","nonce":0},{"type":"v1","addr":"192.168.122.101:6789","nonce":0}]},"addr":"192.168.122.101:6789/0","public_addr":"192.168.122.101:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0,1,2]}
Feb  2 04:40:57 np0005604790 angry_albattani[95256]: dumped monmap epoch 3
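[editor's note] The monmap dump above shows three mons in quorum, each advertising a msgr v2 endpoint on port 3300 and a legacy v1 endpoint on port 6789. A minimal sketch, assuming the JSON has been saved to a local file (the filename is illustrative), that walks the addrvec entries:

```python
# Minimal sketch: list each monitor's rank, name, and v2/v1 endpoints from
# the monmap JSON above; 'monmap.json' is an illustrative local copy of it.
import json

with open("monmap.json") as f:
    monmap = json.load(f)

print(f"monmap epoch {monmap['epoch']}, quorum {monmap['quorum']}")
for mon in monmap["mons"]:
    # public_addrs.addrvec holds one entry per protocol (v2 then v1).
    addrs = ", ".join(f"{a['type']}:{a['addr']}" for a in mon["public_addrs"]["addrvec"])
    print(f"rank {mon['rank']} {mon['name']}: {addrs}")
```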
Feb  2 04:40:57 np0005604790 systemd[1]: libpod-2b90f8f17322e61125f02afc0860d1f0311f8d44d4cd539fc45d38f10c343771.scope: Deactivated successfully.
Feb  2 04:40:57 np0005604790 podman[95240]: 2026-02-02 09:40:57.457790615 +0000 UTC m=+0.566883000 container died 2b90f8f17322e61125f02afc0860d1f0311f8d44d4cd539fc45d38f10c343771 (image=quay.io/ceph/ceph:v19, name=angry_albattani, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb  2 04:40:57 np0005604790 systemd[1]: var-lib-containers-storage-overlay-319d1ab9bb62b0fd0174706c8751caba1ef5479ad0fa57f1f7b93725a0d2c270-merged.mount: Deactivated successfully.
Feb  2 04:40:57 np0005604790 podman[95240]: 2026-02-02 09:40:57.489566355 +0000 UTC m=+0.598658730 container remove 2b90f8f17322e61125f02afc0860d1f0311f8d44d4cd539fc45d38f10c343771 (image=quay.io/ceph/ceph:v19, name=angry_albattani, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:40:57 np0005604790 systemd[1]: libpod-conmon-2b90f8f17322e61125f02afc0860d1f0311f8d44d4cd539fc45d38f10c343771.scope: Deactivated successfully.
Feb  2 04:40:57 np0005604790 ceph-mon[74489]: Deploying daemon node-exporter.compute-2 on compute-2
Feb  2 04:40:57 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v12: 198 pgs: 198 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 12 op/s
Feb  2 04:40:58 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 5.7 deep-scrub starts
Feb  2 04:40:58 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 5.7 deep-scrub ok
Feb  2 04:40:58 np0005604790 python3[95319]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d241d473-9fcb-5f74-b163-f1ca4454e7f1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:40:58 np0005604790 podman[95320]: 2026-02-02 09:40:58.162153477 +0000 UTC m=+0.042169824 container create f8016a7a94233c8261b9d9e25f9d9ba366c2163bd1ccbc903f5330b02772b92f (image=quay.io/ceph/ceph:v19, name=eloquent_proskuriakova, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Feb  2 04:40:58 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:40:58 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:40:58 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:40:58.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:40:58 np0005604790 systemd[1]: Started libpod-conmon-f8016a7a94233c8261b9d9e25f9d9ba366c2163bd1ccbc903f5330b02772b92f.scope.
Feb  2 04:40:58 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:40:58 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/480455d56c529a98607913237ea314af805f561bebb5565784adeece49ec8d5a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:58 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/480455d56c529a98607913237ea314af805f561bebb5565784adeece49ec8d5a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:58 np0005604790 podman[95320]: 2026-02-02 09:40:58.237945518 +0000 UTC m=+0.117961875 container init f8016a7a94233c8261b9d9e25f9d9ba366c2163bd1ccbc903f5330b02772b92f (image=quay.io/ceph/ceph:v19, name=eloquent_proskuriakova, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 04:40:58 np0005604790 podman[95320]: 2026-02-02 09:40:58.143788607 +0000 UTC m=+0.023804964 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:40:58 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:40:58 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:40:58 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:40:58.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:40:58 np0005604790 podman[95320]: 2026-02-02 09:40:58.242112647 +0000 UTC m=+0.122129024 container start f8016a7a94233c8261b9d9e25f9d9ba366c2163bd1ccbc903f5330b02772b92f (image=quay.io/ceph/ceph:v19, name=eloquent_proskuriakova, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:40:58 np0005604790 podman[95320]: 2026-02-02 09:40:58.245374552 +0000 UTC m=+0.125390919 container attach f8016a7a94233c8261b9d9e25f9d9ba366c2163bd1ccbc903f5330b02772b92f (image=quay.io/ceph/ceph:v19, name=eloquent_proskuriakova, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:40:58 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Feb  2 04:40:58 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:58 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Feb  2 04:40:58 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:58 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Feb  2 04:40:58 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:58 np0005604790 ceph-mgr[74785]: [progress INFO root] complete: finished ev 926fc145-4d1b-48ff-9623-e4e666e20774 (Updating node-exporter deployment (+2 -> 3))
Feb  2 04:40:58 np0005604790 ceph-mgr[74785]: [progress INFO root] Completed event 926fc145-4d1b-48ff-9623-e4e666e20774 (Updating node-exporter deployment (+2 -> 3)) in 5 seconds
Feb  2 04:40:58 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Feb  2 04:40:58 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:58 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 04:40:58 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb  2 04:40:58 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 04:40:58 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb  2 04:40:58 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 04:40:58 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 04:40:58 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:58 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:58 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:58 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:40:58 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb  2 04:40:58 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0)
Feb  2 04:40:58 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2118971521' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Feb  2 04:40:58 np0005604790 eloquent_proskuriakova[95336]: [client.openstack]
Feb  2 04:40:58 np0005604790 eloquent_proskuriakova[95336]: #011key = AQBGcIBpAAAAABAA2I2uJAQ9+FTGDrMvmIgfmg==
Feb  2 04:40:58 np0005604790 eloquent_proskuriakova[95336]: #011caps mgr = "allow *"
Feb  2 04:40:58 np0005604790 eloquent_proskuriakova[95336]: #011caps mon = "profile rbd"
Feb  2 04:40:58 np0005604790 eloquent_proskuriakova[95336]: #011caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
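[editor's note] In the keyring output above, `#011` is rsyslog's octal escape for the TAB control character (ASCII 9), so the real `auth get client.openstack` output indents each key/caps line with a tab. A minimal sketch of undoing that escaping; the hard-coded sample lines are copied from the log and stand in for reading the log itself:

```python
# Minimal sketch: undo rsyslog's '#ooo' octal escaping of control characters
# ('#011' = TAB) to reconstruct the keyring text logged above. The sample
# lines are copied from the log; real use would read them from the log file.
import re

logged = [
    "[client.openstack]",
    "#011key = AQBGcIBpAAAAABAA2I2uJAQ9+FTGDrMvmIgfmg==",
    '#011caps mon = "profile rbd"',
]

def unescape(line: str) -> str:
    # Replace each '#ooo' (three octal digits) with the byte it encodes.
    return re.sub(r"#([0-7]{3})", lambda m: chr(int(m.group(1), 8)), line)

for line in logged:
    print(unescape(line))
```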
Feb  2 04:40:58 np0005604790 systemd[1]: libpod-f8016a7a94233c8261b9d9e25f9d9ba366c2163bd1ccbc903f5330b02772b92f.scope: Deactivated successfully.
Feb  2 04:40:58 np0005604790 podman[95320]: 2026-02-02 09:40:58.699562463 +0000 UTC m=+0.579578810 container died f8016a7a94233c8261b9d9e25f9d9ba366c2163bd1ccbc903f5330b02772b92f (image=quay.io/ceph/ceph:v19, name=eloquent_proskuriakova, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb  2 04:40:58 np0005604790 systemd[1]: var-lib-containers-storage-overlay-480455d56c529a98607913237ea314af805f561bebb5565784adeece49ec8d5a-merged.mount: Deactivated successfully.
Feb  2 04:40:58 np0005604790 podman[95320]: 2026-02-02 09:40:58.74152517 +0000 UTC m=+0.621541507 container remove f8016a7a94233c8261b9d9e25f9d9ba366c2163bd1ccbc903f5330b02772b92f (image=quay.io/ceph/ceph:v19, name=eloquent_proskuriakova, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb  2 04:40:58 np0005604790 systemd[1]: libpod-conmon-f8016a7a94233c8261b9d9e25f9d9ba366c2163bd1ccbc903f5330b02772b92f.scope: Deactivated successfully.
Feb  2 04:40:58 np0005604790 podman[95460]: 2026-02-02 09:40:58.846776971 +0000 UTC m=+0.036223748 container create 530544e7351178a299183f348e714a243db336917f98d85ced961b5f53a4a546 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_stonebraker, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb  2 04:40:58 np0005604790 systemd[1]: Started libpod-conmon-530544e7351178a299183f348e714a243db336917f98d85ced961b5f53a4a546.scope.
Feb  2 04:40:58 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:40:58 np0005604790 podman[95460]: 2026-02-02 09:40:58.916531125 +0000 UTC m=+0.105977942 container init 530544e7351178a299183f348e714a243db336917f98d85ced961b5f53a4a546 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_stonebraker, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Feb  2 04:40:58 np0005604790 podman[95460]: 2026-02-02 09:40:58.923807855 +0000 UTC m=+0.113254652 container start 530544e7351178a299183f348e714a243db336917f98d85ced961b5f53a4a546 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_stonebraker, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 04:40:58 np0005604790 unruffled_stonebraker[95476]: 167 167
Feb  2 04:40:58 np0005604790 systemd[1]: libpod-530544e7351178a299183f348e714a243db336917f98d85ced961b5f53a4a546.scope: Deactivated successfully.
Feb  2 04:40:58 np0005604790 podman[95460]: 2026-02-02 09:40:58.830858955 +0000 UTC m=+0.020305742 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:40:58 np0005604790 podman[95460]: 2026-02-02 09:40:58.927944373 +0000 UTC m=+0.117391150 container attach 530544e7351178a299183f348e714a243db336917f98d85ced961b5f53a4a546 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_stonebraker, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Feb  2 04:40:58 np0005604790 podman[95460]: 2026-02-02 09:40:58.929185675 +0000 UTC m=+0.118632482 container died 530544e7351178a299183f348e714a243db336917f98d85ced961b5f53a4a546 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_stonebraker, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 04:40:58 np0005604790 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb  2 04:40:58 np0005604790 systemd[1]: var-lib-containers-storage-overlay-161a466f7faa3ae714aa65bd4a935aa93988a54db1c504eac139e2c0daeb1109-merged.mount: Deactivated successfully.
Feb  2 04:40:58 np0005604790 podman[95460]: 2026-02-02 09:40:58.968727089 +0000 UTC m=+0.158173876 container remove 530544e7351178a299183f348e714a243db336917f98d85ced961b5f53a4a546 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_stonebraker, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:40:58 np0005604790 systemd[1]: libpod-conmon-530544e7351178a299183f348e714a243db336917f98d85ced961b5f53a4a546.scope: Deactivated successfully.
Feb  2 04:40:59 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 5.1 deep-scrub starts
Feb  2 04:40:59 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 5.1 deep-scrub ok
Feb  2 04:40:59 np0005604790 podman[95501]: 2026-02-02 09:40:59.111300566 +0000 UTC m=+0.043364555 container create 529451ddf87ff7328e38f53461190a25c6a094f14834eccec302f75c21b5bb55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Feb  2 04:40:59 np0005604790 systemd[1]: Started libpod-conmon-529451ddf87ff7328e38f53461190a25c6a094f14834eccec302f75c21b5bb55.scope.
Feb  2 04:40:59 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:40:59 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39736b93cf186b160cb81214311ee517af212cb4a143debc24f125bb835701cd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:59 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39736b93cf186b160cb81214311ee517af212cb4a143debc24f125bb835701cd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:59 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39736b93cf186b160cb81214311ee517af212cb4a143debc24f125bb835701cd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:59 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39736b93cf186b160cb81214311ee517af212cb4a143debc24f125bb835701cd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:59 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39736b93cf186b160cb81214311ee517af212cb4a143debc24f125bb835701cd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 04:40:59 np0005604790 podman[95501]: 2026-02-02 09:40:59.092823113 +0000 UTC m=+0.024887142 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:40:59 np0005604790 podman[95501]: 2026-02-02 09:40:59.206412472 +0000 UTC m=+0.138476561 container init 529451ddf87ff7328e38f53461190a25c6a094f14834eccec302f75c21b5bb55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True)
Feb  2 04:40:59 np0005604790 podman[95501]: 2026-02-02 09:40:59.219973316 +0000 UTC m=+0.152037305 container start 529451ddf87ff7328e38f53461190a25c6a094f14834eccec302f75c21b5bb55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_matsumoto, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb  2 04:40:59 np0005604790 podman[95501]: 2026-02-02 09:40:59.224349891 +0000 UTC m=+0.156413920 container attach 529451ddf87ff7328e38f53461190a25c6a094f14834eccec302f75c21b5bb55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_matsumoto, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:40:59 np0005604790 crazy_matsumoto[95517]: --> passed data devices: 0 physical, 1 LVM
Feb  2 04:40:59 np0005604790 crazy_matsumoto[95517]: --> All data devices are unavailable
Feb  2 04:40:59 np0005604790 systemd[1]: libpod-529451ddf87ff7328e38f53461190a25c6a094f14834eccec302f75c21b5bb55.scope: Deactivated successfully.
Feb  2 04:40:59 np0005604790 podman[95501]: 2026-02-02 09:40:59.627100119 +0000 UTC m=+0.559164108 container died 529451ddf87ff7328e38f53461190a25c6a094f14834eccec302f75c21b5bb55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 04:40:59 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v13: 198 pgs: 198 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 9 op/s
Feb  2 04:40:59 np0005604790 systemd[1]: var-lib-containers-storage-overlay-39736b93cf186b160cb81214311ee517af212cb4a143debc24f125bb835701cd-merged.mount: Deactivated successfully.
Feb  2 04:40:59 np0005604790 ceph-mon[74489]: from='client.? 192.168.122.100:0/2118971521' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Feb  2 04:40:59 np0005604790 podman[95501]: 2026-02-02 09:40:59.677787044 +0000 UTC m=+0.609851073 container remove 529451ddf87ff7328e38f53461190a25c6a094f14834eccec302f75c21b5bb55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_matsumoto, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Feb  2 04:40:59 np0005604790 systemd[1]: libpod-conmon-529451ddf87ff7328e38f53461190a25c6a094f14834eccec302f75c21b5bb55.scope: Deactivated successfully.
Feb  2 04:41:00 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Feb  2 04:41:00 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Feb  2 04:41:00 np0005604790 ansible-async_wrapper.py[95745]: Invoked with j49647706260 30 /home/zuul/.ansible/tmp/ansible-tmp-1770025259.7281485-37535-200667366077026/AnsiballZ_command.py _
Feb  2 04:41:00 np0005604790 ansible-async_wrapper.py[95786]: Starting module and watcher
Feb  2 04:41:00 np0005604790 ansible-async_wrapper.py[95786]: Start watching 95787 (30)
Feb  2 04:41:00 np0005604790 ansible-async_wrapper.py[95787]: Start module (95787)
Feb  2 04:41:00 np0005604790 ansible-async_wrapper.py[95745]: Return async_wrapper task started.
Feb  2 04:41:00 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:41:00 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:41:00 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:41:00.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:41:00 np0005604790 podman[95790]: 2026-02-02 09:41:00.211969197 +0000 UTC m=+0.041036084 container create 1e17e0816b1d474d92358469f0a9e6cc58c7b3c7fe21e3013a2bdf2f1fd3741a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_yalow, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325)
Feb  2 04:41:00 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:41:00 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:41:00 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:41:00.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 04:41:00 np0005604790 systemd[1]: Started libpod-conmon-1e17e0816b1d474d92358469f0a9e6cc58c7b3c7fe21e3013a2bdf2f1fd3741a.scope.
Feb  2 04:41:00 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:41:00 np0005604790 podman[95790]: 2026-02-02 09:41:00.269932442 +0000 UTC m=+0.098999289 container init 1e17e0816b1d474d92358469f0a9e6cc58c7b3c7fe21e3013a2bdf2f1fd3741a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_yalow, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb  2 04:41:00 np0005604790 podman[95790]: 2026-02-02 09:41:00.277176901 +0000 UTC m=+0.106243748 container start 1e17e0816b1d474d92358469f0a9e6cc58c7b3c7fe21e3013a2bdf2f1fd3741a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_yalow, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Feb  2 04:41:00 np0005604790 podman[95790]: 2026-02-02 09:41:00.280716764 +0000 UTC m=+0.109783621 container attach 1e17e0816b1d474d92358469f0a9e6cc58c7b3c7fe21e3013a2bdf2f1fd3741a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_yalow, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb  2 04:41:00 np0005604790 competent_yalow[95806]: 167 167
Feb  2 04:41:00 np0005604790 systemd[1]: libpod-1e17e0816b1d474d92358469f0a9e6cc58c7b3c7fe21e3013a2bdf2f1fd3741a.scope: Deactivated successfully.
Feb  2 04:41:00 np0005604790 conmon[95806]: conmon 1e17e0816b1d474d9235 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1e17e0816b1d474d92358469f0a9e6cc58c7b3c7fe21e3013a2bdf2f1fd3741a.scope/container/memory.events
Feb  2 04:41:00 np0005604790 podman[95790]: 2026-02-02 09:41:00.282601803 +0000 UTC m=+0.111668640 container died 1e17e0816b1d474d92358469f0a9e6cc58c7b3c7fe21e3013a2bdf2f1fd3741a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Feb  2 04:41:00 np0005604790 podman[95790]: 2026-02-02 09:41:00.193832723 +0000 UTC m=+0.022899590 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:41:00 np0005604790 systemd[1]: var-lib-containers-storage-overlay-a18bc2324577b9a185ab07e2b8c615da7c71872ca58d45da79b59df5f665ee56-merged.mount: Deactivated successfully.
Feb  2 04:41:00 np0005604790 python3[95789]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d241d473-9fcb-5f74-b163-f1ca4454e7f1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:41:00 np0005604790 podman[95790]: 2026-02-02 09:41:00.315049321 +0000 UTC m=+0.144116178 container remove 1e17e0816b1d474d92358469f0a9e6cc58c7b3c7fe21e3013a2bdf2f1fd3741a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_yalow, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:41:00 np0005604790 systemd[1]: libpod-conmon-1e17e0816b1d474d92358469f0a9e6cc58c7b3c7fe21e3013a2bdf2f1fd3741a.scope: Deactivated successfully.
Feb  2 04:41:00 np0005604790 podman[95818]: 2026-02-02 09:41:00.364614077 +0000 UTC m=+0.045475450 container create 8331bd7573d32dc7635127b7f75a0d9d7a8f026109a732b17847f40a2a611804 (image=quay.io/ceph/ceph:v19, name=clever_feistel, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:41:00 np0005604790 systemd[1]: Started libpod-conmon-8331bd7573d32dc7635127b7f75a0d9d7a8f026109a732b17847f40a2a611804.scope.
Feb  2 04:41:00 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:41:00 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87727f393d976c9ecc91336fbabb89eb0ad0ce5fcb5dc3257d532cbaa985025e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:41:00 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87727f393d976c9ecc91336fbabb89eb0ad0ce5fcb5dc3257d532cbaa985025e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:41:00 np0005604790 podman[95818]: 2026-02-02 09:41:00.423982059 +0000 UTC m=+0.104843472 container init 8331bd7573d32dc7635127b7f75a0d9d7a8f026109a732b17847f40a2a611804 (image=quay.io/ceph/ceph:v19, name=clever_feistel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:41:00 np0005604790 podman[95818]: 2026-02-02 09:41:00.430611052 +0000 UTC m=+0.111472435 container start 8331bd7573d32dc7635127b7f75a0d9d7a8f026109a732b17847f40a2a611804 (image=quay.io/ceph/ceph:v19, name=clever_feistel, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Feb  2 04:41:00 np0005604790 podman[95818]: 2026-02-02 09:41:00.435074219 +0000 UTC m=+0.115935632 container attach 8331bd7573d32dc7635127b7f75a0d9d7a8f026109a732b17847f40a2a611804 (image=quay.io/ceph/ceph:v19, name=clever_feistel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:41:00 np0005604790 podman[95818]: 2026-02-02 09:41:00.34408117 +0000 UTC m=+0.024942603 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:41:00 np0005604790 podman[95847]: 2026-02-02 09:41:00.474615522 +0000 UTC m=+0.057842893 container create f82df2f48867f2e251850392a65d7d7eb06e6786a07ee221d93477a90781d8c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_volhard, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Feb  2 04:41:00 np0005604790 systemd[1]: Started libpod-conmon-f82df2f48867f2e251850392a65d7d7eb06e6786a07ee221d93477a90781d8c4.scope.
Feb  2 04:41:00 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:41:00 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90a1a0bd78e1534705a1c8cd853d35c3bdb878424638f6e77ba4135d4f307276/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 04:41:00 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90a1a0bd78e1534705a1c8cd853d35c3bdb878424638f6e77ba4135d4f307276/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:41:00 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90a1a0bd78e1534705a1c8cd853d35c3bdb878424638f6e77ba4135d4f307276/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:41:00 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90a1a0bd78e1534705a1c8cd853d35c3bdb878424638f6e77ba4135d4f307276/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 04:41:00 np0005604790 podman[95847]: 2026-02-02 09:41:00.44622473 +0000 UTC m=+0.029452171 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:41:00 np0005604790 podman[95847]: 2026-02-02 09:41:00.548052602 +0000 UTC m=+0.131279983 container init f82df2f48867f2e251850392a65d7d7eb06e6786a07ee221d93477a90781d8c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_volhard, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb  2 04:41:00 np0005604790 podman[95847]: 2026-02-02 09:41:00.558752422 +0000 UTC m=+0.141979783 container start f82df2f48867f2e251850392a65d7d7eb06e6786a07ee221d93477a90781d8c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_volhard, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Feb  2 04:41:00 np0005604790 podman[95847]: 2026-02-02 09:41:00.563209298 +0000 UTC m=+0.146436689 container attach f82df2f48867f2e251850392a65d7d7eb06e6786a07ee221d93477a90781d8c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_volhard, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb  2 04:41:00 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.14583 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb  2 04:41:00 np0005604790 clever_feistel[95841]: 
Feb  2 04:41:00 np0005604790 clever_feistel[95841]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Feb  2 04:41:00 np0005604790 systemd[1]: libpod-8331bd7573d32dc7635127b7f75a0d9d7a8f026109a732b17847f40a2a611804.scope: Deactivated successfully.
Feb  2 04:41:00 np0005604790 podman[95818]: 2026-02-02 09:41:00.796833445 +0000 UTC m=+0.477694848 container died 8331bd7573d32dc7635127b7f75a0d9d7a8f026109a732b17847f40a2a611804 (image=quay.io/ceph/ceph:v19, name=clever_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb  2 04:41:00 np0005604790 agitated_volhard[95865]: {
Feb  2 04:41:00 np0005604790 agitated_volhard[95865]:    "1": [
Feb  2 04:41:00 np0005604790 agitated_volhard[95865]:        {
Feb  2 04:41:00 np0005604790 agitated_volhard[95865]:            "devices": [
Feb  2 04:41:00 np0005604790 agitated_volhard[95865]:                "/dev/loop3"
Feb  2 04:41:00 np0005604790 agitated_volhard[95865]:            ],
Feb  2 04:41:00 np0005604790 agitated_volhard[95865]:            "lv_name": "ceph_lv0",
Feb  2 04:41:00 np0005604790 agitated_volhard[95865]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 04:41:00 np0005604790 agitated_volhard[95865]:            "lv_size": "21470642176",
Feb  2 04:41:00 np0005604790 agitated_volhard[95865]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d241d473-9fcb-5f74-b163-f1ca4454e7f1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fabfc705-a3af-416c-81a4-3fd4d777fb5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 04:41:00 np0005604790 agitated_volhard[95865]:            "lv_uuid": "lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a",
Feb  2 04:41:00 np0005604790 agitated_volhard[95865]:            "name": "ceph_lv0",
Feb  2 04:41:00 np0005604790 agitated_volhard[95865]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 04:41:00 np0005604790 agitated_volhard[95865]:            "tags": {
Feb  2 04:41:00 np0005604790 agitated_volhard[95865]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 04:41:00 np0005604790 agitated_volhard[95865]:                "ceph.block_uuid": "lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a",
Feb  2 04:41:00 np0005604790 agitated_volhard[95865]:                "ceph.cephx_lockbox_secret": "",
Feb  2 04:41:00 np0005604790 agitated_volhard[95865]:                "ceph.cluster_fsid": "d241d473-9fcb-5f74-b163-f1ca4454e7f1",
Feb  2 04:41:00 np0005604790 agitated_volhard[95865]:                "ceph.cluster_name": "ceph",
Feb  2 04:41:00 np0005604790 agitated_volhard[95865]:                "ceph.crush_device_class": "",
Feb  2 04:41:00 np0005604790 agitated_volhard[95865]:                "ceph.encrypted": "0",
Feb  2 04:41:00 np0005604790 agitated_volhard[95865]:                "ceph.osd_fsid": "fabfc705-a3af-416c-81a4-3fd4d777fb5f",
Feb  2 04:41:00 np0005604790 agitated_volhard[95865]:                "ceph.osd_id": "1",
Feb  2 04:41:00 np0005604790 agitated_volhard[95865]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 04:41:00 np0005604790 agitated_volhard[95865]:                "ceph.type": "block",
Feb  2 04:41:00 np0005604790 agitated_volhard[95865]:                "ceph.vdo": "0",
Feb  2 04:41:00 np0005604790 agitated_volhard[95865]:                "ceph.with_tpm": "0"
Feb  2 04:41:00 np0005604790 agitated_volhard[95865]:            },
Feb  2 04:41:00 np0005604790 agitated_volhard[95865]:            "type": "block",
Feb  2 04:41:00 np0005604790 agitated_volhard[95865]:            "vg_name": "ceph_vg0"
Feb  2 04:41:00 np0005604790 agitated_volhard[95865]:        }
Feb  2 04:41:00 np0005604790 agitated_volhard[95865]:    ]
Feb  2 04:41:00 np0005604790 agitated_volhard[95865]: }
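
The JSON block emitted by agitated_volhard matches the shape of a `ceph-volume lvm list --format json` report (an inference from its shape; the invoking command is not shown in the log): a map of OSD id to its logical volumes, with metadata duplicated between the raw `lv_tags` string and the parsed `tags` object. A hedged sketch that reduces such a report to the essentials:

    import json

    def osds_from_report(report: str) -> dict:
        """Map OSD id -> (lv_path, osd_fsid, devices) from a ceph-volume style report."""
        out = {}
        for osd_id, lvs in json.loads(report).items():
            for lv in lvs:
                tags = lv.get("tags", {})
                out[osd_id] = (lv.get("lv_path"),
                               tags.get("ceph.osd_fsid"),
                               lv.get("devices", []))
        return out

    # With the report above this yields:
    # {'1': ('/dev/ceph_vg0/ceph_lv0', 'fabfc705-a3af-416c-81a4-3fd4d777fb5f', ['/dev/loop3'])}
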
Feb  2 04:41:00 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 04:41:00 np0005604790 systemd[1]: var-lib-containers-storage-overlay-87727f393d976c9ecc91336fbabb89eb0ad0ce5fcb5dc3257d532cbaa985025e-merged.mount: Deactivated successfully.
Feb  2 04:41:00 np0005604790 systemd[1]: libpod-f82df2f48867f2e251850392a65d7d7eb06e6786a07ee221d93477a90781d8c4.scope: Deactivated successfully.
Feb  2 04:41:00 np0005604790 podman[95818]: 2026-02-02 09:41:00.880547283 +0000 UTC m=+0.561408666 container remove 8331bd7573d32dc7635127b7f75a0d9d7a8f026109a732b17847f40a2a611804 (image=quay.io/ceph/ceph:v19, name=clever_feistel, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb  2 04:41:00 np0005604790 podman[95847]: 2026-02-02 09:41:00.883240374 +0000 UTC m=+0.466467735 container died f82df2f48867f2e251850392a65d7d7eb06e6786a07ee221d93477a90781d8c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_volhard, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:41:00 np0005604790 ansible-async_wrapper.py[95787]: Module complete (95787)
Feb  2 04:41:00 np0005604790 podman[95847]: 2026-02-02 09:41:00.976659366 +0000 UTC m=+0.559886767 container remove f82df2f48867f2e251850392a65d7d7eb06e6786a07ee221d93477a90781d8c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_volhard, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Feb  2 04:41:00 np0005604790 systemd[1]: libpod-conmon-f82df2f48867f2e251850392a65d7d7eb06e6786a07ee221d93477a90781d8c4.scope: Deactivated successfully.
Feb  2 04:41:00 np0005604790 systemd[1]: libpod-conmon-8331bd7573d32dc7635127b7f75a0d9d7a8f026109a732b17847f40a2a611804.scope: Deactivated successfully.
Feb  2 04:41:01 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 5.9 deep-scrub starts
Feb  2 04:41:01 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 5.9 deep-scrub ok
Feb  2 04:41:01 np0005604790 systemd[1]: var-lib-containers-storage-overlay-90a1a0bd78e1534705a1c8cd853d35c3bdb878424638f6e77ba4135d4f307276-merged.mount: Deactivated successfully.
Feb  2 04:41:01 np0005604790 python3[96020]: ansible-ansible.legacy.async_status Invoked with jid=j49647706260.95745 mode=status _async_dir=/root/.ansible_async
Feb  2 04:41:01 np0005604790 podman[96058]: 2026-02-02 09:41:01.478183195 +0000 UTC m=+0.041882456 container create da5c9333194d06af8365fd4ea6e6ab80e860ec07a410d774113baeb7cc0eb20a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_easley, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 04:41:01 np0005604790 systemd[1]: Started libpod-conmon-da5c9333194d06af8365fd4ea6e6ab80e860ec07a410d774113baeb7cc0eb20a.scope.
Feb  2 04:41:01 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:41:01 np0005604790 podman[96058]: 2026-02-02 09:41:01.459670771 +0000 UTC m=+0.023370002 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:41:01 np0005604790 podman[96058]: 2026-02-02 09:41:01.562313594 +0000 UTC m=+0.126012865 container init da5c9333194d06af8365fd4ea6e6ab80e860ec07a410d774113baeb7cc0eb20a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb  2 04:41:01 np0005604790 podman[96058]: 2026-02-02 09:41:01.570029976 +0000 UTC m=+0.133729197 container start da5c9333194d06af8365fd4ea6e6ab80e860ec07a410d774113baeb7cc0eb20a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_easley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325)
Feb  2 04:41:01 np0005604790 blissful_easley[96098]: 167 167
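
The bare `167 167` printed by blissful_easley looks like a uid/gid probe of the image; 167 is the ceph user and group id in the upstream container image. One plausible reconstruction of such a probe (the path, flags, and mechanism here are assumptions, not taken from this log):

    import subprocess

    # Assumed probe: discover the ceph uid/gid by stat-ing a path inside
    # the image, which would print "167 167" for the upstream image.
    out = subprocess.check_output([
        "podman", "run", "--rm",
        "--entrypoint", "stat", "quay.io/ceph/ceph:v19",
        "-c", "%u %g", "/var/lib/ceph",
    ], text=True)
    uid, gid = map(int, out.split())
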
Feb  2 04:41:01 np0005604790 systemd[1]: libpod-da5c9333194d06af8365fd4ea6e6ab80e860ec07a410d774113baeb7cc0eb20a.scope: Deactivated successfully.
Feb  2 04:41:01 np0005604790 podman[96058]: 2026-02-02 09:41:01.581832664 +0000 UTC m=+0.145531915 container attach da5c9333194d06af8365fd4ea6e6ab80e860ec07a410d774113baeb7cc0eb20a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_easley, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb  2 04:41:01 np0005604790 podman[96058]: 2026-02-02 09:41:01.582092141 +0000 UTC m=+0.145791362 container died da5c9333194d06af8365fd4ea6e6ab80e860ec07a410d774113baeb7cc0eb20a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_easley, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb  2 04:41:01 np0005604790 systemd[1]: var-lib-containers-storage-overlay-8bb871dc21160cf7050763877f10d1ab1b02baf7cef9bce2495bef45e3af2929-merged.mount: Deactivated successfully.
Feb  2 04:41:01 np0005604790 podman[96058]: 2026-02-02 09:41:01.625879086 +0000 UTC m=+0.189578357 container remove da5c9333194d06af8365fd4ea6e6ab80e860ec07a410d774113baeb7cc0eb20a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_easley, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 04:41:01 np0005604790 systemd[1]: libpod-conmon-da5c9333194d06af8365fd4ea6e6ab80e860ec07a410d774113baeb7cc0eb20a.scope: Deactivated successfully.
Feb  2 04:41:01 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v14: 198 pgs: 198 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s
Feb  2 04:41:01 np0005604790 python3[96129]: ansible-ansible.legacy.async_status Invoked with jid=j49647706260.95745 mode=cleanup _async_dir=/root/.ansible_async
Feb  2 04:41:01 np0005604790 podman[96150]: 2026-02-02 09:41:01.779003428 +0000 UTC m=+0.039553255 container create 9e0c20269fb091e051122d74382a61cb77464c24436680a508b15bb4ef81b646 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_blackburn, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 04:41:01 np0005604790 systemd[1]: Started libpod-conmon-9e0c20269fb091e051122d74382a61cb77464c24436680a508b15bb4ef81b646.scope.
Feb  2 04:41:01 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:41:01 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3157b3ceab7566ed0b9f701dd2760203483682dcf429b7a56621fba3ce97251e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 04:41:01 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3157b3ceab7566ed0b9f701dd2760203483682dcf429b7a56621fba3ce97251e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:41:01 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3157b3ceab7566ed0b9f701dd2760203483682dcf429b7a56621fba3ce97251e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:41:01 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3157b3ceab7566ed0b9f701dd2760203483682dcf429b7a56621fba3ce97251e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 04:41:01 np0005604790 podman[96150]: 2026-02-02 09:41:01.763930474 +0000 UTC m=+0.024480321 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:41:01 np0005604790 podman[96150]: 2026-02-02 09:41:01.870420348 +0000 UTC m=+0.130970205 container init 9e0c20269fb091e051122d74382a61cb77464c24436680a508b15bb4ef81b646 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_blackburn, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Feb  2 04:41:01 np0005604790 podman[96150]: 2026-02-02 09:41:01.877996396 +0000 UTC m=+0.138546223 container start 9e0c20269fb091e051122d74382a61cb77464c24436680a508b15bb4ef81b646 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_blackburn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid)
Feb  2 04:41:01 np0005604790 podman[96150]: 2026-02-02 09:41:01.881442946 +0000 UTC m=+0.141992773 container attach 9e0c20269fb091e051122d74382a61cb77464c24436680a508b15bb4ef81b646 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_blackburn, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 04:41:02 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Feb  2 04:41:02 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Feb  2 04:41:02 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:41:02 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.002000052s ======
Feb  2 04:41:02 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:41:02.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Feb  2 04:41:02 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:41:02 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:41:02 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:41:02.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 04:41:02 np0005604790 python3[96213]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d241d473-9fcb-5f74-b163-f1ca4454e7f1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
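
The Ansible task above runs the `ceph` CLI from a throwaway container rather than a host package: `/etc/ceph` is bind-mounted with `:z` for SELinux relabeling and `--net=host` lets the client reach the mons. The same invocation, reconstructed as a standalone script (arguments copied verbatim from the log line; the subprocess wrapper is ours):

    import json
    import subprocess

    # Arguments are taken verbatim from the podman invocation logged above.
    cmd = [
        "podman", "run", "--rm", "--net=host", "--ipc=host",
        "--volume", "/etc/ceph:/etc/ceph:z",
        "--volume", "/home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z",
        "--entrypoint", "ceph", "quay.io/ceph/ceph:v19",
        "--fsid", "d241d473-9fcb-5f74-b163-f1ca4454e7f1",
        "-c", "/etc/ceph/ceph.conf",
        "-k", "/etc/ceph/ceph.client.admin.keyring",
        "orch", "status", "--format", "json",
    ]
    status = json.loads(subprocess.check_output(cmd))
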
Feb  2 04:41:02 np0005604790 podman[96239]: 2026-02-02 09:41:02.384118865 +0000 UTC m=+0.046816025 container create c5a4da9b9c3a3a40d5daede57d0224b4f8df5b5c930bc311d0e4255f6f60b664 (image=quay.io/ceph/ceph:v19, name=vigilant_goodall, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:41:02 np0005604790 systemd[1]: Started libpod-conmon-c5a4da9b9c3a3a40d5daede57d0224b4f8df5b5c930bc311d0e4255f6f60b664.scope.
Feb  2 04:41:02 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:41:02 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ead8c44b7d257c10c15bccb57f72ce34a83d2cc3f0570f975a76e324951bcf28/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:41:02 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ead8c44b7d257c10c15bccb57f72ce34a83d2cc3f0570f975a76e324951bcf28/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:41:02 np0005604790 podman[96239]: 2026-02-02 09:41:02.45585279 +0000 UTC m=+0.118549970 container init c5a4da9b9c3a3a40d5daede57d0224b4f8df5b5c930bc311d0e4255f6f60b664 (image=quay.io/ceph/ceph:v19, name=vigilant_goodall, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 04:41:02 np0005604790 podman[96239]: 2026-02-02 09:41:02.360443506 +0000 UTC m=+0.023140696 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:41:02 np0005604790 podman[96239]: 2026-02-02 09:41:02.461874457 +0000 UTC m=+0.124571647 container start c5a4da9b9c3a3a40d5daede57d0224b4f8df5b5c930bc311d0e4255f6f60b664 (image=quay.io/ceph/ceph:v19, name=vigilant_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb  2 04:41:02 np0005604790 podman[96239]: 2026-02-02 09:41:02.465115402 +0000 UTC m=+0.127812592 container attach c5a4da9b9c3a3a40d5daede57d0224b4f8df5b5c930bc311d0e4255f6f60b664 (image=quay.io/ceph/ceph:v19, name=vigilant_goodall, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:41:02 np0005604790 lvm[96286]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 04:41:02 np0005604790 lvm[96286]: VG ceph_vg0 finished
Feb  2 04:41:02 np0005604790 condescending_blackburn[96166]: {}
Feb  2 04:41:02 np0005604790 systemd[1]: libpod-9e0c20269fb091e051122d74382a61cb77464c24436680a508b15bb4ef81b646.scope: Deactivated successfully.
Feb  2 04:41:02 np0005604790 systemd[1]: libpod-9e0c20269fb091e051122d74382a61cb77464c24436680a508b15bb4ef81b646.scope: Consumed 1.050s CPU time.
Feb  2 04:41:02 np0005604790 podman[96308]: 2026-02-02 09:41:02.659636637 +0000 UTC m=+0.028536397 container died 9e0c20269fb091e051122d74382a61cb77464c24436680a508b15bb4ef81b646 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_blackburn, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb  2 04:41:02 np0005604790 systemd[1]: var-lib-containers-storage-overlay-3157b3ceab7566ed0b9f701dd2760203483682dcf429b7a56621fba3ce97251e-merged.mount: Deactivated successfully.
Feb  2 04:41:02 np0005604790 ceph-mgr[74785]: [progress INFO root] Writing back 13 completed events
Feb  2 04:41:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Feb  2 04:41:02 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:02 np0005604790 podman[96308]: 2026-02-02 09:41:02.802672475 +0000 UTC m=+0.171572176 container remove 9e0c20269fb091e051122d74382a61cb77464c24436680a508b15bb4ef81b646 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_blackburn, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Feb  2 04:41:02 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.14589 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb  2 04:41:02 np0005604790 systemd[1]: libpod-conmon-9e0c20269fb091e051122d74382a61cb77464c24436680a508b15bb4ef81b646.scope: Deactivated successfully.
Feb  2 04:41:02 np0005604790 vigilant_goodall[96276]: 
Feb  2 04:41:02 np0005604790 vigilant_goodall[96276]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Feb  2 04:41:02 np0005604790 systemd[1]: libpod-c5a4da9b9c3a3a40d5daede57d0224b4f8df5b5c930bc311d0e4255f6f60b664.scope: Deactivated successfully.
Feb  2 04:41:02 np0005604790 podman[96239]: 2026-02-02 09:41:02.82619754 +0000 UTC m=+0.488894770 container died c5a4da9b9c3a3a40d5daede57d0224b4f8df5b5c930bc311d0e4255f6f60b664 (image=quay.io/ceph/ceph:v19, name=vigilant_goodall, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Feb  2 04:41:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 04:41:02 np0005604790 systemd[1]: var-lib-containers-storage-overlay-ead8c44b7d257c10c15bccb57f72ce34a83d2cc3f0570f975a76e324951bcf28-merged.mount: Deactivated successfully.
Feb  2 04:41:02 np0005604790 podman[96239]: 2026-02-02 09:41:02.947966243 +0000 UTC m=+0.610663443 container remove c5a4da9b9c3a3a40d5daede57d0224b4f8df5b5c930bc311d0e4255f6f60b664 (image=quay.io/ceph/ceph:v19, name=vigilant_goodall, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Feb  2 04:41:02 np0005604790 systemd[1]: libpod-conmon-c5a4da9b9c3a3a40d5daede57d0224b4f8df5b5c930bc311d0e4255f6f60b664.scope: Deactivated successfully.
Feb  2 04:41:02 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 04:41:02 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:02 np0005604790 ceph-mgr[74785]: [progress INFO root] update: starting ev 06cff2dd-9b73-4b4e-9079-8a1650fddfcc (Updating mds.cephfs deployment (+3 -> 3))
Feb  2 04:41:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.vvohrf", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Feb  2 04:41:02 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.vvohrf", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Feb  2 04:41:03 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.vvohrf", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Feb  2 04:41:03 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 04:41:03 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 04:41:03 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-2.vvohrf on compute-2
Feb  2 04:41:03 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-2.vvohrf on compute-2
Feb  2 04:41:03 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Feb  2 04:41:03 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Feb  2 04:41:03 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v15: 198 pgs: 198 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 0 B/s wr, 7 op/s
Feb  2 04:41:03 np0005604790 python3[96361]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d241d473-9fcb-5f74-b163-f1ca4454e7f1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:41:03 np0005604790 podman[96362]: 2026-02-02 09:41:03.940450027 +0000 UTC m=+0.074302383 container create d2cfbf2de6ae90a63bc669fb427f810b250161b0ccc9400d6ea232f458468b1d (image=quay.io/ceph/ceph:v19, name=stoic_shaw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:41:03 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:03 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:03 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:03 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.vvohrf", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Feb  2 04:41:03 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.vvohrf", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Feb  2 04:41:03 np0005604790 ceph-mon[74489]: Deploying daemon mds.cephfs.compute-2.vvohrf on compute-2
Feb  2 04:41:03 np0005604790 systemd[1]: Started libpod-conmon-d2cfbf2de6ae90a63bc669fb427f810b250161b0ccc9400d6ea232f458468b1d.scope.
Feb  2 04:41:03 np0005604790 podman[96362]: 2026-02-02 09:41:03.906728345 +0000 UTC m=+0.040580781 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:41:04 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:41:04 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e070ec7bf303866c02c3dc2b2552d3b714e7d6946a0b2a592884147fb2b6f5f1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:41:04 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e070ec7bf303866c02c3dc2b2552d3b714e7d6946a0b2a592884147fb2b6f5f1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:41:04 np0005604790 podman[96362]: 2026-02-02 09:41:04.049437486 +0000 UTC m=+0.183289872 container init d2cfbf2de6ae90a63bc669fb427f810b250161b0ccc9400d6ea232f458468b1d (image=quay.io/ceph/ceph:v19, name=stoic_shaw, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 04:41:04 np0005604790 podman[96362]: 2026-02-02 09:41:04.05458741 +0000 UTC m=+0.188439776 container start d2cfbf2de6ae90a63bc669fb427f810b250161b0ccc9400d6ea232f458468b1d (image=quay.io/ceph/ceph:v19, name=stoic_shaw, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:41:04 np0005604790 podman[96362]: 2026-02-02 09:41:04.065336461 +0000 UTC m=+0.199188907 container attach d2cfbf2de6ae90a63bc669fb427f810b250161b0ccc9400d6ea232f458468b1d (image=quay.io/ceph/ceph:v19, name=stoic_shaw, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Feb  2 04:41:04 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Feb  2 04:41:04 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Feb  2 04:41:04 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:41:04 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:41:04 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:41:04.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:41:04 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:41:04 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:41:04 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:41:04.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
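
These anonymous `HEAD /` requests arrive from 192.168.122.100 and .102 roughly every two seconds, which is consistent with haproxy health checks from the `ingress.rgw.default` service exported below. An assumed parser for the beast access-log format, should these lines need to be filtered or counted:

    import re

    # Assumed layout of the beast access-log lines seen above.
    BEAST = re.compile(
        r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) \[(?P<ts>[^\]]+)\] '
        r'"(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+)'
    )

    line = ('beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous '
            '[02/Feb/2026:09:41:04.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.000000000s')
    m = BEAST.search(line)
    print(m.group("ip"), m.group("req"), m.group("status"))
    # 192.168.122.102 HEAD / HTTP/1.0 200
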
Feb  2 04:41:04 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.14595 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb  2 04:41:04 np0005604790 stoic_shaw[96377]: 
Feb  2 04:41:04 np0005604790 stoic_shaw[96377]: [{"networks": ["192.168.122.0/24"], "placement": {"count": 1, "hosts": ["compute-0"]}, "service_name": "alertmanager", "service_type": "alertmanager"}, {"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"networks": ["192.168.122.0/24"], "placement": {"count": 1, "hosts": ["compute-0"]}, "service_name": "grafana", "service_type": "grafana", "spec": {"anonymous_access": true, "protocol": "https"}}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "nfs.cephfs", "service_name": "ingress.nfs.cephfs", "service_type": "ingress", "spec": {"backend_service": "nfs.cephfs", "enable_haproxy_protocol": true, "first_virtual_router_id": 50, "frontend_port": 2049, "monitor_port": 9049, "virtual_ip": "192.168.122.2/24"}}, {"placement": {"count": 2}, "service_id": "rgw.default", "service_name": "ingress.rgw.default", "service_type": "ingress", "spec": {"backend_service": "rgw.rgw", "first_virtual_router_id": 50, "frontend_port": 8080, "monitor_port": 8999, "virtual_interface_networks": ["192.168.122.0/24"], "virtual_ip": "192.168.122.2/24"}}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "cephfs", "service_name": "nfs.cephfs", "service_type": "nfs", "spec": {"enable_haproxy_protocol": true, "port": 12049}}, {"placement": {"host_pattern": "*"}, "service_name": "node-exporter", "service_type": "node-exporter"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"count": 1, "hosts": ["compute-0"]}, "service_name": "prometheus", "service_type": "prometheus"}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_frontend_port": 8082}}]
Feb  2 04:41:04 np0005604790 systemd[1]: libpod-d2cfbf2de6ae90a63bc669fb427f810b250161b0ccc9400d6ea232f458468b1d.scope: Deactivated successfully.
Feb  2 04:41:04 np0005604790 podman[96362]: 2026-02-02 09:41:04.511896594 +0000 UTC m=+0.645748990 container died d2cfbf2de6ae90a63bc669fb427f810b250161b0ccc9400d6ea232f458468b1d (image=quay.io/ceph/ceph:v19, name=stoic_shaw, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 04:41:04 np0005604790 systemd[1]: var-lib-containers-storage-overlay-e070ec7bf303866c02c3dc2b2552d3b714e7d6946a0b2a592884147fb2b6f5f1-merged.mount: Deactivated successfully.
Feb  2 04:41:04 np0005604790 podman[96362]: 2026-02-02 09:41:04.55764885 +0000 UTC m=+0.691501246 container remove d2cfbf2de6ae90a63bc669fb427f810b250161b0ccc9400d6ea232f458468b1d (image=quay.io/ceph/ceph:v19, name=stoic_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Feb  2 04:41:04 np0005604790 systemd[1]: libpod-conmon-d2cfbf2de6ae90a63bc669fb427f810b250161b0ccc9400d6ea232f458468b1d.scope: Deactivated successfully.
Feb  2 04:41:04 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Feb  2 04:41:04 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:04 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Feb  2 04:41:04 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:04 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Feb  2 04:41:04 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:04 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.clmmzw", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Feb  2 04:41:04 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.clmmzw", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Feb  2 04:41:04 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.clmmzw", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Feb  2 04:41:04 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 04:41:04 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 04:41:04 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.clmmzw on compute-0
Feb  2 04:41:04 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.clmmzw on compute-0
Feb  2 04:41:05 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Feb  2 04:41:05 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:05 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:05 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:05 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.clmmzw", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Feb  2 04:41:05 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.clmmzw", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Feb  2 04:41:05 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Feb  2 04:41:05 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).mds e3 new map
Feb  2 04:41:05 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).mds e3 print_map
    e3
    btime 2026-02-02T09:41:05.061446+0000
    enable_multiple, ever_enabled_multiple: 1,1
    default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
    legacy client fscid: 1

    Filesystem 'cephfs' (1)
    fs_name    cephfs
    epoch    2
    flags    12 joinable allow_snaps allow_multimds_snaps
    created    2026-02-02T09:40:48.656583+0000
    modified    2026-02-02T09:40:48.656583+0000
    tableserver    0
    root    0
    session_timeout    60
    session_autoclose    300
    max_file_size    1099511627776
    max_xattr_size    65536
    required_client_features    {}
    last_failure    0
    last_failure_osd_epoch    0
    compat    compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
    max_mds    1
    in
    up    {}
    failed
    damaged
    stopped
    data_pools    [7]
    metadata_pool    6
    inline_data    disabled
    balancer
    bal_rank_mask    -1
    standby_count_wanted    0
    qdb_cluster    leader: 0 members:

    Standby daemons:

    [mds.cephfs.compute-2.vvohrf{-1:24310} state up:standby seq 1 addr [v2:192.168.122.102:6804/673721799,v1:192.168.122.102:6805/673721799] compat {c=[1],r=[1],i=[1fff]}]
Feb  2 04:41:05 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/673721799,v1:192.168.122.102:6805/673721799] up:boot
Feb  2 04:41:05 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.102:6804/673721799,v1:192.168.122.102:6805/673721799] as mds.0
Feb  2 04:41:05 np0005604790 ceph-mon[74489]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-2.vvohrf assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Feb  2 04:41:05 np0005604790 ceph-mon[74489]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Feb  2 04:41:05 np0005604790 ceph-mon[74489]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Feb  2 04:41:05 np0005604790 ceph-mon[74489]: log_channel(cluster) log [INF] : Cluster is now healthy
Feb  2 04:41:05 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Feb  2 04:41:05 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-2.vvohrf"} v 0)
Feb  2 04:41:05 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.vvohrf"}]: dispatch
Feb  2 04:41:05 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).mds e3 all = 0
Feb  2 04:41:05 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).mds e4 new map
Feb  2 04:41:05 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).mds e4 print_map#012e4#012btime 2026-02-02T09:41:05:094248+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0114#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-02-02T09:40:48.656583+0000#012modified#0112026-02-02T09:41:05.094239+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=24310}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 0 members: #012[mds.cephfs.compute-2.vvohrf{0:24310} state up:creating seq 1 addr [v2:192.168.122.102:6804/673721799,v1:192.168.122.102:6805/673721799] compat {c=[1],r=[1],i=[1fff]}]#012 #012 
Feb  2 04:41:05 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.vvohrf=up:creating}
Feb  2 04:41:05 np0005604790 ceph-mon[74489]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-2.vvohrf is now active in filesystem cephfs as rank 0
Feb  2 04:41:05 np0005604790 ansible-async_wrapper.py[95786]: Done in kid B.
Feb  2 04:41:05 np0005604790 podman[96506]: 2026-02-02 09:41:05.337993838 +0000 UTC m=+0.049229378 container create 6cd7a48e0b73845b516791754d43d8669b87343f117d1f4eba2cc16372be4cc5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_hellman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb  2 04:41:05 np0005604790 systemd[1]: Started libpod-conmon-6cd7a48e0b73845b516791754d43d8669b87343f117d1f4eba2cc16372be4cc5.scope.
Feb  2 04:41:05 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:41:05 np0005604790 podman[96506]: 2026-02-02 09:41:05.322645676 +0000 UTC m=+0.033881236 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:41:05 np0005604790 podman[96506]: 2026-02-02 09:41:05.416424008 +0000 UTC m=+0.127659588 container init 6cd7a48e0b73845b516791754d43d8669b87343f117d1f4eba2cc16372be4cc5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_hellman, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 04:41:05 np0005604790 podman[96506]: 2026-02-02 09:41:05.424073798 +0000 UTC m=+0.135309378 container start 6cd7a48e0b73845b516791754d43d8669b87343f117d1f4eba2cc16372be4cc5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb  2 04:41:05 np0005604790 podman[96506]: 2026-02-02 09:41:05.427831726 +0000 UTC m=+0.139067276 container attach 6cd7a48e0b73845b516791754d43d8669b87343f117d1f4eba2cc16372be4cc5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_hellman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 04:41:05 np0005604790 dazzling_hellman[96548]: 167 167
Feb  2 04:41:05 np0005604790 systemd[1]: libpod-6cd7a48e0b73845b516791754d43d8669b87343f117d1f4eba2cc16372be4cc5.scope: Deactivated successfully.
Feb  2 04:41:05 np0005604790 conmon[96548]: conmon 6cd7a48e0b73845b5167 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6cd7a48e0b73845b516791754d43d8669b87343f117d1f4eba2cc16372be4cc5.scope/container/memory.events
Feb  2 04:41:05 np0005604790 python3[96545]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d241d473-9fcb-5f74-b163-f1ca4454e7f1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:41:05 np0005604790 podman[96553]: 2026-02-02 09:41:05.47120163 +0000 UTC m=+0.027243004 container died 6cd7a48e0b73845b516791754d43d8669b87343f117d1f4eba2cc16372be4cc5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_hellman, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 04:41:05 np0005604790 systemd[1]: var-lib-containers-storage-overlay-0c801bffcfa9558785e967bb1e07e632dfff98da584a37c22cd20fa0d2531c5f-merged.mount: Deactivated successfully.
Feb  2 04:41:05 np0005604790 podman[96553]: 2026-02-02 09:41:05.504674154 +0000 UTC m=+0.060715478 container remove 6cd7a48e0b73845b516791754d43d8669b87343f117d1f4eba2cc16372be4cc5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_hellman, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 04:41:05 np0005604790 systemd[1]: libpod-conmon-6cd7a48e0b73845b516791754d43d8669b87343f117d1f4eba2cc16372be4cc5.scope: Deactivated successfully.
Feb  2 04:41:05 np0005604790 podman[96566]: 2026-02-02 09:41:05.530283534 +0000 UTC m=+0.052142784 container create e75c77b4eb5bf8b0cb2f17edd6a3f8e72d943068947ab3a05433b73760fbbb8d (image=quay.io/ceph/ceph:v19, name=lucid_lederberg, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:41:05 np0005604790 systemd[1]: Started libpod-conmon-e75c77b4eb5bf8b0cb2f17edd6a3f8e72d943068947ab3a05433b73760fbbb8d.scope.
Feb  2 04:41:05 np0005604790 systemd[1]: Reloading.
Feb  2 04:41:05 np0005604790 podman[96566]: 2026-02-02 09:41:05.506086151 +0000 UTC m=+0.027945391 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:41:05 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v16: 198 pgs: 198 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Feb  2 04:41:05 np0005604790 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 04:41:05 np0005604790 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 04:41:05 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:41:05 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e35bd0ea5a9d3e04a1960dd0c683822b58b3030ee67b74bfc03bc44dc67bc548/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:41:05 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e35bd0ea5a9d3e04a1960dd0c683822b58b3030ee67b74bfc03bc44dc67bc548/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:41:05 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 04:41:05 np0005604790 podman[96566]: 2026-02-02 09:41:05.847829693 +0000 UTC m=+0.369688973 container init e75c77b4eb5bf8b0cb2f17edd6a3f8e72d943068947ab3a05433b73760fbbb8d (image=quay.io/ceph/ceph:v19, name=lucid_lederberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb  2 04:41:05 np0005604790 systemd[1]: Reloading.
Feb  2 04:41:05 np0005604790 podman[96566]: 2026-02-02 09:41:05.857950238 +0000 UTC m=+0.379809448 container start e75c77b4eb5bf8b0cb2f17edd6a3f8e72d943068947ab3a05433b73760fbbb8d (image=quay.io/ceph/ceph:v19, name=lucid_lederberg, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Feb  2 04:41:05 np0005604790 podman[96566]: 2026-02-02 09:41:05.860947766 +0000 UTC m=+0.382807006 container attach e75c77b4eb5bf8b0cb2f17edd6a3f8e72d943068947ab3a05433b73760fbbb8d (image=quay.io/ceph/ceph:v19, name=lucid_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 04:41:05 np0005604790 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 04:41:05 np0005604790 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 04:41:06 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Feb  2 04:41:06 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Feb  2 04:41:06 np0005604790 systemd[1]: Starting Ceph mds.cephfs.compute-0.clmmzw for d241d473-9fcb-5f74-b163-f1ca4454e7f1...
Feb  2 04:41:06 np0005604790 ceph-mon[74489]: Deploying daemon mds.cephfs.compute-0.clmmzw on compute-0
Feb  2 04:41:06 np0005604790 ceph-mon[74489]: daemon mds.cephfs.compute-2.vvohrf assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Feb  2 04:41:06 np0005604790 ceph-mon[74489]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Feb  2 04:41:06 np0005604790 ceph-mon[74489]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Feb  2 04:41:06 np0005604790 ceph-mon[74489]: Cluster is now healthy
Feb  2 04:41:06 np0005604790 ceph-mon[74489]: daemon mds.cephfs.compute-2.vvohrf is now active in filesystem cephfs as rank 0
Feb  2 04:41:06 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:41:06 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:41:06 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:41:06.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:41:06 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.14601 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb  2 04:41:06 np0005604790 lucid_lederberg[96585]: 
Feb  2 04:41:06 np0005604790 lucid_lederberg[96585]: [{"container_id": "318ef38b81ca", "container_image_digests": ["quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee", "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "0.12%", "created": "2026-02-02T09:38:22.080661Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "hostname": "compute-0", "is_active": false, "last_refresh": "2026-02-02T09:40:50.366237Z", "memory_usage": 7790919, "ports": [], "service_name": "crash", "started": "2026-02-02T09:38:21.961823Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1@crash.compute-0", "version": "19.2.3"}, {"container_id": "01cf0f34952f", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "0.48%", "created": "2026-02-02T09:38:55.051810Z", "daemon_id": "compute-1", "daemon_name": "crash.compute-1", "daemon_type": "crash", "hostname": "compute-1", "is_active": false, "last_refresh": "2026-02-02T09:40:49.641392Z", "memory_usage": 7803502, "ports": [], "service_name": "crash", "started": "2026-02-02T09:38:54.944016Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1@crash.compute-1", "version": "19.2.3"}, {"container_id": "cee34f0bc1ca", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "0.24%", "created": "2026-02-02T09:39:47.647047Z", "daemon_id": "compute-2", "daemon_name": "crash.compute-2", "daemon_type": "crash", "hostname": "compute-2", "is_active": false, "last_refresh": "2026-02-02T09:40:49.857037Z", "memory_usage": 7808745, "ports": [], "service_name": "crash", "started": "2026-02-02T09:39:47.546920Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1@crash.compute-2", "version": "19.2.3"}, {"container_id": "19feecaa7fcd", "container_image_digests": ["quay.io/ceph/haproxy@sha256:5479ac79e01ff403396e22ccf0e9e3352ab4518e5164105c2aa1879c5ee2f0b5"], "container_image_id": "e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914", "container_image_name": "quay.io/ceph/haproxy:2.3", "cpu_percentage": "0.22%", "created": "2026-02-02T09:40:22.197863Z", "daemon_id": "rgw.default.compute-0.avekxu", "daemon_name": "haproxy.rgw.default.compute-0.avekxu", "daemon_type": "haproxy", "hostname": "compute-0", "is_active": false, "last_refresh": "2026-02-02T09:40:50.366751Z", "memory_usage": 5093982, "ports": [8080, 8999], "service_name": "ingress.rgw.default", "started": "2026-02-02T09:40:22.027515Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1@haproxy.rgw.default.compute-0.avekxu", 
"version": "2.3.17-d1c9119"}, {"container_id": "a7814d18bb45", "container_image_digests": ["quay.io/ceph/haproxy@sha256:5479ac79e01ff403396e22ccf0e9e3352ab4518e5164105c2aa1879c5ee2f0b5"], "container_image_id": "e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914", "container_image_name": "quay.io/ceph/haproxy:2.3", "cpu_percentage": "0.20%", "created": "2026-02-02T09:40:26.118886Z", "daemon_id": "rgw.default.compute-2.txhwfs", "daemon_name": "haproxy.rgw.default.compute-2.txhwfs", "daemon_type": "haproxy", "hostname": "compute-2", "is_active": false, "last_refresh": "2026-02-02T09:40:49.857257Z", "memory_usage": 5020581, "ports": [8080, 8999], "service_name": "ingress.rgw.default", "started": "2026-02-02T09:40:26.010154Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1@haproxy.rgw.default.compute-2.txhwfs", "version": "2.3.17-d1c9119"}, {"daemon_id": "cephfs.compute-2.vvohrf", "daemon_name": "mds.cephfs.compute-2.vvohrf", "daemon_type": "mds", "events": ["2026-02-02T09:41:04.815464Z daemon:mds.cephfs.compute-2.vvohrf [INFO] \"Deployed mds.cephfs.compute-2.vvohrf on host 'compute-2'\""], "hostname": "compute-2", "is_active": false, "ports": [], "service_name": "mds.cephfs", "status": 2, "status_desc": "starting"}, {"container_id": "3dfd19b9ab30", "container_image_digests": ["quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee", "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph:v19", "cpu_percentage": "27.75%", "created": "2026-02-02T09:37:47.513743Z", "daemon_id": "compute-0.djvyfo", "daemon_name": "mgr.compute-0.djvyfo", "daemon_type": "mgr", "hostname": "compute-0", "is_active": false, "last_refresh": "2026-02-02T09:40:50.366070Z", "memory_usage": 542008934, "ports": [9283, 8765], "service_name": "mgr", "started": "2026-02-02T09:37:47.378678Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1@mgr.compute-0.djvyfo", "version": "19.2.3"}, {"container_id": "0fc1762cd853", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "36.11%", "created": "2026-02-02T09:39:45.931197Z", "daemon_id": "compute-1.teascl", "daemon_name": "mgr.compute-1.teascl", "daemon_type": "mgr", "hostname": "compute-1", "is_active": false, "last_refresh": "2026-02-02T09:40:49.641847Z", "memory_usage": 503735910, "ports": [8765], "service_name": "mgr", "started": "2026-02-02T09:39:45.833833Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1@mgr.compute-1.teascl", "version": "19.2.3"}, {"container_id": "61859de5ac0e", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "33.67%", "created": "2026-02-02T09:39:40.475890Z", "daemon_id": "compute-2.gzlyac", "daemon_name": 
"mgr.compute-2.gzlyac", "daemon_type": "mgr", "hostname": "compute-2", "is_active": false, "last_refresh": "2026-02-02T09:40:49.856960Z", "memory_usage": 503735910, "ports": [8765], "service_name": "mgr", "started": "2026-02-02T09:39:40.368857Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1@mgr.compute-2.gzlyac", "version": "19.2.3"}, {"container_id": "79ef7165b184", "container_image_digests": ["quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee", "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph:v19", "cpu_percentage": "3.25%", "created": "2026-02-02T09:37:43.811645Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "hostname": "compute-0", "is_active": false, "last_refresh": "2026-02-02T09:40:50.365821Z", "memory_request": 2147483648, "me
Feb  2 04:41:06 np0005604790 lucid_lederberg[96585]: ion": "19.2.3"}, {"container_id": "39e1607e9fe8", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "1.79%", "created": "2026-02-02T09:40:14.659636Z", "daemon_id": "rgw.compute-2.zjyufj", "daemon_name": "rgw.rgw.compute-2.zjyufj", "daemon_type": "rgw", "hostname": "compute-2", "ip": "192.168.122.102", "is_active": false, "last_refresh": "2026-02-02T09:40:49.857184Z", "memory_usage": 104490598, "ports": [8082], "service_name": "rgw.rgw", "started": "2026-02-02T09:40:14.554218Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1@rgw.rgw.compute-2.zjyufj", "version": "19.2.3"}]
Feb  2 04:41:06 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).mds e5 new map
Feb  2 04:41:06 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).mds e5 print_map#012e5#012btime 2026-02-02T09:41:06:101701+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-02-02T09:40:48.656583+0000#012modified#0112026-02-02T09:41:06.101697+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=24310}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 24310 members: 24310#012[mds.cephfs.compute-2.vvohrf{0:24310} state up:active seq 2 addr [v2:192.168.122.102:6804/673721799,v1:192.168.122.102:6805/673721799] compat {c=[1],r=[1],i=[1fff]}]#012 #012 
Feb  2 04:41:06 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/673721799,v1:192.168.122.102:6805/673721799] up:active
Feb  2 04:41:06 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.vvohrf=up:active}
Feb  2 04:41:06 np0005604790 systemd[1]: libpod-e75c77b4eb5bf8b0cb2f17edd6a3f8e72d943068947ab3a05433b73760fbbb8d.scope: Deactivated successfully.
Feb  2 04:41:06 np0005604790 podman[96566]: 2026-02-02 09:41:06.211186681 +0000 UTC m=+0.733045891 container died e75c77b4eb5bf8b0cb2f17edd6a3f8e72d943068947ab3a05433b73760fbbb8d (image=quay.io/ceph/ceph:v19, name=lucid_lederberg, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:41:06 np0005604790 systemd[1]: var-lib-containers-storage-overlay-e35bd0ea5a9d3e04a1960dd0c683822b58b3030ee67b74bfc03bc44dc67bc548-merged.mount: Deactivated successfully.
Feb  2 04:41:06 np0005604790 podman[96566]: 2026-02-02 09:41:06.248838166 +0000 UTC m=+0.770697376 container remove e75c77b4eb5bf8b0cb2f17edd6a3f8e72d943068947ab3a05433b73760fbbb8d (image=quay.io/ceph/ceph:v19, name=lucid_lederberg, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:41:06 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:41:06 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:41:06 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:41:06.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:41:06 np0005604790 systemd[1]: libpod-conmon-e75c77b4eb5bf8b0cb2f17edd6a3f8e72d943068947ab3a05433b73760fbbb8d.scope: Deactivated successfully.
Feb  2 04:41:06 np0005604790 podman[96731]: 2026-02-02 09:41:06.265616574 +0000 UTC m=+0.060266426 container create 7adc20511c464cc8d1a3fc124e35d564818f3952b55827abfa6fd9e805055a10 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mds-cephfs-compute-0-clmmzw, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:41:06 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21df4144edb6a271217780204dedf7e33883c6f5dc21ec681521a0ea27488e53/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:41:06 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21df4144edb6a271217780204dedf7e33883c6f5dc21ec681521a0ea27488e53/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:41:06 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21df4144edb6a271217780204dedf7e33883c6f5dc21ec681521a0ea27488e53/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 04:41:06 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21df4144edb6a271217780204dedf7e33883c6f5dc21ec681521a0ea27488e53/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.clmmzw supports timestamps until 2038 (0x7fffffff)
Feb  2 04:41:06 np0005604790 podman[96731]: 2026-02-02 09:41:06.325455578 +0000 UTC m=+0.120105420 container init 7adc20511c464cc8d1a3fc124e35d564818f3952b55827abfa6fd9e805055a10 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mds-cephfs-compute-0-clmmzw, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb  2 04:41:06 np0005604790 podman[96731]: 2026-02-02 09:41:06.333969301 +0000 UTC m=+0.128619123 container start 7adc20511c464cc8d1a3fc124e35d564818f3952b55827abfa6fd9e805055a10 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mds-cephfs-compute-0-clmmzw, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:41:06 np0005604790 podman[96731]: 2026-02-02 09:41:06.239343307 +0000 UTC m=+0.033993139 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:41:06 np0005604790 bash[96731]: 7adc20511c464cc8d1a3fc124e35d564818f3952b55827abfa6fd9e805055a10
Feb  2 04:41:06 np0005604790 systemd[1]: Started Ceph mds.cephfs.compute-0.clmmzw for d241d473-9fcb-5f74-b163-f1ca4454e7f1.
Feb  2 04:41:06 np0005604790 ceph-mds[96761]: set uid:gid to 167:167 (ceph:ceph)
Feb  2 04:41:06 np0005604790 ceph-mds[96761]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mds, pid 2
Feb  2 04:41:06 np0005604790 ceph-mds[96761]: main not setting numa affinity
Feb  2 04:41:06 np0005604790 ceph-mds[96761]: pidfile_write: ignore empty --pid-file
Feb  2 04:41:06 np0005604790 rsyslogd[1005]: message too long (16383) with configured size 8096, begin of message is: [{"container_id": "318ef38b81ca", "container_image_digests": ["quay.io/ceph/ceph [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Feb  2 04:41:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mds-cephfs-compute-0-clmmzw[96757]: starting mds.cephfs.compute-0.clmmzw at 
Feb  2 04:41:06 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 04:41:06 np0005604790 ceph-mds[96761]: mds.cephfs.compute-0.clmmzw Updating MDS map to version 5 from mon.0
Feb  2 04:41:06 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:06 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 04:41:06 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:06 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Feb  2 04:41:06 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:06 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.khfsen", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Feb  2 04:41:06 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.khfsen", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Feb  2 04:41:06 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.khfsen", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Feb  2 04:41:06 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 04:41:06 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 04:41:06 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-1.khfsen on compute-1
Feb  2 04:41:06 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-1.khfsen on compute-1
Feb  2 04:41:07 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Feb  2 04:41:07 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Feb  2 04:41:07 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:07 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:07 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:07 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.khfsen", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Feb  2 04:41:07 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.khfsen", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Feb  2 04:41:07 np0005604790 ceph-mon[74489]: Deploying daemon mds.cephfs.compute-1.khfsen on compute-1
Feb  2 04:41:07 np0005604790 python3[96805]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d241d473-9fcb-5f74-b163-f1ca4454e7f1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:41:07 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).mds e6 new map
Feb  2 04:41:07 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).mds e6 print_map#012e6#012btime 2026-02-02T09:41:07:200268+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-02-02T09:40:48.656583+0000#012modified#0112026-02-02T09:41:06.101697+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=24310}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 24310 members: 24310#012[mds.cephfs.compute-2.vvohrf{0:24310} state up:active seq 2 addr [v2:192.168.122.102:6804/673721799,v1:192.168.122.102:6805/673721799] compat {c=[1],r=[1],i=[1fff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.clmmzw{-1:14607} state up:standby seq 1 addr [v2:192.168.122.100:6806/4233871501,v1:192.168.122.100:6807/4233871501] compat {c=[1],r=[1],i=[1fff]}]
Feb  2 04:41:07 np0005604790 ceph-mds[96761]: mds.cephfs.compute-0.clmmzw Updating MDS map to version 6 from mon.0
Feb  2 04:41:07 np0005604790 ceph-mds[96761]: mds.cephfs.compute-0.clmmzw Monitors have assigned me to become a standby
Feb  2 04:41:07 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/4233871501,v1:192.168.122.100:6807/4233871501] up:boot
Feb  2 04:41:07 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.vvohrf=up:active} 1 up:standby
Feb  2 04:41:07 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.clmmzw"} v 0)
Feb  2 04:41:07 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.clmmzw"}]: dispatch
Feb  2 04:41:07 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).mds e6 all = 0
Feb  2 04:41:07 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).mds e7 new map
Feb  2 04:41:07 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).mds e7 print_map#012e7#012btime 2026-02-02T09:41:07:213917+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-02-02T09:40:48.656583+0000#012modified#0112026-02-02T09:41:06.101697+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=24310}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 24310 members: 24310#012[mds.cephfs.compute-2.vvohrf{0:24310} state up:active seq 2 addr [v2:192.168.122.102:6804/673721799,v1:192.168.122.102:6805/673721799] compat {c=[1],r=[1],i=[1fff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.clmmzw{-1:14607} state up:standby seq 1 addr [v2:192.168.122.100:6806/4233871501,v1:192.168.122.100:6807/4233871501] compat {c=[1],r=[1],i=[1fff]}]
Feb  2 04:41:07 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.vvohrf=up:active} 1 up:standby
Feb  2 04:41:07 np0005604790 podman[96806]: 2026-02-02 09:41:07.274999309 +0000 UTC m=+0.063364247 container create 5368d6a80fc1741369cc7b387d74d7b771cda9c1723bc2cac0a36172d0000392 (image=quay.io/ceph/ceph:v19, name=fervent_jennings, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Feb  2 04:41:07 np0005604790 systemd[1]: Started libpod-conmon-5368d6a80fc1741369cc7b387d74d7b771cda9c1723bc2cac0a36172d0000392.scope.
Feb  2 04:41:07 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:41:07 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5dd759261392a386b12f04457a23525dbb2d383bb18197de435c759ea2a4a4c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:41:07 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5dd759261392a386b12f04457a23525dbb2d383bb18197de435c759ea2a4a4c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:41:07 np0005604790 podman[96806]: 2026-02-02 09:41:07.249575725 +0000 UTC m=+0.037940683 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:41:07 np0005604790 podman[96806]: 2026-02-02 09:41:07.351210101 +0000 UTC m=+0.139575039 container init 5368d6a80fc1741369cc7b387d74d7b771cda9c1723bc2cac0a36172d0000392 (image=quay.io/ceph/ceph:v19, name=fervent_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Feb  2 04:41:07 np0005604790 podman[96806]: 2026-02-02 09:41:07.357602389 +0000 UTC m=+0.145967307 container start 5368d6a80fc1741369cc7b387d74d7b771cda9c1723bc2cac0a36172d0000392 (image=quay.io/ceph/ceph:v19, name=fervent_jennings, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Feb  2 04:41:07 np0005604790 podman[96806]: 2026-02-02 09:41:07.361714846 +0000 UTC m=+0.150079804 container attach 5368d6a80fc1741369cc7b387d74d7b771cda9c1723bc2cac0a36172d0000392 (image=quay.io/ceph/ceph:v19, name=fervent_jennings, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:41:07 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v17: 198 pgs: 198 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Feb  2 04:41:07 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Feb  2 04:41:07 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3598454046' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Feb  2 04:41:07 np0005604790 fervent_jennings[96822]: 
Feb  2 04:41:07 np0005604790 fervent_jennings[96822]: {"fsid":"d241d473-9fcb-5f74-b163-f1ca4454e7f1","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":14,"quorum":[0,1,2],"quorum_names":["compute-0","compute-2","compute-1"],"quorum_age":83,"monmap":{"epoch":3,"min_mon_release_name":"squid","num_mons":3},"osdmap":{"epoch":53,"num_osds":3,"num_up_osds":3,"osd_up_since":1770025207,"num_in_osds":3,"osd_in_since":1770025189,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":198}],"num_pgs":198,"num_pools":12,"num_objects":195,"data_bytes":464595,"bytes_used":89083904,"bytes_avail":64322842624,"bytes_total":64411926528},"fsmap":{"epoch":7,"btime":"2026-02-02T09:41:07:213917+0000","id":1,"up":1,"in":1,"max":1,"by_rank":[{"filesystem_id":1,"rank":0,"name":"cephfs.compute-2.vvohrf","status":"up:active","gid":24310}],"up:standby":1},"mgrmap":{"available":true,"num_standbys":2,"modules":["cephadm","dashboard","iostat","nfs","restful"],"services":{"dashboard":"http://192.168.122.100:8443/"}},"servicemap":{"epoch":5,"modified":"2026-02-02T09:40:29.999802+0000","services":{"mgr":{"daemons":{"summary":"","compute-0.djvyfo":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-1.teascl":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2.gzlyac":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"mon":{"daemons":{"summary":"","compute-0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"rgw":{"daemons":{"summary":"","14388":{"start_epoch":5,"start_stamp":"2026-02-02T09:40:29.993726+0000","gid":14388,"addr":"192.168.122.100:0/2805705687","metadata":{"arch":"x86_64","ceph_release":"squid","ceph_version":"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)","ceph_version_short":"19.2.3","container_hostname":"compute-0","container_image":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","cpu":"AMD EPYC-Rome Processor","distro":"centos","distro_description":"CentOS Stream 9","distro_version":"9","frontend_config#0":"beast endpoint=192.168.122.100:8082","frontend_type#0":"beast","hostname":"compute-0","id":"rgw.compute-0.vltabo","kernel_description":"#1 SMP PREEMPT_DYNAMIC Thu Jan 22 12:30:22 UTC 
2026","kernel_version":"5.14.0-665.el9.x86_64","mem_swap_kb":"1048572","mem_total_kb":"7864292","num_handles":"1","os":"Linux","pid":"2","realm_id":"","realm_name":"","zone_id":"d5604b0e-c827-4596-94de-7709c44354e7","zone_name":"default","zonegroup_id":"d74d963d-58da-4c60-ad13-18a6b0033c09","zonegroup_name":"default"},"task_status":{}},"24170":{"start_epoch":5,"start_stamp":"2026-02-02T09:40:29.998087+0000","gid":24170,"addr":"192.168.122.101:0/1861488831","metadata":{"arch":"x86_64","ceph_release":"squid","ceph_version":"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)","ceph_version_short":"19.2.3","container_hostname":"compute-1","container_image":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","cpu":"AMD EPYC-Rome Processor","distro":"centos","distro_description":"CentOS Stream 9","distro_version":"9","frontend_config#0":"beast endpoint=192.168.122.101:8082","frontend_type#0":"beast","hostname":"compute-1","id":"rgw.compute-1.ezjvcf","kernel_description":"#1 SMP PREEMPT_DYNAMIC Thu Jan 22 12:30:22 UTC 2026","kernel_version":"5.14.0-665.el9.x86_64","mem_swap_kb":"1048572","mem_total_kb":"7864292","num_handles":"1","os":"Linux","pid":"2","realm_id":"","realm_name":"","zone_id":"d5604b0e-c827-4596-94de-7709c44354e7","zone_name":"default","zonegroup_id":"d74d963d-58da-4c60-ad13-18a6b0033c09","zonegroup_name":"default"},"task_status":{}},"24175":{"start_epoch":5,"start_stamp":"2026-02-02T09:40:29.999366+0000","gid":24175,"addr":"192.168.122.102:0/1995934692","metadata":{"arch":"x86_64","ceph_release":"squid","ceph_version":"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)","ceph_version_short":"19.2.3","container_hostname":"compute-2","container_image":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","cpu":"AMD EPYC-Rome Processor","distro":"centos","distro_description":"CentOS Stream 9","distro_version":"9","frontend_config#0":"beast endpoint=192.168.122.102:8082","frontend_type#0":"beast","hostname":"compute-2","id":"rgw.compute-2.zjyufj","kernel_description":"#1 SMP PREEMPT_DYNAMIC Thu Jan 22 12:30:22 UTC 2026","kernel_version":"5.14.0-665.el9.x86_64","mem_swap_kb":"1048572","mem_total_kb":"7864300","num_handles":"1","os":"Linux","pid":"2","realm_id":"","realm_name":"","zone_id":"d5604b0e-c827-4596-94de-7709c44354e7","zone_name":"default","zonegroup_id":"d74d963d-58da-4c60-ad13-18a6b0033c09","zonegroup_name":"default"},"task_status":{}}}}}},"progress_events":{"06cff2dd-9b73-4b4e-9079-8a1650fddfcc":{"message":"Updating mds.cephfs deployment (+3 -> 3) (1s)\n      [=========...................] (remaining: 3s)","progress":0.3333333432674408,"add_to_ceph_s":true}}}
Feb  2 04:41:07 np0005604790 systemd[1]: libpod-5368d6a80fc1741369cc7b387d74d7b771cda9c1723bc2cac0a36172d0000392.scope: Deactivated successfully.
Feb  2 04:41:07 np0005604790 conmon[96822]: conmon 5368d6a80fc1741369cc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5368d6a80fc1741369cc7b387d74d7b771cda9c1723bc2cac0a36172d0000392.scope/container/memory.events
Feb  2 04:41:07 np0005604790 podman[96806]: 2026-02-02 09:41:07.780461642 +0000 UTC m=+0.568826550 container died 5368d6a80fc1741369cc7b387d74d7b771cda9c1723bc2cac0a36172d0000392 (image=quay.io/ceph/ceph:v19, name=fervent_jennings, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb  2 04:41:07 np0005604790 systemd[1]: var-lib-containers-storage-overlay-a5dd759261392a386b12f04457a23525dbb2d383bb18197de435c759ea2a4a4c-merged.mount: Deactivated successfully.
Feb  2 04:41:07 np0005604790 podman[96806]: 2026-02-02 09:41:07.819716898 +0000 UTC m=+0.608081816 container remove 5368d6a80fc1741369cc7b387d74d7b771cda9c1723bc2cac0a36172d0000392 (image=quay.io/ceph/ceph:v19, name=fervent_jennings, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb  2 04:41:07 np0005604790 systemd[1]: libpod-conmon-5368d6a80fc1741369cc7b387d74d7b771cda9c1723bc2cac0a36172d0000392.scope: Deactivated successfully.
Feb  2 04:41:07 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Feb  2 04:41:07 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:08 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Feb  2 04:41:08 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:08 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Feb  2 04:41:08 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:08 np0005604790 ceph-mgr[74785]: [progress INFO root] complete: finished ev 06cff2dd-9b73-4b4e-9079-8a1650fddfcc (Updating mds.cephfs deployment (+3 -> 3))
Feb  2 04:41:08 np0005604790 ceph-mgr[74785]: [progress INFO root] Completed event 06cff2dd-9b73-4b4e-9079-8a1650fddfcc (Updating mds.cephfs deployment (+3 -> 3)) in 5 seconds
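[annotator's note] The progress module events above carry both a raw fraction ("progress":0.3333... in the status dump) and a pre-rendered bar. A sketch of how such a bar can be derived from the fraction; the 28-column width is inferred from the message in the dump, not taken from documentation:

    # Render a cephadm-style progress bar from a completion fraction.
    def bar(progress: float, width: int = 28) -> str:
        done = round(progress * width)
        return "[" + "=" * done + "." * (width - done) + "]"

    print(bar(0.3333333432674408))  # [=========...................]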
Feb  2 04:41:08 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0)
Feb  2 04:41:08 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:08 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 4.e scrub starts
Feb  2 04:41:08 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Feb  2 04:41:08 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 4.e scrub ok
Feb  2 04:41:08 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:08 np0005604790 ceph-mgr[74785]: [progress INFO root] update: starting ev a0cd3058-d53f-4222-b614-b57dab5bca67 (Updating nfs.cephfs deployment (+3 -> 3))
Feb  2 04:41:08 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Feb  2 04:41:08 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:08 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.0.0.compute-1.mhzhsx
Feb  2 04:41:08 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.0.0.compute-1.mhzhsx
Feb  2 04:41:08 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.mhzhsx", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]} v 0)
Feb  2 04:41:08 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.mhzhsx", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Feb  2 04:41:08 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.mhzhsx", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
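[annotator's note] The mgr materializes the NFS daemon's keyring with auth get-or-create, passing caps as alternating entity/capability pairs exactly as in the dispatched command above. A sketch of issuing the same mon command from an admin shell (subprocess wrapper; assumes a local ceph CLI and the admin keyring used by the podman one-shots later in this log):

    import subprocess

    # get-or-create is idempotent: an existing entity with identical caps
    # returns its current key instead of minting a new one.
    out = subprocess.run(
        ["ceph", "auth", "get-or-create",
         "client.nfs.cephfs.0.0.compute-1.mhzhsx",
         "mon", "allow r",
         "osd", "allow rw pool=.nfs namespace=cephfs"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(out)   # keyring stanza for the client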
Feb  2 04:41:08 np0005604790 ceph-mgr[74785]: [cephadm INFO root] Ensuring nfs.cephfs.0 is in the ganesha grace table
Feb  2 04:41:08 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Ensuring nfs.cephfs.0 is in the ganesha grace table
Feb  2 04:41:08 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]} v 0)
Feb  2 04:41:08 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Feb  2 04:41:08 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Feb  2 04:41:08 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 04:41:08 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 04:41:08 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:41:08 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:41:08 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:41:08.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 04:41:08 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"} v 0)
Feb  2 04:41:08 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Feb  2 04:41:08 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Feb  2 04:41:08 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:08 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:08 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:08 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:08 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:08 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:08 np0005604790 ceph-mon[74489]: Creating key for client.nfs.cephfs.0.0.compute-1.mhzhsx
Feb  2 04:41:08 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.mhzhsx", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Feb  2 04:41:08 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.mhzhsx", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Feb  2 04:41:08 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Feb  2 04:41:08 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Feb  2 04:41:08 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Feb  2 04:41:08 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Feb  2 04:41:08 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).mds e8 new map
Feb  2 04:41:08 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).mds e8 print_map#012e8#012btime 2026-02-02T09:41:08:229569+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-02-02T09:40:48.656583+0000#012modified#0112026-02-02T09:41:06.101697+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=24310}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 24310 members: 24310#012[mds.cephfs.compute-2.vvohrf{0:24310} state up:active seq 2 addr [v2:192.168.122.102:6804/673721799,v1:192.168.122.102:6805/673721799] compat {c=[1],r=[1],i=[1fff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.clmmzw{-1:14607} state up:standby seq 1 addr [v2:192.168.122.100:6806/4233871501,v1:192.168.122.100:6807/4233871501] compat {c=[1],r=[1],i=[1fff]}]#012[mds.cephfs.compute-1.khfsen{-1:24317} state up:standby seq 1 addr [v2:192.168.122.101:6804/685771812,v1:192.168.122.101:6805/685771812] compat {c=[1],r=[1],i=[1fff]}]
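[annotator's note] Journald flattens the mon's multi-line print_map output into a single record, escaping each newline as #012 and each tab as #011 (octal for \n and \t). A small sketch to restore readability when post-processing these lines:

    # Undo syslog/journald octal escaping in a captured log line.
    def unescape(record: str) -> str:
        return record.replace("#012", "\n").replace("#011", "\t")

    sample = "e8#012btime 2026-02-02T09:41:08:229569+0000#012enable_multiple, ever_enabled_multiple: 1,1"
    print(unescape(sample))   # two #012 escapes -> three lines of map output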
Feb  2 04:41:08 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.101:6804/685771812,v1:192.168.122.101:6805/685771812] up:boot
Feb  2 04:41:08 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.vvohrf=up:active} 2 up:standby
Feb  2 04:41:08 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-1.khfsen"} v 0)
Feb  2 04:41:08 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.khfsen"}]: dispatch
Feb  2 04:41:08 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).mds e8 all = 0
Feb  2 04:41:08 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:41:08 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:41:08 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:41:08.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 04:41:08 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.services.nfs] Rados config object exists: conf-nfs.cephfs
Feb  2 04:41:08 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Rados config object exists: conf-nfs.cephfs
Feb  2 04:41:08 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.0.0.compute-1.mhzhsx-rgw
Feb  2 04:41:08 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.0.0.compute-1.mhzhsx-rgw
Feb  2 04:41:08 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.mhzhsx-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]} v 0)
Feb  2 04:41:08 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.mhzhsx-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Feb  2 04:41:08 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.mhzhsx-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Feb  2 04:41:08 np0005604790 ceph-mgr[74785]: [cephadm WARNING cephadm.services.nfs] Bind address in nfs.cephfs.0.0.compute-1.mhzhsx's ganesha conf is defaulting to empty
Feb  2 04:41:08 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [WRN] : Bind address in nfs.cephfs.0.0.compute-1.mhzhsx's ganesha conf is defaulting to empty
Feb  2 04:41:08 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 04:41:08 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 04:41:08 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Deploying daemon nfs.cephfs.0.0.compute-1.mhzhsx on compute-1
Feb  2 04:41:08 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Deploying daemon nfs.cephfs.0.0.compute-1.mhzhsx on compute-1
Feb  2 04:41:08 np0005604790 python3[96920]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d241d473-9fcb-5f74-b163-f1ca4454e7f1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
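[annotator's note] The ansible command module records the whole one-shot invocation as a single _raw_params string. The same config dump call made directly, every argument below copied from the log line above; only the list form and the JSON decoding are new:

    import json, subprocess

    cmd = [
        "podman", "run", "--rm", "--net=host", "--ipc=host",
        "--volume", "/etc/ceph:/etc/ceph:z",
        "--volume", "/home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z",
        "--entrypoint", "ceph", "quay.io/ceph/ceph:v19",
        "--fsid", "d241d473-9fcb-5f74-b163-f1ca4454e7f1",
        "-c", "/etc/ceph/ceph.conf",
        "-k", "/etc/ceph/ceph.client.admin.keyring",
        "config", "dump", "-f", "json",
    ]
    dump = json.loads(subprocess.run(cmd, capture_output=True, text=True,
                                     check=True).stdout)
    print(len(dump), "config options")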
Feb  2 04:41:08 np0005604790 podman[96921]: 2026-02-02 09:41:08.781670403 +0000 UTC m=+0.058196682 container create 94371d816a4a9348e1c63dbd1759105d1e97174c5417390b3a20e9dad5b0aa74 (image=quay.io/ceph/ceph:v19, name=adoring_bell, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb  2 04:41:08 np0005604790 systemd[1]: Started libpod-conmon-94371d816a4a9348e1c63dbd1759105d1e97174c5417390b3a20e9dad5b0aa74.scope.
Feb  2 04:41:08 np0005604790 podman[96921]: 2026-02-02 09:41:08.754436191 +0000 UTC m=+0.030962540 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:41:08 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:41:08 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5571eda2e5b63da7617d229376b792fcb9e8145eafe9ddd870dee565c056734/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:41:08 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5571eda2e5b63da7617d229376b792fcb9e8145eafe9ddd870dee565c056734/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
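[annotator's note] The kernel notes that these overlay XFS mounts (presumably created without the XFS bigtime feature) can represent timestamps only up to 0x7fffffff seconds, which resolves to the familiar 2038 cutoff:

    from datetime import datetime, timezone

    # 0x7fffffff seconds after the epoch: the classic 32-bit time_t limit.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00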
Feb  2 04:41:08 np0005604790 podman[96921]: 2026-02-02 09:41:08.869846338 +0000 UTC m=+0.146372677 container init 94371d816a4a9348e1c63dbd1759105d1e97174c5417390b3a20e9dad5b0aa74 (image=quay.io/ceph/ceph:v19, name=adoring_bell, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Feb  2 04:41:08 np0005604790 podman[96921]: 2026-02-02 09:41:08.878516864 +0000 UTC m=+0.155043123 container start 94371d816a4a9348e1c63dbd1759105d1e97174c5417390b3a20e9dad5b0aa74 (image=quay.io/ceph/ceph:v19, name=adoring_bell, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb  2 04:41:08 np0005604790 podman[96921]: 2026-02-02 09:41:08.882994281 +0000 UTC m=+0.159520670 container attach 94371d816a4a9348e1c63dbd1759105d1e97174c5417390b3a20e9dad5b0aa74 (image=quay.io/ceph/ceph:v19, name=adoring_bell, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:41:09 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 6.15 scrub starts
Feb  2 04:41:09 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 6.15 scrub ok
Feb  2 04:41:09 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Feb  2 04:41:09 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3364289223' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Feb  2 04:41:09 np0005604790 adoring_bell[96936]: 
Feb  2 04:41:09 np0005604790 ceph-mon[74489]: Ensuring nfs.cephfs.0 is in the ganesha grace table
Feb  2 04:41:09 np0005604790 ceph-mon[74489]: Rados config object exists: conf-nfs.cephfs
Feb  2 04:41:09 np0005604790 ceph-mon[74489]: Creating key for client.nfs.cephfs.0.0.compute-1.mhzhsx-rgw
Feb  2 04:41:09 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.mhzhsx-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Feb  2 04:41:09 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.mhzhsx-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Feb  2 04:41:09 np0005604790 ceph-mon[74489]: Bind address in nfs.cephfs.0.0.compute-1.mhzhsx's ganesha conf is defaulting to empty
Feb  2 04:41:09 np0005604790 ceph-mon[74489]: Deploying daemon nfs.cephfs.0.0.compute-1.mhzhsx on compute-1
Feb  2 04:41:09 np0005604790 systemd[1]: libpod-94371d816a4a9348e1c63dbd1759105d1e97174c5417390b3a20e9dad5b0aa74.scope: Deactivated successfully.
Feb  2 04:41:09 np0005604790 conmon[96936]: conmon 94371d816a4a9348e1c6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-94371d816a4a9348e1c63dbd1759105d1e97174c5417390b3a20e9dad5b0aa74.scope/container/memory.events
Feb  2 04:41:09 np0005604790 adoring_bell[96936]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_api_version","value":"3","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"7","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/ALERTMANAGER_API_HOST","value":"http://192.168.122.100:9093","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/GRAFANA_API_PASSWORD","value":"/home/grafana_password.yml","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/GRAFANA_API_URL","value":"http://192.168.122.100:3100","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/GRAFANA_API_USERNAME","value":"admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/PROMETHEUS_API_HOST","value":"http://192.168.122.100:9092","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/compute-0.djvyfo/server_addr","value":"192.168.122.100","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/compute-1.teascl/server_addr","value":"192.168.122.101","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/compute-2.gzlyac/server_addr","value":"192.168.122.102","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/server_port","value":"8443","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/ssl","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/ssl_server_port","value":"8443","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mds.cephfs","name":"mds_join_fs","value":"cephfs","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-0.vltabo","name":"rgw_frontends","value":"beast endpoint=192.168.122.100:8082","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"client.rgw.rgw.compute-1.ezjvcf","name":"rgw_frontends","value":"beast endpoint=192.168.122.101:8082","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"client.rgw.rgw.compute-2.zjyufj","name":"rgw_frontends","value":"beast endpoint=192.168.122.102:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
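[annotator's note] Each element of the dump above is flat: section, name, value, level, can_update_at_runtime, mask. A sketch that regroups the options per section and lists the ones that cannot change at runtime (keys copied from the output; config_dump.json is a hypothetical file holding that JSON):

    import json
    from collections import defaultdict

    with open("config_dump.json") as f:
        options = json.load(f)

    by_section = defaultdict(dict)
    for opt in options:
        by_section[opt["section"]][opt["name"]] = opt["value"]

    restart_needed = sorted(o["name"] for o in options
                            if not o["can_update_at_runtime"])
    print(by_section["global"]["public_network"])   # 192.168.122.0/24
    print(restart_needed[:3])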
Feb  2 04:41:09 np0005604790 podman[96921]: 2026-02-02 09:41:09.271081485 +0000 UTC m=+0.547607824 container died 94371d816a4a9348e1c63dbd1759105d1e97174c5417390b3a20e9dad5b0aa74 (image=quay.io/ceph/ceph:v19, name=adoring_bell, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Feb  2 04:41:09 np0005604790 systemd[1]: var-lib-containers-storage-overlay-a5571eda2e5b63da7617d229376b792fcb9e8145eafe9ddd870dee565c056734-merged.mount: Deactivated successfully.
Feb  2 04:41:09 np0005604790 podman[96921]: 2026-02-02 09:41:09.308055031 +0000 UTC m=+0.584581300 container remove 94371d816a4a9348e1c63dbd1759105d1e97174c5417390b3a20e9dad5b0aa74 (image=quay.io/ceph/ceph:v19, name=adoring_bell, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Feb  2 04:41:09 np0005604790 systemd[1]: libpod-conmon-94371d816a4a9348e1c63dbd1759105d1e97174c5417390b3a20e9dad5b0aa74.scope: Deactivated successfully.
Feb  2 04:41:09 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).mds e9 new map
Feb  2 04:41:09 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).mds e9 print_map#012e9#012btime 2026-02-02T09:41:09:317331+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0119#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-02-02T09:40:48.656583+0000#012modified#0112026-02-02T09:41:09.133293+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=24310}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 24310 members: 24310#012[mds.cephfs.compute-2.vvohrf{0:24310} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/673721799,v1:192.168.122.102:6805/673721799] compat {c=[1],r=[1],i=[1fff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.clmmzw{-1:14607} state up:standby seq 1 addr [v2:192.168.122.100:6806/4233871501,v1:192.168.122.100:6807/4233871501] compat {c=[1],r=[1],i=[1fff]}]#012[mds.cephfs.compute-1.khfsen{-1:24317} state up:standby seq 1 addr [v2:192.168.122.101:6804/685771812,v1:192.168.122.101:6805/685771812] compat {c=[1],r=[1],i=[1fff]}]
Feb  2 04:41:09 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/673721799,v1:192.168.122.102:6805/673721799] up:active
Feb  2 04:41:09 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.vvohrf=up:active} 2 up:standby
Feb  2 04:41:09 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v18: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 1.3 KiB/s wr, 4 op/s
Feb  2 04:41:09 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Feb  2 04:41:09 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:09 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Feb  2 04:41:09 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:09 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Feb  2 04:41:09 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:09 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.1.0.compute-2.dciyfa
Feb  2 04:41:09 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.1.0.compute-2.dciyfa
Feb  2 04:41:09 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.dciyfa", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]} v 0)
Feb  2 04:41:09 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.dciyfa", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Feb  2 04:41:10 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.dciyfa", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Feb  2 04:41:10 np0005604790 ceph-mgr[74785]: [cephadm INFO root] Ensuring nfs.cephfs.1 is in the ganesha grace table
Feb  2 04:41:10 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Ensuring nfs.cephfs.1 is in the ganesha grace table
Feb  2 04:41:10 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]} v 0)
Feb  2 04:41:10 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Feb  2 04:41:10 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Feb  2 04:41:10 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 04:41:10 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 04:41:10 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 6.a deep-scrub starts
Feb  2 04:41:10 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 6.a deep-scrub ok
Feb  2 04:41:10 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:41:10 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:41:10 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:41:10.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:41:10 np0005604790 python3[97013]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d241d473-9fcb-5f74-b163-f1ca4454e7f1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:41:10 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:41:10 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:41:10 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:41:10.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
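[annotator's note] The anonymous "HEAD / HTTP/1.0" requests arriving every two seconds look like external health checks against the beast frontends (endpoints 192.168.122.10x:8082 per the frontend_config#0 metadata earlier); the prober itself never appears in this log. A comparable probe, with the caveat that http.client speaks HTTP/1.1 rather than 1.0:

    import http.client

    # HEAD / against a beast frontend; 200 means the gateway is serving.
    conn = http.client.HTTPConnection("192.168.122.100", 8082, timeout=2)
    conn.request("HEAD", "/")
    print(conn.getresponse().status)
    conn.close()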
Feb  2 04:41:10 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:10 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:10 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:10 np0005604790 ceph-mon[74489]: Creating key for client.nfs.cephfs.1.0.compute-2.dciyfa
Feb  2 04:41:10 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.dciyfa", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Feb  2 04:41:10 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.dciyfa", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Feb  2 04:41:10 np0005604790 ceph-mon[74489]: Ensuring nfs.cephfs.1 is in the ganesha grace table
Feb  2 04:41:10 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Feb  2 04:41:10 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Feb  2 04:41:10 np0005604790 podman[97014]: 2026-02-02 09:41:10.29540263 +0000 UTC m=+0.052248876 container create c385001fbc38b624f88e0aca29a8de1114bfe7ed80b573ab810e5e383098fb4e (image=quay.io/ceph/ceph:v19, name=hardcore_zhukovsky, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:41:10 np0005604790 systemd[1]: Started libpod-conmon-c385001fbc38b624f88e0aca29a8de1114bfe7ed80b573ab810e5e383098fb4e.scope.
Feb  2 04:41:10 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:41:10 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce483239ea4d27de9b4f54a43613f8f16478247835aeb5209f9a46833c9a6b14/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:41:10 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce483239ea4d27de9b4f54a43613f8f16478247835aeb5209f9a46833c9a6b14/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:41:10 np0005604790 podman[97014]: 2026-02-02 09:41:10.27892559 +0000 UTC m=+0.035771856 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:41:10 np0005604790 podman[97014]: 2026-02-02 09:41:10.390133637 +0000 UTC m=+0.146979903 container init c385001fbc38b624f88e0aca29a8de1114bfe7ed80b573ab810e5e383098fb4e (image=quay.io/ceph/ceph:v19, name=hardcore_zhukovsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True)
Feb  2 04:41:10 np0005604790 podman[97014]: 2026-02-02 09:41:10.39714918 +0000 UTC m=+0.153995456 container start c385001fbc38b624f88e0aca29a8de1114bfe7ed80b573ab810e5e383098fb4e (image=quay.io/ceph/ceph:v19, name=hardcore_zhukovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb  2 04:41:10 np0005604790 podman[97014]: 2026-02-02 09:41:10.401116954 +0000 UTC m=+0.157963210 container attach c385001fbc38b624f88e0aca29a8de1114bfe7ed80b573ab810e5e383098fb4e (image=quay.io/ceph/ceph:v19, name=hardcore_zhukovsky, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 04:41:10 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0)
Feb  2 04:41:10 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/789053282' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Feb  2 04:41:10 np0005604790 hardcore_zhukovsky[97029]: mimic
Feb  2 04:41:10 np0005604790 systemd[1]: libpod-c385001fbc38b624f88e0aca29a8de1114bfe7ed80b573ab810e5e383098fb4e.scope: Deactivated successfully.
Feb  2 04:41:10 np0005604790 podman[97014]: 2026-02-02 09:41:10.765129919 +0000 UTC m=+0.521976195 container died c385001fbc38b624f88e0aca29a8de1114bfe7ed80b573ab810e5e383098fb4e (image=quay.io/ceph/ceph:v19, name=hardcore_zhukovsky, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Feb  2 04:41:10 np0005604790 systemd[1]: var-lib-containers-storage-overlay-ce483239ea4d27de9b4f54a43613f8f16478247835aeb5209f9a46833c9a6b14-merged.mount: Deactivated successfully.
Feb  2 04:41:10 np0005604790 podman[97014]: 2026-02-02 09:41:10.809868458 +0000 UTC m=+0.566714744 container remove c385001fbc38b624f88e0aca29a8de1114bfe7ed80b573ab810e5e383098fb4e (image=quay.io/ceph/ceph:v19, name=hardcore_zhukovsky, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Feb  2 04:41:10 np0005604790 systemd[1]: libpod-conmon-c385001fbc38b624f88e0aca29a8de1114bfe7ed80b573ab810e5e383098fb4e.scope: Deactivated successfully.
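[annotator's note] The hardcore_zhukovsky one-shot above answered "mimic" for osd get-require-min-compat-client, i.e. the OSD map still admits clients as old as Mimic. A sketch for inspecting that floor before raising it with the companion set-require-min-compat-client command; the target release below is an illustrative choice, not something this deployment does:

    import subprocess

    def min_compat() -> str:
        return subprocess.run(
            ["ceph", "osd", "get-require-min-compat-client"],
            capture_output=True, text=True, check=True,
        ).stdout.strip()

    if min_compat() == "mimic":
        # Raising the floor rejects older clients; pick the release deliberately.
        subprocess.run(["ceph", "osd", "set-require-min-compat-client", "reef"],
                       check=True)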
Feb  2 04:41:10 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 04:41:11 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Feb  2 04:41:11 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Feb  2 04:41:11 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).mds e10 new map
Feb  2 04:41:11 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).mds e10 print_map#012e10#012btime 2026-02-02T09:41:11:335086+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0119#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-02-02T09:40:48.656583+0000#012modified#0112026-02-02T09:41:09.133293+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=24310}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 24310 members: 24310#012[mds.cephfs.compute-2.vvohrf{0:24310} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/673721799,v1:192.168.122.102:6805/673721799] compat {c=[1],r=[1],i=[1fff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.clmmzw{-1:14607} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/4233871501,v1:192.168.122.100:6807/4233871501] compat {c=[1],r=[1],i=[1fff]}]#012[mds.cephfs.compute-1.khfsen{-1:24317} state up:standby seq 1 addr [v2:192.168.122.101:6804/685771812,v1:192.168.122.101:6805/685771812] compat {c=[1],r=[1],i=[1fff]}]
Feb  2 04:41:11 np0005604790 ceph-mds[96761]: mds.cephfs.compute-0.clmmzw Updating MDS map to version 10 from mon.0
Feb  2 04:41:11 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/4233871501,v1:192.168.122.100:6807/4233871501] up:standby
Feb  2 04:41:11 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.vvohrf=up:active} 2 up:standby
Feb  2 04:41:11 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v19: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 1.3 KiB/s wr, 4 op/s
Feb  2 04:41:11 np0005604790 python3[97092]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid d241d473-9fcb-5f74-b163-f1ca4454e7f1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:41:11 np0005604790 podman[97093]: 2026-02-02 09:41:11.769963885 +0000 UTC m=+0.035763886 container create 8b087dd4675868df9bb2e2471a4c4a99cd1f589762b742c3285e5e14c3c1fb4a (image=quay.io/ceph/ceph:v19, name=priceless_rubin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0)
Feb  2 04:41:11 np0005604790 systemd[1]: Started libpod-conmon-8b087dd4675868df9bb2e2471a4c4a99cd1f589762b742c3285e5e14c3c1fb4a.scope.
Feb  2 04:41:11 np0005604790 podman[97093]: 2026-02-02 09:41:11.7533338 +0000 UTC m=+0.019133771 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:41:11 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:41:11 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8ba02f24a051564e73ec3ef79bf9aeadf71b4acbf3e65630007d347da6c91c8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:41:11 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8ba02f24a051564e73ec3ef79bf9aeadf71b4acbf3e65630007d347da6c91c8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:41:11 np0005604790 podman[97093]: 2026-02-02 09:41:11.87461779 +0000 UTC m=+0.140417771 container init 8b087dd4675868df9bb2e2471a4c4a99cd1f589762b742c3285e5e14c3c1fb4a (image=quay.io/ceph/ceph:v19, name=priceless_rubin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Feb  2 04:41:11 np0005604790 podman[97093]: 2026-02-02 09:41:11.906183675 +0000 UTC m=+0.171983656 container start 8b087dd4675868df9bb2e2471a4c4a99cd1f589762b742c3285e5e14c3c1fb4a (image=quay.io/ceph/ceph:v19, name=priceless_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Feb  2 04:41:11 np0005604790 podman[97093]: 2026-02-02 09:41:11.921133466 +0000 UTC m=+0.186933457 container attach 8b087dd4675868df9bb2e2471a4c4a99cd1f589762b742c3285e5e14c3c1fb4a (image=quay.io/ceph/ceph:v19, name=priceless_rubin, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Feb  2 04:41:12 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Feb  2 04:41:12 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Feb  2 04:41:12 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:41:12 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:41:12 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:41:12.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 04:41:12 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:41:12 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:41:12 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:41:12.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:41:12 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions", "format": "json"} v 0)
Feb  2 04:41:12 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4222980362' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Feb  2 04:41:12 np0005604790 priceless_rubin[97109]: 
Feb  2 04:41:12 np0005604790 priceless_rubin[97109]: {"mon":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"mgr":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"osd":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"mds":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"rgw":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"overall":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":15}}
Feb  2 04:41:12 np0005604790 systemd[1]: libpod-8b087dd4675868df9bb2e2471a4c4a99cd1f589762b742c3285e5e14c3c1fb4a.scope: Deactivated successfully.
Feb  2 04:41:12 np0005604790 podman[97093]: 2026-02-02 09:41:12.352299557 +0000 UTC m=+0.618099528 container died 8b087dd4675868df9bb2e2471a4c4a99cd1f589762b742c3285e5e14c3c1fb4a (image=quay.io/ceph/ceph:v19, name=priceless_rubin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 04:41:12 np0005604790 systemd[1]: var-lib-containers-storage-overlay-c8ba02f24a051564e73ec3ef79bf9aeadf71b4acbf3e65630007d347da6c91c8-merged.mount: Deactivated successfully.
Feb  2 04:41:12 np0005604790 podman[97093]: 2026-02-02 09:41:12.422951413 +0000 UTC m=+0.688751424 container remove 8b087dd4675868df9bb2e2471a4c4a99cd1f589762b742c3285e5e14c3c1fb4a (image=quay.io/ceph/ceph:v19, name=priceless_rubin, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb  2 04:41:12 np0005604790 systemd[1]: libpod-conmon-8b087dd4675868df9bb2e2471a4c4a99cd1f589762b742c3285e5e14c3c1fb4a.scope: Deactivated successfully.
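The episode above is a one-shot query: Ansible launches a --rm container from quay.io/ceph/ceph:v19 with "ceph versions -f json" as the entrypoint arguments, and the priceless_rubin output line carries the result: every daemon class (mon, mgr, osd, mds, rgw) reports 19.2.3 squid, 15 daemons overall. A minimal sketch of the same check, assuming the admin keyring mounted under /etc/ceph as in the command logged at 04:41:11 (the fsid and explicit -c/-k flags are dropped here for brevity):

    import json
    import subprocess

    # Re-issue the one-shot query from the log and verify that the whole
    # cluster reports a single ceph version before continuing.
    out = subprocess.run(
        ["podman", "run", "--rm", "--net=host",
         "--volume", "/etc/ceph:/etc/ceph:z",
         "--entrypoint", "ceph", "quay.io/ceph/ceph:v19",
         "versions", "-f", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    versions = json.loads(out)
    # "overall" maps each distinct version string to a daemon count,
    # e.g. {"ceph version 19.2.3 (...) squid (stable)": 15}
    assert len(versions["overall"]) == 1, versions["overall"]
    print("uniform:", next(iter(versions["overall"])))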
Feb  2 04:41:12 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).mds e11 new map
Feb  2 04:41:12 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).mds e11 print_map
e11
btime 2026-02-02T09:41:12.473139+0000
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
legacy client fscid: 1

Filesystem 'cephfs' (1)
fs_name  cephfs
epoch  9
flags  12 joinable allow_snaps allow_multimds_snaps
created  2026-02-02T09:40:48.656583+0000
modified  2026-02-02T09:41:09.133293+0000
tableserver  0
root  0
session_timeout  60
session_autoclose  300
max_file_size  1099511627776
max_xattr_size  65536
required_client_features  {}
last_failure  0
last_failure_osd_epoch  0
compat  compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
max_mds  1
in  0
up  {0=24310}
failed
damaged
stopped
data_pools  [7]
metadata_pool  6
inline_data  disabled
balancer
bal_rank_mask  -1
standby_count_wanted  1
qdb_cluster  leader: 24310 members: 24310
[mds.cephfs.compute-2.vvohrf{0:24310} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/673721799,v1:192.168.122.102:6805/673721799] compat {c=[1],r=[1],i=[1fff]}]

Standby daemons:

[mds.cephfs.compute-0.clmmzw{-1:14607} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/4233871501,v1:192.168.122.100:6807/4233871501] compat {c=[1],r=[1],i=[1fff]}]
[mds.cephfs.compute-1.khfsen{-1:24317} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.101:6804/685771812,v1:192.168.122.101:6805/685771812] compat {c=[1],r=[1],i=[1fff]}]
Feb  2 04:41:12 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.101:6804/685771812,v1:192.168.122.101:6805/685771812] up:standby
Feb  2 04:41:12 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.vvohrf=up:active} 2 up:standby
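Decoded, the print_map record shows filesystem 'cephfs' at epoch 9 with max_mds 1, rank 0 active on mds.cephfs.compute-2.vvohrf, and two standbys, matching the fsmap summary line above. A sketch that pulls the same facts from the machine-readable dump; this assumes the usual "ceph fs dump" JSON layout with a top-level "standbys" list, and the admin keyring at its default path as on this host:

    import json
    import subprocess

    # Query the fsmap the monitor just printed, in JSON form.
    dump = json.loads(subprocess.run(
        ["ceph", "fs", "dump", "--format", "json"],
        capture_output=True, text=True, check=True).stdout)

    for fs in dump["filesystems"]:
        mdsmap = fs["mdsmap"]
        active = [i["name"] for i in mdsmap["info"].values()
                  if i["state"] == "up:active"]
        print(mdsmap["fs_name"], "epoch", mdsmap["epoch"], "active:", active)

    print("standbys:", [s["name"] for s in dump["standbys"]])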
Feb  2 04:41:12 np0005604790 ceph-mgr[74785]: [progress INFO root] Writing back 14 completed events
Feb  2 04:41:12 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Feb  2 04:41:12 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:13 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"} v 0)
Feb  2 04:41:13 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Feb  2 04:41:13 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Feb  2 04:41:13 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Feb  2 04:41:13 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Feb  2 04:41:13 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.services.nfs] Rados config object exists: conf-nfs.cephfs
Feb  2 04:41:13 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Rados config object exists: conf-nfs.cephfs
Feb  2 04:41:13 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.1.0.compute-2.dciyfa-rgw
Feb  2 04:41:13 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.1.0.compute-2.dciyfa-rgw
Feb  2 04:41:13 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.dciyfa-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]} v 0)
Feb  2 04:41:13 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.dciyfa-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Feb  2 04:41:13 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.dciyfa-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Feb  2 04:41:13 np0005604790 ceph-mgr[74785]: [cephadm WARNING cephadm.services.nfs] Bind address in nfs.cephfs.1.0.compute-2.dciyfa's ganesha conf is defaulting to empty
Feb  2 04:41:13 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [WRN] : Bind address in nfs.cephfs.1.0.compute-2.dciyfa's ganesha conf is defaulting to empty
Feb  2 04:41:13 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 04:41:13 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 04:41:13 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Deploying daemon nfs.cephfs.1.0.compute-2.dciyfa on compute-2
Feb  2 04:41:13 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Deploying daemon nfs.cephfs.1.0.compute-2.dciyfa on compute-2
Feb  2 04:41:13 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:13 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Feb  2 04:41:13 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Feb  2 04:41:13 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.dciyfa-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Feb  2 04:41:13 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.dciyfa-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
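The dispatch/finished pairs above are the mon's audit trail for the keys cephadm mints per NFS daemon: a key restricted to the .nfs pool in the cephfs namespace for recovery state, and an -rgw key whose OSD cap is scoped by pool tag ("allow rwx tag rgw *=*") so it can reach any RGW data pool. A standalone equivalent, sketched with the entity name taken from the log (cephadm itself issues this through the mon command interface, not a shell):

    import subprocess

    # Mint (or fetch, if it already exists) the scoped key for the
    # ganesha RGW FSAL, exactly as the audit log records it.
    entity = "client.nfs.cephfs.1.0.compute-2.dciyfa-rgw"
    keyring = subprocess.run(
        ["ceph", "auth", "get-or-create", entity,
         "mon", "allow r",
         "osd", "allow rwx tag rgw *=*"],
        capture_output=True, text=True, check=True).stdout
    print(keyring)  # "[client....]\n  key = ..." ready for a keyring file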
Feb  2 04:41:13 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v20: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 1.3 KiB/s wr, 4 op/s
Feb  2 04:41:14 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Feb  2 04:41:14 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Feb  2 04:41:14 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:41:14 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:41:14 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:41:14.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 04:41:14 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:41:14 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:41:14 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:41:14.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:41:14 np0005604790 ceph-mon[74489]: Rados config object exists: conf-nfs.cephfs
Feb  2 04:41:14 np0005604790 ceph-mon[74489]: Creating key for client.nfs.cephfs.1.0.compute-2.dciyfa-rgw
Feb  2 04:41:14 np0005604790 ceph-mon[74489]: Bind address in nfs.cephfs.1.0.compute-2.dciyfa's ganesha conf is defaulting to empty
Feb  2 04:41:14 np0005604790 ceph-mon[74489]: Deploying daemon nfs.cephfs.1.0.compute-2.dciyfa on compute-2
Feb  2 04:41:14 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Feb  2 04:41:14 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:14 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Feb  2 04:41:14 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:14 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Feb  2 04:41:14 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:14 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.2.0.compute-0.fdwwab
Feb  2 04:41:14 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.2.0.compute-0.fdwwab
Feb  2 04:41:14 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.fdwwab", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]} v 0)
Feb  2 04:41:14 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.fdwwab", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Feb  2 04:41:14 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.fdwwab", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Feb  2 04:41:14 np0005604790 ceph-mgr[74785]: [cephadm INFO root] Ensuring nfs.cephfs.2 is in the ganesha grace table
Feb  2 04:41:14 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Ensuring nfs.cephfs.2 is in the ganesha grace table
Feb  2 04:41:14 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]} v 0)
Feb  2 04:41:14 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Feb  2 04:41:14 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Feb  2 04:41:14 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 04:41:14 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 04:41:15 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"} v 0)
Feb  2 04:41:15 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Feb  2 04:41:15 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Feb  2 04:41:15 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Feb  2 04:41:15 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Feb  2 04:41:15 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.services.nfs] Rados config object exists: conf-nfs.cephfs
Feb  2 04:41:15 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Rados config object exists: conf-nfs.cephfs
Feb  2 04:41:15 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.2.0.compute-0.fdwwab-rgw
Feb  2 04:41:15 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.2.0.compute-0.fdwwab-rgw
Feb  2 04:41:15 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.fdwwab-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]} v 0)
Feb  2 04:41:15 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.fdwwab-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Feb  2 04:41:15 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.fdwwab-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Feb  2 04:41:15 np0005604790 ceph-mgr[74785]: [cephadm WARNING cephadm.services.nfs] Bind address in nfs.cephfs.2.0.compute-0.fdwwab's ganesha conf is defaulting to empty
Feb  2 04:41:15 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [WRN] : Bind address in nfs.cephfs.2.0.compute-0.fdwwab's ganesha conf is defaulting to empty
Feb  2 04:41:15 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 04:41:15 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 04:41:15 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Deploying daemon nfs.cephfs.2.0.compute-0.fdwwab on compute-0
Feb  2 04:41:15 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Deploying daemon nfs.cephfs.2.0.compute-0.fdwwab on compute-0
Feb  2 04:41:15 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:15 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:15 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:15 np0005604790 ceph-mon[74489]: Creating key for client.nfs.cephfs.2.0.compute-0.fdwwab
Feb  2 04:41:15 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.fdwwab", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Feb  2 04:41:15 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.fdwwab", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Feb  2 04:41:15 np0005604790 ceph-mon[74489]: Ensuring nfs.cephfs.2 is in the ganesha grace table
Feb  2 04:41:15 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Feb  2 04:41:15 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Feb  2 04:41:15 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Feb  2 04:41:15 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Feb  2 04:41:15 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.fdwwab-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Feb  2 04:41:15 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.fdwwab-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Feb  2 04:41:15 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v21: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 1.8 KiB/s wr, 5 op/s
Feb  2 04:41:15 np0005604790 podman[97291]: 2026-02-02 09:41:15.736764804 +0000 UTC m=+0.053841098 container create 527091dfffa5e4a58cf1c4985dfbdd1b84b0a6672227a27a8ab55bdb454803c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb  2 04:41:15 np0005604790 systemd[1]: Started libpod-conmon-527091dfffa5e4a58cf1c4985dfbdd1b84b0a6672227a27a8ab55bdb454803c3.scope.
Feb  2 04:41:15 np0005604790 podman[97291]: 2026-02-02 09:41:15.717013698 +0000 UTC m=+0.034090042 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:41:15 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:41:15 np0005604790 podman[97291]: 2026-02-02 09:41:15.825458213 +0000 UTC m=+0.142534507 container init 527091dfffa5e4a58cf1c4985dfbdd1b84b0a6672227a27a8ab55bdb454803c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_easley, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Feb  2 04:41:15 np0005604790 podman[97291]: 2026-02-02 09:41:15.833200675 +0000 UTC m=+0.150276969 container start 527091dfffa5e4a58cf1c4985dfbdd1b84b0a6672227a27a8ab55bdb454803c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_easley, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 04:41:15 np0005604790 podman[97291]: 2026-02-02 09:41:15.836614804 +0000 UTC m=+0.153691168 container attach 527091dfffa5e4a58cf1c4985dfbdd1b84b0a6672227a27a8ab55bdb454803c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_easley, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb  2 04:41:15 np0005604790 magical_easley[97308]: 167 167
Feb  2 04:41:15 np0005604790 systemd[1]: libpod-527091dfffa5e4a58cf1c4985dfbdd1b84b0a6672227a27a8ab55bdb454803c3.scope: Deactivated successfully.
Feb  2 04:41:15 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 04:41:15 np0005604790 podman[97313]: 2026-02-02 09:41:15.901284045 +0000 UTC m=+0.040874920 container died 527091dfffa5e4a58cf1c4985dfbdd1b84b0a6672227a27a8ab55bdb454803c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_easley, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 04:41:15 np0005604790 systemd[1]: var-lib-containers-storage-overlay-93e0b8d41595efa5fdd4b11ad9565dab893d5e4d33e84bd3a6949eb9f8397ce5-merged.mount: Deactivated successfully.
Feb  2 04:41:15 np0005604790 podman[97313]: 2026-02-02 09:41:15.939049862 +0000 UTC m=+0.078640737 container remove 527091dfffa5e4a58cf1c4985dfbdd1b84b0a6672227a27a8ab55bdb454803c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_easley, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:41:15 np0005604790 systemd[1]: libpod-conmon-527091dfffa5e4a58cf1c4985dfbdd1b84b0a6672227a27a8ab55bdb454803c3.scope: Deactivated successfully.
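magical_easley is another throwaway container whose only output is "167 167", which matches the uid/gid of the ceph user in upstream Ceph images. This is consistent with cephadm's ownership probe, which stats a path inside the image so host-side daemon directories can be chowned to match; a hedged reconstruction follows (the stat'd path is an assumption, since the log does not show the entrypoint arguments):

    import subprocess

    # Ask the image which uid/gid owns /var/lib/ceph, so host-side data
    # directories can be created with matching ownership.
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat",
         "quay.io/ceph/ceph:v19", "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True).stdout.split()
    uid, gid = map(int, out)
    print(uid, gid)  # 167 167 for upstream Ceph images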
Feb  2 04:41:16 np0005604790 systemd[1]: Reloading.
Feb  2 04:41:16 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 6.e scrub starts
Feb  2 04:41:16 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 6.e scrub ok
Feb  2 04:41:16 np0005604790 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 04:41:16 np0005604790 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 04:41:16 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:41:16 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:41:16 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:41:16.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:41:16 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:41:16 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:41:16 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:41:16.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:41:16 np0005604790 systemd[1]: Reloading.
Feb  2 04:41:16 np0005604790 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 04:41:16 np0005604790 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 04:41:16 np0005604790 ceph-mon[74489]: Rados config object exists: conf-nfs.cephfs
Feb  2 04:41:16 np0005604790 ceph-mon[74489]: Creating key for client.nfs.cephfs.2.0.compute-0.fdwwab-rgw
Feb  2 04:41:16 np0005604790 ceph-mon[74489]: Bind address in nfs.cephfs.2.0.compute-0.fdwwab's ganesha conf is defaulting to empty
Feb  2 04:41:16 np0005604790 ceph-mon[74489]: Deploying daemon nfs.cephfs.2.0.compute-0.fdwwab on compute-0
Feb  2 04:41:16 np0005604790 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.fdwwab for d241d473-9fcb-5f74-b163-f1ca4454e7f1...
Feb  2 04:41:16 np0005604790 podman[97452]: 2026-02-02 09:41:16.862462289 +0000 UTC m=+0.043164790 container create 4dbbc2880b363c29701e75389ef46bb1b6a317f598aba4db2d598b0c88013bb0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 04:41:16 np0005604790 podman[97452]: 2026-02-02 09:41:16.842794224 +0000 UTC m=+0.023496745 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:41:16 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db2abf9157a619349a7106ff0239cd1f0ec34b0e136456ca92e243a98eefde34/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Feb  2 04:41:16 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db2abf9157a619349a7106ff0239cd1f0ec34b0e136456ca92e243a98eefde34/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 04:41:16 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db2abf9157a619349a7106ff0239cd1f0ec34b0e136456ca92e243a98eefde34/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:41:16 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db2abf9157a619349a7106ff0239cd1f0ec34b0e136456ca92e243a98eefde34/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.fdwwab-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 04:41:16 np0005604790 podman[97452]: 2026-02-02 09:41:16.978864751 +0000 UTC m=+0.159567262 container init 4dbbc2880b363c29701e75389ef46bb1b6a317f598aba4db2d598b0c88013bb0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:41:16 np0005604790 podman[97452]: 2026-02-02 09:41:16.989841908 +0000 UTC m=+0.170544399 container start 4dbbc2880b363c29701e75389ef46bb1b6a317f598aba4db2d598b0c88013bb0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb  2 04:41:16 np0005604790 bash[97452]: 4dbbc2880b363c29701e75389ef46bb1b6a317f598aba4db2d598b0c88013bb0
Feb  2 04:41:17 np0005604790 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.fdwwab for d241d473-9fcb-5f74-b163-f1ca4454e7f1.
Feb  2 04:41:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:17 : epoch 6980713d : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Feb  2 04:41:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:17 : epoch 6980713d : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Feb  2 04:41:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:17 : epoch 6980713d : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Feb  2 04:41:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:17 : epoch 6980713d : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Feb  2 04:41:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:17 : epoch 6980713d : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Feb  2 04:41:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:17 : epoch 6980713d : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Feb  2 04:41:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:17 : epoch 6980713d : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Feb  2 04:41:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 04:41:17 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 04:41:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:17 : epoch 6980713d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 04:41:17 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Feb  2 04:41:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:17 : epoch 6980713d : compute-0 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to lst kv ret=-2
Feb  2 04:41:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:17 : epoch 6980713d : compute-0 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
Feb  2 04:41:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:17 : epoch 6980713d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 04:41:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:17 : epoch 6980713d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 04:41:17 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:17 np0005604790 ceph-mgr[74785]: [progress INFO root] complete: finished ev a0cd3058-d53f-4222-b614-b57dab5bca67 (Updating nfs.cephfs deployment (+3 -> 3))
Feb  2 04:41:17 np0005604790 ceph-mgr[74785]: [progress INFO root] Completed event a0cd3058-d53f-4222-b614-b57dab5bca67 (Updating nfs.cephfs deployment (+3 -> 3)) in 9 seconds
Feb  2 04:41:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:17 : epoch 6980713d : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 04:41:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:17 : epoch 6980713d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 04:41:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:17 : epoch 6980713d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 04:41:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:17 : epoch 6980713d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 04:41:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:17 : epoch 6980713d : compute-0 : ganesha.nfsd-2[main] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-0000000000000003:nfs.cephfs.2: -2
Feb  2 04:41:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:17 : epoch 6980713d : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Feb  2 04:41:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:17 : epoch 6980713d : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Feb  2 04:41:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:17 : epoch 6980713d : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Feb  2 04:41:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:17 : epoch 6980713d : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Feb  2 04:41:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:17 : epoch 6980713d : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Feb  2 04:41:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:17 : epoch 6980713d : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Feb  2 04:41:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:17 : epoch 6980713d : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Feb  2 04:41:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:17 : epoch 6980713d : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Feb  2 04:41:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:17 : epoch 6980713d : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Feb  2 04:41:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:17 : epoch 6980713d : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Feb  2 04:41:17 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 6.d scrub starts
Feb  2 04:41:17 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 6.d scrub ok
Feb  2 04:41:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Feb  2 04:41:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:17 : epoch 6980713d : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Feb  2 04:41:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:17 : epoch 6980713d : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Feb  2 04:41:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:17 : epoch 6980713d : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Feb  2 04:41:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:17 : epoch 6980713d : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Feb  2 04:41:17 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:17 : epoch 6980713d : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Feb  2 04:41:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:17 : epoch 6980713d : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Feb  2 04:41:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:17 : epoch 6980713d : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Feb  2 04:41:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:17 : epoch 6980713d : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Feb  2 04:41:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:17 : epoch 6980713d : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Feb  2 04:41:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:17 : epoch 6980713d : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Feb  2 04:41:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:17 : epoch 6980713d : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Feb  2 04:41:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:17 : epoch 6980713d : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Feb  2 04:41:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:17 : epoch 6980713d : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Feb  2 04:41:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:17 : epoch 6980713d : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Feb  2 04:41:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:17 : epoch 6980713d : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Feb  2 04:41:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:17 : epoch 6980713d : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Feb  2 04:41:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:17 : epoch 6980713d : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
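Startup summary for nfs.cephfs.2.0.compute-0.fdwwab: the rados_cluster recovery backend finds no prior client records (the -2/ENOENT traversal and removal failures are expected on a first start), the 90-second grace period is lifted immediately because no clients need to reclaim, DBUS and Kerberos are unavailable inside the container (hence the CRIT/WARN noise), and the server initializes with no exports yet, since export blocks arrive via the RADOS_URLS config object. A sketch for inspecting the state ganesha keeps in RADOS, using the .nfs pool and cephfs namespace that match the caps granted earlier in the log (object names are whatever the backend created; none are assumed here):

    import rados

    # List the objects in the .nfs pool under the "cephfs" namespace:
    # the grace db, per-node recovery records, and conf-nfs.cephfs.
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    ioctx = cluster.open_ioctx(".nfs")
    ioctx.set_namespace("cephfs")
    for obj in ioctx.list_objects():
        print(obj.key)
    ioctx.close()
    cluster.shutdown()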
Feb  2 04:41:17 np0005604790 ceph-mgr[74785]: [progress INFO root] update: starting ev 90b9a1bc-ed22-434f-a0b1-cf741a79d3be (Updating ingress.nfs.cephfs deployment (+6 -> 6))
Feb  2 04:41:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.nfs.cephfs/monitor_password}] v 0)
Feb  2 04:41:17 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:17 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.nfs.cephfs.compute-1.sryqbx on compute-1
Feb  2 04:41:17 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.nfs.cephfs.compute-1.sryqbx on compute-1
Feb  2 04:41:17 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:17 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:17 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:17 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:17 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:17 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v22: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 1.8 KiB/s wr, 5 op/s
Feb  2 04:41:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:41:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:41:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:41:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:41:17 np0005604790 ceph-mgr[74785]: [progress INFO root] Writing back 15 completed events
Feb  2 04:41:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Feb  2 04:41:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:41:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:41:17 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:18 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 6.1a scrub starts
Feb  2 04:41:18 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 6.1a scrub ok
Feb  2 04:41:18 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:41:18 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:41:18 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:41:18.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:41:18 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:41:18 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:41:18 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:41:18.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:41:18 np0005604790 ceph-mon[74489]: Deploying daemon haproxy.nfs.cephfs.compute-1.sryqbx on compute-1
Feb  2 04:41:18 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:19 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v23: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 6.0 KiB/s rd, 3.6 KiB/s wr, 12 op/s
Feb  2 04:41:20 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:41:20 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:41:20 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:41:20.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 04:41:20 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:41:20 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:41:20 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:41:20.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
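The anonymous "HEAD / HTTP/1.0" probes recurring every two seconds from 192.168.122.100 and 192.168.122.102 look like load-balancer health checks against radosgw (an ingress service fronting rgw, or similar; the log records only the client side). The probe is trivial to reproduce; the target port below is an assumption, as the log does not show it:

    import http.client

    # Send the same anonymous HEAD / the balancer uses; radosgw answers
    # 200 with an empty body when healthy.
    conn = http.client.HTTPConnection("192.168.122.100", 8080, timeout=2)
    conn.request("HEAD", "/")
    print(conn.getresponse().status)  # expect 200
    conn.close()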
Feb  2 04:41:20 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 04:41:21 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Feb  2 04:41:21 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:21 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Feb  2 04:41:21 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:21 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Feb  2 04:41:21 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:21 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.nfs.cephfs.compute-0.ooxkuo on compute-0
Feb  2 04:41:21 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.nfs.cephfs.compute-0.ooxkuo on compute-0
Feb  2 04:41:21 np0005604790 podman[97614]: 2026-02-02 09:41:21.631420854 +0000 UTC m=+0.039514794 container create 1425cf7ac8bc96099d01feff96e56eeee9cd942a67bb7fe91ad9347e598f5f83 (image=quay.io/ceph/haproxy:2.3, name=trusting_diffie)
Feb  2 04:41:21 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v24: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 5.9 KiB/s rd, 2.2 KiB/s wr, 8 op/s
Feb  2 04:41:21 np0005604790 systemd[1]: Started libpod-conmon-1425cf7ac8bc96099d01feff96e56eeee9cd942a67bb7fe91ad9347e598f5f83.scope.
Feb  2 04:41:21 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:41:21 np0005604790 podman[97614]: 2026-02-02 09:41:21.6109732 +0000 UTC m=+0.019067160 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Feb  2 04:41:21 np0005604790 podman[97614]: 2026-02-02 09:41:21.718274465 +0000 UTC m=+0.126368475 container init 1425cf7ac8bc96099d01feff96e56eeee9cd942a67bb7fe91ad9347e598f5f83 (image=quay.io/ceph/haproxy:2.3, name=trusting_diffie)
Feb  2 04:41:21 np0005604790 podman[97614]: 2026-02-02 09:41:21.72651891 +0000 UTC m=+0.134612890 container start 1425cf7ac8bc96099d01feff96e56eeee9cd942a67bb7fe91ad9347e598f5f83 (image=quay.io/ceph/haproxy:2.3, name=trusting_diffie)
Feb  2 04:41:21 np0005604790 podman[97614]: 2026-02-02 09:41:21.729462307 +0000 UTC m=+0.137556287 container attach 1425cf7ac8bc96099d01feff96e56eeee9cd942a67bb7fe91ad9347e598f5f83 (image=quay.io/ceph/haproxy:2.3, name=trusting_diffie)
Feb  2 04:41:21 np0005604790 trusting_diffie[97631]: 0 0
Feb  2 04:41:21 np0005604790 systemd[1]: libpod-1425cf7ac8bc96099d01feff96e56eeee9cd942a67bb7fe91ad9347e598f5f83.scope: Deactivated successfully.
Feb  2 04:41:21 np0005604790 podman[97614]: 2026-02-02 09:41:21.734010246 +0000 UTC m=+0.142104186 container died 1425cf7ac8bc96099d01feff96e56eeee9cd942a67bb7fe91ad9347e598f5f83 (image=quay.io/ceph/haproxy:2.3, name=trusting_diffie)
Feb  2 04:41:21 np0005604790 systemd[1]: var-lib-containers-storage-overlay-33bf1121474343c7cdc10e95f8ed8707d0c790a4bc4ba27d8e53dcdc784ce434-merged.mount: Deactivated successfully.
Feb  2 04:41:21 np0005604790 podman[97614]: 2026-02-02 09:41:21.774206147 +0000 UTC m=+0.182300087 container remove 1425cf7ac8bc96099d01feff96e56eeee9cd942a67bb7fe91ad9347e598f5f83 (image=quay.io/ceph/haproxy:2.3, name=trusting_diffie)
Feb  2 04:41:21 np0005604790 systemd[1]: libpod-conmon-1425cf7ac8bc96099d01feff96e56eeee9cd942a67bb7fe91ad9347e598f5f83.scope: Deactivated successfully.
Feb  2 04:41:21 np0005604790 systemd[1]: Reloading.
Feb  2 04:41:21 np0005604790 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 04:41:21 np0005604790 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 04:41:22 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:22 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:22 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:22 np0005604790 ceph-mon[74489]: Deploying daemon haproxy.nfs.cephfs.compute-0.ooxkuo on compute-0
Feb  2 04:41:22 np0005604790 systemd[1]: Reloading.
Feb  2 04:41:22 np0005604790 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 04:41:22 np0005604790 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 04:41:22 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:41:22 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:41:22 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:41:22.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:41:22 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:41:22 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:41:22 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:41:22.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:41:22 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:22 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c7c000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:41:22 np0005604790 systemd[1]: Starting Ceph haproxy.nfs.cephfs.compute-0.ooxkuo for d241d473-9fcb-5f74-b163-f1ca4454e7f1...
Feb  2 04:41:22 np0005604790 podman[97781]: 2026-02-02 09:41:22.617411438 +0000 UTC m=+0.058249294 container create 5812768ed72c0881a5b563a239565cde81bb05b6f1e1beebab3f203681cce03e (image=quay.io/ceph/haproxy:2.3, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-nfs-cephfs-compute-0-ooxkuo)
Feb  2 04:41:22 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c379911b585540d7d35830547936e5d60aedcf7c36282647d202eb626cf86d15/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Feb  2 04:41:22 np0005604790 podman[97781]: 2026-02-02 09:41:22.585549325 +0000 UTC m=+0.026387191 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Feb  2 04:41:22 np0005604790 podman[97781]: 2026-02-02 09:41:22.687410518 +0000 UTC m=+0.128248374 container init 5812768ed72c0881a5b563a239565cde81bb05b6f1e1beebab3f203681cce03e (image=quay.io/ceph/haproxy:2.3, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-nfs-cephfs-compute-0-ooxkuo)
Feb  2 04:41:22 np0005604790 podman[97781]: 2026-02-02 09:41:22.692526691 +0000 UTC m=+0.133364527 container start 5812768ed72c0881a5b563a239565cde81bb05b6f1e1beebab3f203681cce03e (image=quay.io/ceph/haproxy:2.3, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-nfs-cephfs-compute-0-ooxkuo)
Feb  2 04:41:22 np0005604790 bash[97781]: 5812768ed72c0881a5b563a239565cde81bb05b6f1e1beebab3f203681cce03e
Feb  2 04:41:22 np0005604790 systemd[1]: Started Ceph haproxy.nfs.cephfs.compute-0.ooxkuo for d241d473-9fcb-5f74-b163-f1ca4454e7f1.
Feb  2 04:41:22 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-nfs-cephfs-compute-0-ooxkuo[97796]: [NOTICE] 032/094122 (2) : New worker #1 (4) forked
Feb  2 04:41:22 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-nfs-cephfs-compute-0-ooxkuo[97796]: [WARNING] 032/094122 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Feb  2 04:41:22 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 04:41:22 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:22 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 04:41:22 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:22 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Feb  2 04:41:22 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:22 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.nfs.cephfs.compute-2.arssaq on compute-2
Feb  2 04:41:22 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.nfs.cephfs.compute-2.arssaq on compute-2
Feb  2 04:41:23 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v25: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 5.9 KiB/s rd, 2.2 KiB/s wr, 8 op/s
Feb  2 04:41:23 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:23 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:23 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:23 np0005604790 ceph-mon[74489]: Deploying daemon haproxy.nfs.cephfs.compute-2.arssaq on compute-2
Feb  2 04:41:24 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:24 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c74001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:41:24 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:41:24 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:41:24 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:41:24.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:41:24 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:41:24 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:41:24 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:41:24.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:41:24 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:24 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c58000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:41:24 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Feb  2 04:41:24 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:24 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Feb  2 04:41:24 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:24 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Feb  2 04:41:24 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:24 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.nfs.cephfs/keepalived_password}] v 0)
Feb  2 04:41:24 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:24 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Feb  2 04:41:24 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Feb  2 04:41:24 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Feb  2 04:41:24 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Feb  2 04:41:24 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Feb  2 04:41:24 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Feb  2 04:41:24 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.nfs.cephfs.compute-2.tgzfzm on compute-2
Feb  2 04:41:24 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.nfs.cephfs.compute-2.tgzfzm on compute-2
Feb  2 04:41:24 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:24 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:24 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:24 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:25 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v26: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 5.9 KiB/s rd, 2.2 KiB/s wr, 8 op/s
Feb  2 04:41:25 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:25 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c50000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:41:25 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 04:41:25 np0005604790 ceph-mon[74489]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Feb  2 04:41:25 np0005604790 ceph-mon[74489]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Feb  2 04:41:25 np0005604790 ceph-mon[74489]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Feb  2 04:41:25 np0005604790 ceph-mon[74489]: Deploying daemon keepalived.nfs.cephfs.compute-2.tgzfzm on compute-2
Feb  2 04:41:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:26 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c64000fa0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:41:26 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:41:26 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:41:26 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:41:26.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:41:26 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:41:26 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:41:26 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:41:26.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:41:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:26 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c74001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:41:27 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v27: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s rd, 1.7 KiB/s wr, 6 op/s
Feb  2 04:41:27 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:27 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c580016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:41:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:28 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c500016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:41:28 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:41:28 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:41:28 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:41:28.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:41:28 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:41:28 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:41:28 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:41:28.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:41:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:28 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c64001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:41:29 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Feb  2 04:41:29 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:29 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Feb  2 04:41:29 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:29 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Feb  2 04:41:29 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:29 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Feb  2 04:41:29 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Feb  2 04:41:29 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Feb  2 04:41:29 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Feb  2 04:41:29 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Feb  2 04:41:29 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Feb  2 04:41:29 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.nfs.cephfs.compute-0.pqolko on compute-0
Feb  2 04:41:29 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.nfs.cephfs.compute-0.pqolko on compute-0
Feb  2 04:41:29 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v28: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 5.2 KiB/s rd, 1.7 KiB/s wr, 7 op/s
Feb  2 04:41:29 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:29 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c74001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:41:30 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:30 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c580016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:41:30 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:30 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:30 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:30 np0005604790 ceph-mon[74489]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Feb  2 04:41:30 np0005604790 ceph-mon[74489]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Feb  2 04:41:30 np0005604790 ceph-mon[74489]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Feb  2 04:41:30 np0005604790 ceph-mon[74489]: Deploying daemon keepalived.nfs.cephfs.compute-0.pqolko on compute-0
Feb  2 04:41:30 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:41:30 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:41:30 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:41:30.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:41:30 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:41:30 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:41:30 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:41:30.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:41:30 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:30 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c500016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:41:30 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 04:41:31 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v29: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb  2 04:41:31 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:31 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c64001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:41:32 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:32 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c74001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:41:32 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:41:32 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:41:32 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:41:32.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:41:32 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:41:32 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:41:32 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:41:32.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:41:32 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:32 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c580016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:41:32 np0005604790 podman[97903]: 2026-02-02 09:41:32.345709599 +0000 UTC m=+2.700309636 container create 7133aeadf63134324ea03f647d195be90a65505fda9c92b0899faba5a9d288a7 (image=quay.io/ceph/keepalived:2.2.4, name=unruffled_ride, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, summary=Provides keepalived on RHEL 9 for Ceph., distribution-scope=public, architecture=x86_64, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.component=keepalived-container, build-date=2023-02-22T09:23:20, release=1793, description=keepalived for Ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, io.openshift.tags=Ceph keepalived, io.openshift.expose-services=, vcs-type=git)
Feb  2 04:41:32 np0005604790 podman[97903]: 2026-02-02 09:41:32.320931901 +0000 UTC m=+2.675531998 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Feb  2 04:41:32 np0005604790 systemd[1]: Started libpod-conmon-7133aeadf63134324ea03f647d195be90a65505fda9c92b0899faba5a9d288a7.scope.
Feb  2 04:41:32 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:41:32 np0005604790 podman[97903]: 2026-02-02 09:41:32.414897637 +0000 UTC m=+2.769497734 container init 7133aeadf63134324ea03f647d195be90a65505fda9c92b0899faba5a9d288a7 (image=quay.io/ceph/keepalived:2.2.4, name=unruffled_ride, com.redhat.component=keepalived-container, distribution-scope=public, architecture=x86_64, io.openshift.expose-services=, version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.tags=Ceph keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, vendor=Red Hat, Inc., io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Feb  2 04:41:32 np0005604790 podman[97903]: 2026-02-02 09:41:32.426136961 +0000 UTC m=+2.780737008 container start 7133aeadf63134324ea03f647d195be90a65505fda9c92b0899faba5a9d288a7 (image=quay.io/ceph/keepalived:2.2.4, name=unruffled_ride, vcs-type=git, build-date=2023-02-22T09:23:20, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, description=keepalived for Ceph, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, io.buildah.version=1.28.2, release=1793, architecture=x86_64)
Feb  2 04:41:32 np0005604790 podman[97903]: 2026-02-02 09:41:32.430018212 +0000 UTC m=+2.784618299 container attach 7133aeadf63134324ea03f647d195be90a65505fda9c92b0899faba5a9d288a7 (image=quay.io/ceph/keepalived:2.2.4, name=unruffled_ride, release=1793, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, io.openshift.tags=Ceph keepalived, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9, io.buildah.version=1.28.2)
Feb  2 04:41:32 np0005604790 unruffled_ride[97999]: 0 0
Feb  2 04:41:32 np0005604790 systemd[1]: libpod-7133aeadf63134324ea03f647d195be90a65505fda9c92b0899faba5a9d288a7.scope: Deactivated successfully.
Feb  2 04:41:32 np0005604790 podman[97903]: 2026-02-02 09:41:32.431635365 +0000 UTC m=+2.786235412 container died 7133aeadf63134324ea03f647d195be90a65505fda9c92b0899faba5a9d288a7 (image=quay.io/ceph/keepalived:2.2.4, name=unruffled_ride, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, version=2.2.4, name=keepalived, release=1793, description=keepalived for Ceph, architecture=x86_64, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.28.2, build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, summary=Provides keepalived on RHEL 9 for Ceph., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb  2 04:41:32 np0005604790 systemd[1]: var-lib-containers-storage-overlay-6686fb738a258e45485dcfbbc416f47cdace3c6bb902f4d1d79ac7583a9485f7-merged.mount: Deactivated successfully.
Feb  2 04:41:32 np0005604790 podman[97903]: 2026-02-02 09:41:32.473409137 +0000 UTC m=+2.828009144 container remove 7133aeadf63134324ea03f647d195be90a65505fda9c92b0899faba5a9d288a7 (image=quay.io/ceph/keepalived:2.2.4, name=unruffled_ride, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, name=keepalived, release=1793, distribution-scope=public, build-date=2023-02-22T09:23:20, vcs-type=git, description=keepalived for Ceph, vendor=Red Hat, Inc., version=2.2.4, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=keepalived-container)
Feb  2 04:41:32 np0005604790 systemd[1]: libpod-conmon-7133aeadf63134324ea03f647d195be90a65505fda9c92b0899faba5a9d288a7.scope: Deactivated successfully.
Feb  2 04:41:32 np0005604790 systemd[1]: Reloading.
Feb  2 04:41:32 np0005604790 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 04:41:32 np0005604790 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 04:41:32 np0005604790 systemd[1]: Reloading.
Feb  2 04:41:32 np0005604790 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 04:41:32 np0005604790 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 04:41:33 np0005604790 systemd[1]: Starting Ceph keepalived.nfs.cephfs.compute-0.pqolko for d241d473-9fcb-5f74-b163-f1ca4454e7f1...
Feb  2 04:41:33 np0005604790 podman[98147]: 2026-02-02 09:41:33.384236225 +0000 UTC m=+0.081072680 container create 5f2bbf7994e4a92479e9d1b19dfd01c3876abee624a9d2019b778a31c38bb173 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-keepalived-nfs-cephfs-compute-0-pqolko, io.k8s.display-name=Keepalived on RHEL 9, io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.component=keepalived-container, version=2.2.4, io.openshift.tags=Ceph keepalived, name=keepalived, description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, architecture=x86_64, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2023-02-22T09:23:20, release=1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=)
Feb  2 04:41:33 np0005604790 podman[98147]: 2026-02-02 09:41:33.337671778 +0000 UTC m=+0.034508273 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Feb  2 04:41:33 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5faed3b7ced04682e9a0e0c1398f6c3e98cb49d2549bdf345236866e797fcdf7/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:41:33 np0005604790 podman[98147]: 2026-02-02 09:41:33.481081267 +0000 UTC m=+0.177917782 container init 5f2bbf7994e4a92479e9d1b19dfd01c3876abee624a9d2019b778a31c38bb173 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-keepalived-nfs-cephfs-compute-0-pqolko, description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.buildah.version=1.28.2, build-date=2023-02-22T09:23:20, distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, release=1793, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.component=keepalived-container, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Feb  2 04:41:33 np0005604790 podman[98147]: 2026-02-02 09:41:33.48656565 +0000 UTC m=+0.183402115 container start 5f2bbf7994e4a92479e9d1b19dfd01c3876abee624a9d2019b778a31c38bb173 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-keepalived-nfs-cephfs-compute-0-pqolko, name=keepalived, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, version=2.2.4, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, io.buildah.version=1.28.2, build-date=2023-02-22T09:23:20, distribution-scope=public)
Feb  2 04:41:33 np0005604790 bash[98147]: 5f2bbf7994e4a92479e9d1b19dfd01c3876abee624a9d2019b778a31c38bb173
Feb  2 04:41:33 np0005604790 systemd[1]: Started Ceph keepalived.nfs.cephfs.compute-0.pqolko for d241d473-9fcb-5f74-b163-f1ca4454e7f1.
Feb  2 04:41:33 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-keepalived-nfs-cephfs-compute-0-pqolko[98163]: Mon Feb  2 09:41:33 2026: Starting Keepalived v2.2.4 (08/21,2021)
Feb  2 04:41:33 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-keepalived-nfs-cephfs-compute-0-pqolko[98163]: Mon Feb  2 09:41:33 2026: Running on Linux 5.14.0-665.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Thu Jan 22 12:30:22 UTC 2026 (built for Linux 5.14.0)
Feb  2 04:41:33 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-keepalived-nfs-cephfs-compute-0-pqolko[98163]: Mon Feb  2 09:41:33 2026: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Feb  2 04:41:33 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-keepalived-nfs-cephfs-compute-0-pqolko[98163]: Mon Feb  2 09:41:33 2026: Configuration file /etc/keepalived/keepalived.conf
Feb  2 04:41:33 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-keepalived-nfs-cephfs-compute-0-pqolko[98163]: Mon Feb  2 09:41:33 2026: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Feb  2 04:41:33 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-keepalived-nfs-cephfs-compute-0-pqolko[98163]: Mon Feb  2 09:41:33 2026: Starting VRRP child process, pid=4
Feb  2 04:41:33 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-keepalived-nfs-cephfs-compute-0-pqolko[98163]: Mon Feb  2 09:41:33 2026: Startup complete
Feb  2 04:41:33 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-keepalived-nfs-cephfs-compute-0-pqolko[98163]: Mon Feb  2 09:41:33 2026: (VI_0) Entering BACKUP STATE (init)
Feb  2 04:41:33 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-keepalived-nfs-cephfs-compute-0-pqolko[98163]: Mon Feb  2 09:41:33 2026: VRRP_Script(check_backend) succeeded
Feb  2 04:41:33 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 04:41:33 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:33 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 04:41:33 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:33 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Feb  2 04:41:33 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:33 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Feb  2 04:41:33 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Feb  2 04:41:33 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Feb  2 04:41:33 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Feb  2 04:41:33 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Feb  2 04:41:33 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Feb  2 04:41:33 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.nfs.cephfs.compute-1.whrwoq on compute-1
Feb  2 04:41:33 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.nfs.cephfs.compute-1.whrwoq on compute-1
Feb  2 04:41:33 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v30: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Feb  2 04:41:33 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:33 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c500016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:41:34 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:34 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c64001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:41:34 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:41:34 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:41:34 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:41:34.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:41:34 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:41:34 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:41:34 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:41:34.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 04:41:34 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:34 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c74001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:41:34 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:34 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:34 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:34 np0005604790 ceph-mon[74489]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Feb  2 04:41:34 np0005604790 ceph-mon[74489]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Feb  2 04:41:34 np0005604790 ceph-mon[74489]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Feb  2 04:41:34 np0005604790 ceph-mon[74489]: Deploying daemon keepalived.nfs.cephfs.compute-1.whrwoq on compute-1
Feb  2 04:41:35 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v31: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Feb  2 04:41:35 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:35 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c58002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:41:35 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 04:41:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:36 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c50002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:41:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:36 : epoch 6980713d : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 04:41:36 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:41:36 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:41:36 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:41:36.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:41:36 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:41:36 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:41:36 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:41:36.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:41:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:36 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c64002f50 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:41:37 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-keepalived-nfs-cephfs-compute-0-pqolko[98163]: Mon Feb  2 09:41:37 2026: (VI_0) Entering MASTER STATE
Feb  2 04:41:37 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v32: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Feb  2 04:41:37 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:37 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c74001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:41:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:38 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c58002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:41:38 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:41:38 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:41:38 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:41:38.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:41:38 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:41:38 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:41:38 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:41:38.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:41:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:38 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c50002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:41:38 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Feb  2 04:41:38 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:38 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Feb  2 04:41:38 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:38 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Feb  2 04:41:38 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:38 np0005604790 ceph-mgr[74785]: [progress INFO root] complete: finished ev 90b9a1bc-ed22-434f-a0b1-cf741a79d3be (Updating ingress.nfs.cephfs deployment (+6 -> 6))
Feb  2 04:41:38 np0005604790 ceph-mgr[74785]: [progress INFO root] Completed event 90b9a1bc-ed22-434f-a0b1-cf741a79d3be (Updating ingress.nfs.cephfs deployment (+6 -> 6)) in 22 seconds
Feb  2 04:41:38 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Feb  2 04:41:38 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:38 np0005604790 ceph-mgr[74785]: [progress INFO root] update: starting ev 336846f2-db75-4405-954d-2e4a967089f2 (Updating alertmanager deployment (+1 -> 1))
Feb  2 04:41:38 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Deploying daemon alertmanager.compute-0 on compute-0
Feb  2 04:41:38 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Deploying daemon alertmanager.compute-0 on compute-0
Feb  2 04:41:38 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:38 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:38 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:38 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:39 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:39 : epoch 6980713d : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 04:41:39 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:39 : epoch 6980713d : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 04:41:39 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v33: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Feb  2 04:41:39 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:39 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c64002f50 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:41:40 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:40 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c74001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:41:40 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:41:40 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:41:40 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:41:40.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 04:41:40 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:41:40 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:41:40 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:41:40.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:41:40 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:40 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c58002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:41:40 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 04:41:41 np0005604790 ceph-mon[74489]: Deploying daemon alertmanager.compute-0 on compute-0
Feb  2 04:41:41 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v34: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Feb  2 04:41:41 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:41 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c50002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:41:42 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:42 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c64002f50 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:41:42 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:41:42 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:41:42 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:41:42.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 04:41:42 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:41:42 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:41:42 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:41:42.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:41:42 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-keepalived-nfs-cephfs-compute-0-pqolko[98163]: Mon Feb  2 09:41:42 2026: (VI_0) Received advert from 192.168.122.101 with lower priority 90, ours 100, forcing new election
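[annotation] This node (VRRP priority 100) is master for the NFS VIP; the advert from the 192.168.122.101 peer with priority 90 does not preempt it, it just makes the master re-assert and win the election, which is standard VRRP behavior (RFC 5798: compare priority, then tie-break on IP). A toy sketch of that comparison, not keepalived's code:

    def should_stay_master(our_priority: int, advert_priority: int) -> bool:
        # A master ignores (and out-advertises) lower-priority adverts; an
        # equal or higher priority from a peer would force a step-down or an
        # IP-address tie-break (tie-break omitted in this sketch).
        return advert_priority < our_priority

    assert should_stay_master(100, 90)  # the election above: ours 100, theirs 90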
Feb  2 04:41:42 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:42 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c74001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:41:42 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:42 : epoch 6980713d : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Feb  2 04:41:42 np0005604790 ceph-mgr[74785]: [progress INFO root] Writing back 16 completed events
Feb  2 04:41:42 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Feb  2 04:41:43 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v35: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Feb  2 04:41:43 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:43 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c74001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:41:44 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:44 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c50003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:41:44 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:41:44 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:41:44 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:41:44.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:41:44 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:41:44 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:41:44 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:41:44.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:41:44 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:44 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c58003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:41:45 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v36: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Feb  2 04:41:45 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:45 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c64004050 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:41:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:46 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c74001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:41:46 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:46 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 04:41:46 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:41:46 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:41:46 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:41:46.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:41:46 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:41:46 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:41:46 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:41:46.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:41:46 np0005604790 podman[98264]: 2026-02-02 09:41:46.35153009 +0000 UTC m=+7.048840296 volume create 746f2327efc9d122c789ac6d35a2d0e6bed8ff031d98db642d16eb97aeb7559a
Feb  2 04:41:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:46 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c50003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:41:46 np0005604790 podman[98264]: 2026-02-02 09:41:46.287588303 +0000 UTC m=+6.984898569 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Feb  2 04:41:46 np0005604790 podman[98264]: 2026-02-02 09:41:46.394128976 +0000 UTC m=+7.091439182 container create b51c8b6b7190818628cef7f1738d038a760132e10993b73a7a34a8cf6afce417 (image=quay.io/prometheus/alertmanager:v0.25.0, name=laughing_swanson, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 04:41:46 np0005604790 systemd[1]: Started libpod-conmon-b51c8b6b7190818628cef7f1738d038a760132e10993b73a7a34a8cf6afce417.scope.
Feb  2 04:41:46 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:41:46 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c68ee932b6fe18aa71b2ae9a076bfdc411f943291112d6e20580057ec6f1461e/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Feb  2 04:41:46 np0005604790 podman[98264]: 2026-02-02 09:41:46.709406941 +0000 UTC m=+7.406717127 container init b51c8b6b7190818628cef7f1738d038a760132e10993b73a7a34a8cf6afce417 (image=quay.io/prometheus/alertmanager:v0.25.0, name=laughing_swanson, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 04:41:46 np0005604790 podman[98264]: 2026-02-02 09:41:46.718433112 +0000 UTC m=+7.415743318 container start b51c8b6b7190818628cef7f1738d038a760132e10993b73a7a34a8cf6afce417 (image=quay.io/prometheus/alertmanager:v0.25.0, name=laughing_swanson, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 04:41:46 np0005604790 systemd[1]: libpod-b51c8b6b7190818628cef7f1738d038a760132e10993b73a7a34a8cf6afce417.scope: Deactivated successfully.
Feb  2 04:41:46 np0005604790 laughing_swanson[98409]: 65534 65534
Feb  2 04:41:46 np0005604790 conmon[98409]: conmon b51c8b6b7190818628ce <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b51c8b6b7190818628cef7f1738d038a760132e10993b73a7a34a8cf6afce417.scope/container/memory.events
Feb  2 04:41:46 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:46 np0005604790 podman[98264]: 2026-02-02 09:41:46.798866148 +0000 UTC m=+7.496176364 container attach b51c8b6b7190818628cef7f1738d038a760132e10993b73a7a34a8cf6afce417 (image=quay.io/prometheus/alertmanager:v0.25.0, name=laughing_swanson, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 04:41:46 np0005604790 podman[98264]: 2026-02-02 09:41:46.799880605 +0000 UTC m=+7.497190801 container died b51c8b6b7190818628cef7f1738d038a760132e10993b73a7a34a8cf6afce417 (image=quay.io/prometheus/alertmanager:v0.25.0, name=laughing_swanson, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 04:41:47 np0005604790 systemd[1]: var-lib-containers-storage-overlay-c68ee932b6fe18aa71b2ae9a076bfdc411f943291112d6e20580057ec6f1461e-merged.mount: Deactivated successfully.
Feb  2 04:41:47 np0005604790 podman[98264]: 2026-02-02 09:41:47.366063156 +0000 UTC m=+8.063373332 container remove b51c8b6b7190818628cef7f1738d038a760132e10993b73a7a34a8cf6afce417 (image=quay.io/prometheus/alertmanager:v0.25.0, name=laughing_swanson, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 04:41:47 np0005604790 podman[98264]: 2026-02-02 09:41:47.535503488 +0000 UTC m=+8.232813664 volume remove 746f2327efc9d122c789ac6d35a2d0e6bed8ff031d98db642d16eb97aeb7559a
Feb  2 04:41:47 np0005604790 systemd[1]: libpod-conmon-b51c8b6b7190818628cef7f1738d038a760132e10993b73a7a34a8cf6afce417.scope: Deactivated successfully.
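[annotation] The throwaway "laughing_swanson" container (created, started, prints "65534 65534", dies, and is removed with its volume inside one second) is cephadm probing the alertmanager image before deploying: it needs the uid/gid to run the daemon as, and 65534:65534 is nobody:nobody. The same pattern repeats just below with "pedantic_bhabha" before the real, systemd-managed ceph-...-alertmanager-compute-0 container starts. A sketch of an equivalent probe; the stat-based approach mirrors cephadm's extract_uid_gid helper, but the exact path and invocation here are assumptions:

    import subprocess

    # Ask the image what owns its config dir; prints e.g. "65534 65534".
    probe = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat",
         "quay.io/prometheus/alertmanager:v0.25.0",
         "-c", "%u %g", "/etc/alertmanager"],
        capture_output=True, text=True, check=True,
    )
    uid, gid = (int(x) for x in probe.stdout.split())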
Feb  2 04:41:47 np0005604790 podman[98427]: 2026-02-02 09:41:47.63974009 +0000 UTC m=+0.085081102 volume create 9bf1ac288e68a5ce8c96f506df88081da76b8be8a4309d2cbf3f623d061d82cc
Feb  2 04:41:47 np0005604790 ceph-mgr[74785]: [balancer INFO root] Optimize plan auto_2026-02-02_09:41:47
Feb  2 04:41:47 np0005604790 ceph-mgr[74785]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 04:41:47 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v37: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Feb  2 04:41:47 np0005604790 ceph-mgr[74785]: [balancer INFO root] do_upmap
Feb  2 04:41:47 np0005604790 ceph-mgr[74785]: [balancer INFO root] pools ['cephfs.cephfs.data', 'volumes', 'vms', '.mgr', 'default.rgw.log', 'default.rgw.meta', '.nfs', 'images', 'cephfs.cephfs.meta', 'backups', 'default.rgw.control', '.rgw.root']
Feb  2 04:41:47 np0005604790 ceph-mgr[74785]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 04:41:47 np0005604790 podman[98427]: 2026-02-02 09:41:47.670754857 +0000 UTC m=+0.116095879 container create 5de9bd225abdcef7ec133bd404198ad217f87b1f69bd22e1391dc8dc10d0062d (image=quay.io/prometheus/alertmanager:v0.25.0, name=pedantic_bhabha, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 04:41:47 np0005604790 podman[98427]: 2026-02-02 09:41:47.579986575 +0000 UTC m=+0.025327647 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Feb  2 04:41:47 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 04:41:47 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:41:47 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 04:41:47 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:41:47 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 04:41:47 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:41:47 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 04:41:47 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:41:47 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 04:41:47 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:41:47 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 04:41:47 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:41:47 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Feb  2 04:41:47 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:41:47 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 04:41:47 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:41:47 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 1)
Feb  2 04:41:47 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:41:47 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 1)
Feb  2 04:41:47 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:41:47 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Feb  2 04:41:47 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:41:47 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 1)
Feb  2 04:41:47 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:41:47 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 1)
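[annotation] The pg_autoscaler targets above are reproducible: each pool's target is its share of the raw capacity (the 64411926528 bytes, ~60 GiB, printed in the effective_target_ratio lines) times its bias times the cluster-wide PG budget, and the result is then quantized to a power of two subject to per-pool minimums, which is why several pools jump from 1 to 32. The numbers are consistent with the default mon_target_pg_per_osd = 100 and the 3 up OSDs in osdmap e53; a worked check (a sketch, not Ceph's code):

    from math import isclose

    BUDGET = 100 * 3  # mon_target_pg_per_osd (default) * up OSDs -- assumption

    def pg_target(capacity_ratio: float, bias: float) -> float:
        return capacity_ratio * bias * BUDGET

    # Pool '.mgr', bias 1.0:
    assert isclose(pg_target(7.185749983720779e-06, 1.0), 0.0021557249951162337)
    # Pool 'cephfs.cephfs.meta', bias 4.0:
    assert isclose(pg_target(5.087256625643029e-07, 4.0), 0.0006104707950771635)
    # Pool 'default.rgw.log', bias 1.0:
    assert isclose(pg_target(2.1620840658982875e-06, 1.0), 0.0006486252197694863)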
Feb  2 04:41:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0)
Feb  2 04:41:47 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Feb  2 04:41:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:41:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:41:47 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 04:41:47 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 04:41:47 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 04:41:47 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 04:41:47 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 04:41:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:41:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:41:47 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:47 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c58003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:41:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:41:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:41:47 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 04:41:47 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 04:41:47 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 04:41:47 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 04:41:47 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 04:41:47 np0005604790 systemd[1]: Started libpod-conmon-5de9bd225abdcef7ec133bd404198ad217f87b1f69bd22e1391dc8dc10d0062d.scope.
Feb  2 04:41:47 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:41:47 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0c0c51b609e625ca65112ce7b5509eeaec98bd72040ddde7ca959e36adea04d/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Feb  2 04:41:47 np0005604790 podman[98427]: 2026-02-02 09:41:47.960279814 +0000 UTC m=+0.405620866 container init 5de9bd225abdcef7ec133bd404198ad217f87b1f69bd22e1391dc8dc10d0062d (image=quay.io/prometheus/alertmanager:v0.25.0, name=pedantic_bhabha, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 04:41:47 np0005604790 podman[98427]: 2026-02-02 09:41:47.966516751 +0000 UTC m=+0.411857763 container start 5de9bd225abdcef7ec133bd404198ad217f87b1f69bd22e1391dc8dc10d0062d (image=quay.io/prometheus/alertmanager:v0.25.0, name=pedantic_bhabha, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 04:41:47 np0005604790 pedantic_bhabha[98443]: 65534 65534
Feb  2 04:41:47 np0005604790 systemd[1]: libpod-5de9bd225abdcef7ec133bd404198ad217f87b1f69bd22e1391dc8dc10d0062d.scope: Deactivated successfully.
Feb  2 04:41:48 np0005604790 podman[98427]: 2026-02-02 09:41:48.024028696 +0000 UTC m=+0.469369758 container attach 5de9bd225abdcef7ec133bd404198ad217f87b1f69bd22e1391dc8dc10d0062d (image=quay.io/prometheus/alertmanager:v0.25.0, name=pedantic_bhabha, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 04:41:48 np0005604790 podman[98427]: 2026-02-02 09:41:48.024449537 +0000 UTC m=+0.469790549 container died 5de9bd225abdcef7ec133bd404198ad217f87b1f69bd22e1391dc8dc10d0062d (image=quay.io/prometheus/alertmanager:v0.25.0, name=pedantic_bhabha, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 04:41:48 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:48 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c64004050 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:41:48 np0005604790 systemd[1]: var-lib-containers-storage-overlay-b0c0c51b609e625ca65112ce7b5509eeaec98bd72040ddde7ca959e36adea04d-merged.mount: Deactivated successfully.
Feb  2 04:41:48 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:41:48 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:41:48 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:41:48.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:41:48 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:41:48 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:41:48 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:41:48.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:41:48 np0005604790 podman[98427]: 2026-02-02 09:41:48.320329923 +0000 UTC m=+0.765670935 container remove 5de9bd225abdcef7ec133bd404198ad217f87b1f69bd22e1391dc8dc10d0062d (image=quay.io/prometheus/alertmanager:v0.25.0, name=pedantic_bhabha, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 04:41:48 np0005604790 podman[98427]: 2026-02-02 09:41:48.338941999 +0000 UTC m=+0.784283051 volume remove 9bf1ac288e68a5ce8c96f506df88081da76b8be8a4309d2cbf3f623d061d82cc
Feb  2 04:41:48 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:48 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c74001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:41:48 np0005604790 systemd[1]: libpod-conmon-5de9bd225abdcef7ec133bd404198ad217f87b1f69bd22e1391dc8dc10d0062d.scope: Deactivated successfully.
Feb  2 04:41:48 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Feb  2 04:41:48 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Feb  2 04:41:48 np0005604790 systemd[1]: Reloading.
Feb  2 04:41:48 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-nfs-cephfs-compute-0-ooxkuo[97796]: [WARNING] 032/094148 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Feb  2 04:41:48 np0005604790 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 04:41:48 np0005604790 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 04:41:48 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Feb  2 04:41:48 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Feb  2 04:41:48 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
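[annotation] The ".rgw.root" resize shows the monitor's full command lifecycle: the mgr's pg_autoscaler submits "osd pool set", the leader logs handle_command plus an audit "dispatch", the change lands in a new osdmap epoch (e53 -> e54 here), and the audit channel then logs "finished". Any client that can reach the monitors drives the same path; a minimal sketch with the python rados binding (the conffile path is an assumption):

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")  # path: assumption
    cluster.connect()
    cmd = json.dumps({"prefix": "osd pool set", "pool": ".rgw.root",
                      "var": "pg_num", "val": "32"})
    ret, outbuf, outs = cluster.mon_command(cmd, b"")  # audited as dispatch/finished
    cluster.shutdown()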
Feb  2 04:41:48 np0005604790 ceph-mgr[74785]: [progress INFO root] update: starting ev 80b60425-e3b5-458c-a697-8843054f7cc3 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Feb  2 04:41:48 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0)
Feb  2 04:41:48 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Feb  2 04:41:48 np0005604790 systemd[1]: Reloading.
Feb  2 04:41:49 np0005604790 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 04:41:49 np0005604790 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 04:41:49 np0005604790 systemd[1]: Starting Ceph alertmanager.compute-0 for d241d473-9fcb-5f74-b163-f1ca4454e7f1...
Feb  2 04:41:49 np0005604790 podman[98591]: 2026-02-02 09:41:49.459127024 +0000 UTC m=+0.053222341 volume create 9b860f62a5bdfb20ba246fb3183f32e7905709367584801b9a37e9808e7953bd
Feb  2 04:41:49 np0005604790 podman[98591]: 2026-02-02 09:41:49.479755165 +0000 UTC m=+0.073850502 container create d55860d12598ad8f4ad20d7c290c8e7601e49b37966d3cbaf9293eade56ad034 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 04:41:49 np0005604790 podman[98591]: 2026-02-02 09:41:49.425663991 +0000 UTC m=+0.019759298 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Feb  2 04:41:49 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6aaf21e619ca7c96d0cf4441b8035c0c2a94c0f3e43ac03ac7a6d3f919758f98/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Feb  2 04:41:49 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6aaf21e619ca7c96d0cf4441b8035c0c2a94c0f3e43ac03ac7a6d3f919758f98/merged/etc/alertmanager supports timestamps until 2038 (0x7fffffff)
Feb  2 04:41:49 np0005604790 podman[98591]: 2026-02-02 09:41:49.594639631 +0000 UTC m=+0.188735038 container init d55860d12598ad8f4ad20d7c290c8e7601e49b37966d3cbaf9293eade56ad034 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 04:41:49 np0005604790 podman[98591]: 2026-02-02 09:41:49.603807585 +0000 UTC m=+0.197902882 container start d55860d12598ad8f4ad20d7c290c8e7601e49b37966d3cbaf9293eade56ad034 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 04:41:49 np0005604790 bash[98591]: d55860d12598ad8f4ad20d7c290c8e7601e49b37966d3cbaf9293eade56ad034
Feb  2 04:41:49 np0005604790 systemd[1]: Started Ceph alertmanager.compute-0 for d241d473-9fcb-5f74-b163-f1ca4454e7f1.
Feb  2 04:41:49 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[98607]: ts=2026-02-02T09:41:49.637Z caller=main.go:240 level=info msg="Starting Alertmanager" version="(version=0.25.0, branch=HEAD, revision=258fab7cdd551f2cf251ed0348f0ad7289aee789)"
Feb  2 04:41:49 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[98607]: ts=2026-02-02T09:41:49.637Z caller=main.go:241 level=info build_context="(go=go1.19.4, user=root@abe866dd5717, date=20221222-14:51:36)"
Feb  2 04:41:49 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[98607]: ts=2026-02-02T09:41:49.651Z caller=cluster.go:185 level=info component=cluster msg="setting advertise address explicitly" addr=192.168.122.100 port=9094
Feb  2 04:41:49 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[98607]: ts=2026-02-02T09:41:49.653Z caller=cluster.go:681 level=info component=cluster msg="Waiting for gossip to settle..." interval=2s
Feb  2 04:41:49 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v39: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 511 B/s wr, 2 op/s
Feb  2 04:41:49 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0)
Feb  2 04:41:49 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Feb  2 04:41:49 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[98607]: ts=2026-02-02T09:41:49.694Z caller=coordinator.go:113 level=info component=configuration msg="Loading configuration file" file=/etc/alertmanager/alertmanager.yml
Feb  2 04:41:49 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[98607]: ts=2026-02-02T09:41:49.695Z caller=coordinator.go:126 level=info component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/alertmanager.yml
Feb  2 04:41:49 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 04:41:49 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[98607]: ts=2026-02-02T09:41:49.700Z caller=tls_config.go:232 level=info msg="Listening on" address=192.168.122.100:9093
Feb  2 04:41:49 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[98607]: ts=2026-02-02T09:41:49.700Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=192.168.122.100:9093
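[annotation] The Alertmanager startup lines imply how cephadm wired it up: config at /etc/alertmanager/alertmanager.yml, plain HTTP on 192.168.122.100:9093, and HA gossip advertised on port 9094 ("Waiting for gossip to settle..." is the cluster module polling every 2 s; the "gossip not settled" line appears further down). A hedged reconstruction of the corresponding flags; the flag names exist in Alertmanager v0.25, but the exact cephadm invocation is an assumption:

    # Values read off the log lines above.
    argv = [
        "alertmanager",
        "--config.file=/etc/alertmanager/alertmanager.yml",   # coordinator.go lines
        "--web.listen-address=192.168.122.100:9093",          # "Listening on" / TLS disabled
        "--cluster.listen-address=192.168.122.100:9094",      # gossip port
        "--cluster.advertise-address=192.168.122.100:9094",   # "setting advertise address explicitly"
    ]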
Feb  2 04:41:49 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:49 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 04:41:49 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:49 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.alertmanager}] v 0)
Feb  2 04:41:49 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:49 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c50003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:41:49 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Feb  2 04:41:49 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:49 np0005604790 ceph-mgr[74785]: [progress INFO root] complete: finished ev 336846f2-db75-4405-954d-2e4a967089f2 (Updating alertmanager deployment (+1 -> 1))
Feb  2 04:41:49 np0005604790 ceph-mgr[74785]: [progress INFO root] Completed event 336846f2-db75-4405-954d-2e4a967089f2 (Updating alertmanager deployment (+1 -> 1)) in 11 seconds
Feb  2 04:41:49 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.alertmanager}] v 0)
Feb  2 04:41:49 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Feb  2 04:41:49 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Feb  2 04:41:49 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Feb  2 04:41:49 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Feb  2 04:41:49 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Feb  2 04:41:49 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Feb  2 04:41:49 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:49 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:49 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Feb  2 04:41:50 np0005604790 ceph-mgr[74785]: [progress INFO root] update: starting ev 1a60858f-bf19-4959-977f-da9a5e58f4de (PG autoscaler increasing pool 9 PGs from 1 to 32)
Feb  2 04:41:50 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0)
Feb  2 04:41:50 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Feb  2 04:41:50 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:50 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c50003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:41:50 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:50 np0005604790 ceph-mgr[74785]: [progress INFO root] update: starting ev 274cc80d-b0c6-4343-a544-070d982742e0 (Updating grafana deployment (+1 -> 1))
Feb  2 04:41:50 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.services.monitoring] Regenerating cephadm self-signed grafana TLS certificates
Feb  2 04:41:50 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Regenerating cephadm self-signed grafana TLS certificates
Feb  2 04:41:50 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.cert.grafana_cert}] v 0)
Feb  2 04:41:50 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:50 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.key.grafana_key}] v 0)
Feb  2 04:41:50 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:50 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"} v 0)
Feb  2 04:41:50 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Feb  2 04:41:50 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Feb  2 04:41:50 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_SSL_VERIFY}] v 0)
Feb  2 04:41:50 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:41:50 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:41:50 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:41:50.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:41:50 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:50 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Deploying daemon grafana.compute-0 on compute-0
Feb  2 04:41:50 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Deploying daemon grafana.compute-0 on compute-0
Feb  2 04:41:50 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:41:50 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:41:50 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:41:50.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:41:50 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:50 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c64004050 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:41:50 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Feb  2 04:41:50 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:50 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Feb  2 04:41:50 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Feb  2 04:41:50 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Feb  2 04:41:50 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:50 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:50 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:50 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Feb  2 04:41:50 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:51 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Feb  2 04:41:51 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Feb  2 04:41:51 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Feb  2 04:41:51 np0005604790 ceph-mgr[74785]: [progress INFO root] update: starting ev 73dab081-3449-4827-9278-c3df34ac2279 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Feb  2 04:41:51 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0)
Feb  2 04:41:51 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Feb  2 04:41:51 np0005604790 ceph-mgr[74785]: [progress INFO root] Writing back 17 completed events
Feb  2 04:41:51 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Feb  2 04:41:51 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:51 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e56 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 04:41:51 np0005604790 ceph-mgr[74785]: [progress WARNING root] Starting Global Recovery Event,31 pgs not in active + clean state
Feb  2 04:41:51 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[98607]: ts=2026-02-02T09:41:51.653Z caller=cluster.go:706 level=info component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.000120328s
Feb  2 04:41:51 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v42: 229 pgs: 31 unknown, 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
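[annotation] The jump from 198 to 229 PGs, 31 of them "unknown", is exactly one pool's split landing: raising pg_num from 1 to 32 creates 31 new placement groups, which report unknown until they peer. The Global Recovery Event two lines up is the progress module tracking the same 31 PGs.

    # 198 pre-existing PGs + the 31 children of a single 1 -> 32 split.
    assert 198 + (32 - 1) == 229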
Feb  2 04:41:51 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0)
Feb  2 04:41:51 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Feb  2 04:41:51 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0)
Feb  2 04:41:51 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Feb  2 04:41:51 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:51 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c64004050 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:41:52 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:52 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c50003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:41:52 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Feb  2 04:41:52 np0005604790 ceph-mon[74489]: Regenerating cephadm self-signed grafana TLS certificates
Feb  2 04:41:52 np0005604790 ceph-mon[74489]: Deploying daemon grafana.compute-0 on compute-0
Feb  2 04:41:52 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Feb  2 04:41:52 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Feb  2 04:41:52 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:52 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Feb  2 04:41:52 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Feb  2 04:41:52 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:41:52 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:41:52 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:41:52.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:41:52 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:41:52 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:41:52 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:41:52.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:41:52 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:52 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c58003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:41:52 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Feb  2 04:41:52 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Feb  2 04:41:52 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Feb  2 04:41:52 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Feb  2 04:41:52 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Feb  2 04:41:52 np0005604790 ceph-mgr[74785]: [progress INFO root] update: starting ev a61ae489-9f2b-4fb6-af41-1bc1d897af1f (PG autoscaler increasing pool 11 PGs from 1 to 32)
Feb  2 04:41:52 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"} v 0)
Feb  2 04:41:52 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]: dispatch
Feb  2 04:41:52 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 57 pg[10.0( v 41'48 (0'0,41'48] local-lis/les=40/41 n=8 ec=40/40 lis/c=40/40 les/c/f=41/41/0 sis=57 pruub=12.263470650s) [1] r=0 lpr=57 pi=[40,57)/1 crt=41'48 lcod 41'47 mlcod 41'47 active pruub 174.245849609s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:41:52 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 57 pg[10.0( v 41'48 lc 0'0 (0'0,41'48] local-lis/les=40/41 n=0 ec=40/40 lis/c=40/40 les/c/f=41/41/0 sis=57 pruub=12.263470650s) [1] r=0 lpr=57 pi=[40,57)/1 crt=41'48 lcod 41'47 mlcod 0'0 unknown pruub 174.245849609s@ mbc={}] state<Start>: transitioning to Primary
Feb  2 04:41:53 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Feb  2 04:41:53 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Feb  2 04:41:53 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Feb  2 04:41:53 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]: dispatch
Feb  2 04:41:53 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Feb  2 04:41:53 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]': finished
Feb  2 04:41:53 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Feb  2 04:41:53 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Feb  2 04:41:53 np0005604790 ceph-mgr[74785]: [progress INFO root] update: starting ev 95b1f61f-7b56-414c-8fe0-6301b4b92e02 (PG autoscaler increasing pool 12 PGs from 1 to 32)
Feb  2 04:41:53 np0005604790 ceph-mgr[74785]: [progress INFO root] complete: finished ev 80b60425-e3b5-458c-a697-8843054f7cc3 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Feb  2 04:41:53 np0005604790 ceph-mgr[74785]: [progress INFO root] Completed event 80b60425-e3b5-458c-a697-8843054f7cc3 (PG autoscaler increasing pool 8 PGs from 1 to 32) in 5 seconds
Feb  2 04:41:53 np0005604790 ceph-mgr[74785]: [progress INFO root] complete: finished ev 1a60858f-bf19-4959-977f-da9a5e58f4de (PG autoscaler increasing pool 9 PGs from 1 to 32)
Feb  2 04:41:53 np0005604790 ceph-mgr[74785]: [progress INFO root] Completed event 1a60858f-bf19-4959-977f-da9a5e58f4de (PG autoscaler increasing pool 9 PGs from 1 to 32) in 4 seconds
Feb  2 04:41:53 np0005604790 ceph-mgr[74785]: [progress INFO root] complete: finished ev 73dab081-3449-4827-9278-c3df34ac2279 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Feb  2 04:41:53 np0005604790 ceph-mgr[74785]: [progress INFO root] Completed event 73dab081-3449-4827-9278-c3df34ac2279 (PG autoscaler increasing pool 10 PGs from 1 to 32) in 2 seconds
Feb  2 04:41:53 np0005604790 ceph-mgr[74785]: [progress INFO root] complete: finished ev a61ae489-9f2b-4fb6-af41-1bc1d897af1f (PG autoscaler increasing pool 11 PGs from 1 to 32)
Feb  2 04:41:53 np0005604790 ceph-mgr[74785]: [progress INFO root] Completed event a61ae489-9f2b-4fb6-af41-1bc1d897af1f (PG autoscaler increasing pool 11 PGs from 1 to 32) in 1 seconds
Feb  2 04:41:53 np0005604790 ceph-mgr[74785]: [progress INFO root] complete: finished ev 95b1f61f-7b56-414c-8fe0-6301b4b92e02 (PG autoscaler increasing pool 12 PGs from 1 to 32)
Feb  2 04:41:53 np0005604790 ceph-mgr[74785]: [progress INFO root] Completed event 95b1f61f-7b56-414c-8fe0-6301b4b92e02 (PG autoscaler increasing pool 12 PGs from 1 to 32) in 0 seconds
Feb  2 04:41:53 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 58 pg[10.7( v 41'48 lc 0'0 (0'0,41'48] local-lis/les=40/41 n=1 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [1] r=0 lpr=57 pi=[40,57)/1 crt=41'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:41:53 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 58 pg[10.1b( v 41'48 lc 0'0 (0'0,41'48] local-lis/les=40/41 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [1] r=0 lpr=57 pi=[40,57)/1 crt=41'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:41:53 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 58 pg[10.12( v 41'48 lc 0'0 (0'0,41'48] local-lis/les=40/41 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [1] r=0 lpr=57 pi=[40,57)/1 crt=41'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:41:53 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 58 pg[10.11( v 41'48 lc 0'0 (0'0,41'48] local-lis/les=40/41 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [1] r=0 lpr=57 pi=[40,57)/1 crt=41'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:41:53 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 58 pg[10.10( v 41'48 lc 0'0 (0'0,41'48] local-lis/les=40/41 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [1] r=0 lpr=57 pi=[40,57)/1 crt=41'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:41:53 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 58 pg[10.1f( v 41'48 lc 0'0 (0'0,41'48] local-lis/les=40/41 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [1] r=0 lpr=57 pi=[40,57)/1 crt=41'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:41:53 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 58 pg[10.1e( v 41'48 lc 0'0 (0'0,41'48] local-lis/les=40/41 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [1] r=0 lpr=57 pi=[40,57)/1 crt=41'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:41:53 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 58 pg[10.1d( v 41'48 lc 0'0 (0'0,41'48] local-lis/les=40/41 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [1] r=0 lpr=57 pi=[40,57)/1 crt=41'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:41:53 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 58 pg[10.1c( v 41'48 lc 0'0 (0'0,41'48] local-lis/les=40/41 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [1] r=0 lpr=57 pi=[40,57)/1 crt=41'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:41:53 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 58 pg[10.1a( v 41'48 lc 0'0 (0'0,41'48] local-lis/les=40/41 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [1] r=0 lpr=57 pi=[40,57)/1 crt=41'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:41:53 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 58 pg[10.19( v 41'48 lc 0'0 (0'0,41'48] local-lis/les=40/41 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [1] r=0 lpr=57 pi=[40,57)/1 crt=41'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:41:53 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 58 pg[10.18( v 41'48 lc 0'0 (0'0,41'48] local-lis/les=40/41 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [1] r=0 lpr=57 pi=[40,57)/1 crt=41'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:41:53 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 58 pg[10.6( v 41'48 lc 0'0 (0'0,41'48] local-lis/les=40/41 n=1 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [1] r=0 lpr=57 pi=[40,57)/1 crt=41'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:41:53 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 58 pg[10.5( v 41'48 lc 0'0 (0'0,41'48] local-lis/les=40/41 n=1 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [1] r=0 lpr=57 pi=[40,57)/1 crt=41'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:41:53 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 58 pg[10.4( v 41'48 lc 0'0 (0'0,41'48] local-lis/les=40/41 n=1 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [1] r=0 lpr=57 pi=[40,57)/1 crt=41'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:41:53 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 58 pg[10.3( v 41'48 lc 0'0 (0'0,41'48] local-lis/les=40/41 n=1 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [1] r=0 lpr=57 pi=[40,57)/1 crt=41'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:41:53 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 58 pg[10.b( v 41'48 lc 0'0 (0'0,41'48] local-lis/les=40/41 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [1] r=0 lpr=57 pi=[40,57)/1 crt=41'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:41:53 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 58 pg[10.8( v 41'48 lc 0'0 (0'0,41'48] local-lis/les=40/41 n=1 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [1] r=0 lpr=57 pi=[40,57)/1 crt=41'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:41:53 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 58 pg[10.d( v 41'48 lc 0'0 (0'0,41'48] local-lis/les=40/41 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [1] r=0 lpr=57 pi=[40,57)/1 crt=41'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:41:53 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 58 pg[10.9( v 41'48 lc 0'0 (0'0,41'48] local-lis/les=40/41 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [1] r=0 lpr=57 pi=[40,57)/1 crt=41'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:41:53 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 58 pg[10.a( v 41'48 lc 0'0 (0'0,41'48] local-lis/les=40/41 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [1] r=0 lpr=57 pi=[40,57)/1 crt=41'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:41:53 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 58 pg[10.c( v 41'48 lc 0'0 (0'0,41'48] local-lis/les=40/41 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [1] r=0 lpr=57 pi=[40,57)/1 crt=41'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:41:53 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 58 pg[10.f( v 41'48 lc 0'0 (0'0,41'48] local-lis/les=40/41 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [1] r=0 lpr=57 pi=[40,57)/1 crt=41'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:41:53 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 58 pg[10.1( v 41'48 (0'0,41'48] local-lis/les=40/41 n=1 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [1] r=0 lpr=57 pi=[40,57)/1 crt=41'48 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:41:53 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 58 pg[10.e( v 41'48 lc 0'0 (0'0,41'48] local-lis/les=40/41 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [1] r=0 lpr=57 pi=[40,57)/1 crt=41'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:41:53 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 58 pg[10.2( v 41'48 lc 0'0 (0'0,41'48] local-lis/les=40/41 n=1 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [1] r=0 lpr=57 pi=[40,57)/1 crt=41'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:41:53 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 58 pg[10.13( v 41'48 lc 0'0 (0'0,41'48] local-lis/les=40/41 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [1] r=0 lpr=57 pi=[40,57)/1 crt=41'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:41:53 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 58 pg[10.14( v 41'48 lc 0'0 (0'0,41'48] local-lis/les=40/41 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [1] r=0 lpr=57 pi=[40,57)/1 crt=41'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:41:53 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 58 pg[10.15( v 41'48 lc 0'0 (0'0,41'48] local-lis/les=40/41 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [1] r=0 lpr=57 pi=[40,57)/1 crt=41'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:41:53 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 58 pg[10.16( v 41'48 lc 0'0 (0'0,41'48] local-lis/les=40/41 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [1] r=0 lpr=57 pi=[40,57)/1 crt=41'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:41:53 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 58 pg[10.17( v 41'48 lc 0'0 (0'0,41'48] local-lis/les=40/41 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [1] r=0 lpr=57 pi=[40,57)/1 crt=41'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:41:53 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 58 pg[10.12( v 41'48 (0'0,41'48] local-lis/les=57/58 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [1] r=0 lpr=57 pi=[40,57)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:41:53 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 58 pg[10.7( v 41'48 (0'0,41'48] local-lis/les=57/58 n=1 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [1] r=0 lpr=57 pi=[40,57)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:41:53 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 58 pg[10.1b( v 41'48 (0'0,41'48] local-lis/les=57/58 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [1] r=0 lpr=57 pi=[40,57)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:41:53 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 58 pg[10.10( v 41'48 (0'0,41'48] local-lis/les=57/58 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [1] r=0 lpr=57 pi=[40,57)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:41:53 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 58 pg[10.1f( v 41'48 (0'0,41'48] local-lis/les=57/58 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [1] r=0 lpr=57 pi=[40,57)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:41:53 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 58 pg[10.1e( v 41'48 (0'0,41'48] local-lis/les=57/58 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [1] r=0 lpr=57 pi=[40,57)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:41:53 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 58 pg[10.1c( v 41'48 (0'0,41'48] local-lis/les=57/58 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [1] r=0 lpr=57 pi=[40,57)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:41:53 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 58 pg[10.6( v 41'48 (0'0,41'48] local-lis/les=57/58 n=1 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [1] r=0 lpr=57 pi=[40,57)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:41:53 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 58 pg[10.11( v 41'48 (0'0,41'48] local-lis/les=57/58 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [1] r=0 lpr=57 pi=[40,57)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:41:53 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 58 pg[10.1a( v 41'48 (0'0,41'48] local-lis/les=57/58 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [1] r=0 lpr=57 pi=[40,57)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:41:53 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 58 pg[10.18( v 41'48 (0'0,41'48] local-lis/les=57/58 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [1] r=0 lpr=57 pi=[40,57)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:41:53 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 58 pg[10.5( v 41'48 (0'0,41'48] local-lis/les=57/58 n=1 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [1] r=0 lpr=57 pi=[40,57)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:41:53 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 58 pg[10.4( v 41'48 (0'0,41'48] local-lis/les=57/58 n=1 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [1] r=0 lpr=57 pi=[40,57)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:41:53 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 58 pg[10.3( v 41'48 (0'0,41'48] local-lis/les=57/58 n=1 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [1] r=0 lpr=57 pi=[40,57)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:41:53 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 58 pg[10.8( v 41'48 (0'0,41'48] local-lis/les=57/58 n=1 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [1] r=0 lpr=57 pi=[40,57)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:41:53 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 58 pg[10.9( v 41'48 (0'0,41'48] local-lis/les=57/58 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [1] r=0 lpr=57 pi=[40,57)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:41:53 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 58 pg[10.d( v 41'48 (0'0,41'48] local-lis/les=57/58 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [1] r=0 lpr=57 pi=[40,57)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:41:53 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 58 pg[10.1d( v 41'48 (0'0,41'48] local-lis/les=57/58 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [1] r=0 lpr=57 pi=[40,57)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:41:53 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 58 pg[10.19( v 41'48 (0'0,41'48] local-lis/les=57/58 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [1] r=0 lpr=57 pi=[40,57)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:41:53 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 58 pg[10.b( v 41'48 (0'0,41'48] local-lis/les=57/58 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [1] r=0 lpr=57 pi=[40,57)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:41:53 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 58 pg[10.a( v 41'48 (0'0,41'48] local-lis/les=57/58 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [1] r=0 lpr=57 pi=[40,57)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:41:53 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 58 pg[10.c( v 41'48 (0'0,41'48] local-lis/les=57/58 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [1] r=0 lpr=57 pi=[40,57)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:41:53 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 58 pg[10.0( v 41'48 (0'0,41'48] local-lis/les=57/58 n=0 ec=40/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [1] r=0 lpr=57 pi=[40,57)/1 crt=41'48 lcod 41'47 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:41:53 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 58 pg[10.f( v 41'48 (0'0,41'48] local-lis/les=57/58 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [1] r=0 lpr=57 pi=[40,57)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:41:53 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 58 pg[10.1( v 41'48 (0'0,41'48] local-lis/les=57/58 n=1 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [1] r=0 lpr=57 pi=[40,57)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:41:53 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 58 pg[10.2( v 41'48 (0'0,41'48] local-lis/les=57/58 n=1 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [1] r=0 lpr=57 pi=[40,57)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:41:53 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 58 pg[10.13( v 41'48 (0'0,41'48] local-lis/les=57/58 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [1] r=0 lpr=57 pi=[40,57)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:41:53 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 58 pg[10.14( v 41'48 (0'0,41'48] local-lis/les=57/58 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [1] r=0 lpr=57 pi=[40,57)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:41:53 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 58 pg[10.17( v 41'48 (0'0,41'48] local-lis/les=57/58 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [1] r=0 lpr=57 pi=[40,57)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:41:53 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 58 pg[10.15( v 41'48 (0'0,41'48] local-lis/les=57/58 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [1] r=0 lpr=57 pi=[40,57)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:41:53 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 58 pg[10.16( v 41'48 (0'0,41'48] local-lis/les=57/58 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [1] r=0 lpr=57 pi=[40,57)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:41:53 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 58 pg[10.e( v 41'48 (0'0,41'48] local-lis/les=57/58 n=0 ec=57/40 lis/c=40/40 les/c/f=41/41/0 sis=57) [1] r=0 lpr=57 pi=[40,57)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:41:53 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v45: 291 pgs: 62 unknown, 229 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 04:41:53 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"} v 0)
Feb  2 04:41:53 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]: dispatch
Feb  2 04:41:53 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0)
Feb  2 04:41:53 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Feb  2 04:41:53 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:53 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c64004050 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:41:54 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:54 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c64004050 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:41:54 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:41:54 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:41:54 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:41:54.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:41:54 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:41:54 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:41:54 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:41:54.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:41:54 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:54 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c70001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:41:54 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 10.12 deep-scrub starts
Feb  2 04:41:54 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 10.12 deep-scrub ok
Feb  2 04:41:54 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Feb  2 04:41:54 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]': finished
Feb  2 04:41:54 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Feb  2 04:41:54 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Feb  2 04:41:54 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Feb  2 04:41:54 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]': finished
Feb  2 04:41:54 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]: dispatch
Feb  2 04:41:54 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Feb  2 04:41:54 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 59 pg[12.0( v 54'63 (0'0,54'63] local-lis/les=51/52 n=5 ec=51/51 lis/c=51/51 les/c/f=52/52/0 sis=59 pruub=9.678897858s) [1] r=0 lpr=59 pi=[51,59)/1 crt=54'63 lcod 54'62 mlcod 54'62 active pruub 173.973190308s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:41:54 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 59 pg[12.0( v 54'63 lc 0'0 (0'0,54'63] local-lis/les=51/52 n=0 ec=51/51 lis/c=51/51 les/c/f=52/52/0 sis=59 pruub=9.678897858s) [1] r=0 lpr=59 pi=[51,59)/1 crt=54'63 lcod 54'62 mlcod 0'0 unknown pruub 173.973190308s@ mbc={}] state<Start>: transitioning to Primary
Feb  2 04:41:55 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1).collection(12.0_head 0x564f149d4fc0) operator()   moving buffer(0x564f154774c8 space 0x564f14e6fc80 0x0~1000 clean)
Feb  2 04:41:55 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 10.7 scrub starts
Feb  2 04:41:55 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 10.7 scrub ok
Feb  2 04:41:55 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Feb  2 04:41:55 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Feb  2 04:41:55 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Feb  2 04:41:55 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 60 pg[12.11( v 54'63 lc 0'0 (0'0,54'63] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=54'63 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:41:55 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 60 pg[12.10( v 54'63 lc 0'0 (0'0,54'63] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=54'63 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:41:55 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 60 pg[12.13( v 54'63 lc 0'0 (0'0,54'63] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=54'63 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:41:55 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 60 pg[12.12( v 54'63 lc 0'0 (0'0,54'63] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=54'63 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:41:55 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 60 pg[12.15( v 54'63 lc 0'0 (0'0,54'63] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=54'63 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:41:55 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 60 pg[12.4( v 54'63 lc 0'0 (0'0,54'63] local-lis/les=51/52 n=1 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=54'63 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:41:55 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 60 pg[12.7( v 54'63 lc 0'0 (0'0,54'63] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=54'63 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:41:55 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 60 pg[12.6( v 54'63 lc 0'0 (0'0,54'63] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=54'63 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:41:55 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 60 pg[12.8( v 54'63 lc 0'0 (0'0,54'63] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=54'63 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:41:55 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 60 pg[12.a( v 54'63 lc 0'0 (0'0,54'63] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=54'63 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:41:55 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 60 pg[12.c( v 54'63 lc 0'0 (0'0,54'63] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=54'63 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:41:55 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 60 pg[12.9( v 54'63 lc 0'0 (0'0,54'63] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=54'63 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:41:55 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 60 pg[12.b( v 54'63 lc 0'0 (0'0,54'63] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=54'63 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:41:55 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 60 pg[12.e( v 54'63 lc 0'0 (0'0,54'63] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=54'63 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:41:55 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 60 pg[12.d( v 54'63 lc 0'0 (0'0,54'63] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=54'63 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:41:55 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 60 pg[12.5( v 54'63 lc 0'0 (0'0,54'63] local-lis/les=51/52 n=1 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=54'63 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:41:55 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 60 pg[12.f( v 54'63 lc 0'0 (0'0,54'63] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=54'63 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:41:55 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 60 pg[12.2( v 54'63 lc 0'0 (0'0,54'63] local-lis/les=51/52 n=1 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=54'63 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:41:55 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 60 pg[12.3( v 54'63 lc 0'0 (0'0,54'63] local-lis/les=51/52 n=1 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=54'63 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:41:55 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 60 pg[12.1e( v 54'63 lc 0'0 (0'0,54'63] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=54'63 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:41:55 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 60 pg[12.1f( v 54'63 lc 0'0 (0'0,54'63] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=54'63 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:41:55 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 60 pg[12.1c( v 54'63 lc 0'0 (0'0,54'63] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=54'63 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:41:55 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 60 pg[12.1a( v 54'63 lc 0'0 (0'0,54'63] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=54'63 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:41:55 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 60 pg[12.1b( v 54'63 lc 0'0 (0'0,54'63] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=54'63 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:41:55 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 60 pg[12.18( v 54'63 lc 0'0 (0'0,54'63] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=54'63 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:41:55 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 60 pg[12.19( v 54'63 lc 0'0 (0'0,54'63] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=54'63 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:41:55 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 60 pg[12.16( v 54'63 lc 0'0 (0'0,54'63] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=54'63 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:41:55 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 60 pg[12.14( v 54'63 lc 0'0 (0'0,54'63] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=54'63 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:41:55 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 60 pg[12.1( v 54'63 (0'0,54'63] local-lis/les=51/52 n=1 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=54'63 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:41:55 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 60 pg[12.1d( v 54'63 lc 0'0 (0'0,54'63] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=54'63 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:41:55 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 60 pg[12.17( v 54'63 lc 0'0 (0'0,54'63] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=54'63 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:41:55 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 60 pg[12.11( v 54'63 (0'0,54'63] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=54'63 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:41:55 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 60 pg[12.10( v 54'63 (0'0,54'63] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=54'63 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:41:55 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 60 pg[12.13( v 54'63 (0'0,54'63] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=54'63 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:41:55 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 60 pg[12.15( v 54'63 (0'0,54'63] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=54'63 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:41:55 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 60 pg[12.12( v 54'63 (0'0,54'63] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=54'63 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:41:55 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 60 pg[12.4( v 54'63 (0'0,54'63] local-lis/les=59/60 n=1 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=54'63 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:41:55 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 60 pg[12.7( v 54'63 (0'0,54'63] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=54'63 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:41:55 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 60 pg[12.6( v 54'63 (0'0,54'63] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=54'63 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:41:55 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 60 pg[12.8( v 54'63 (0'0,54'63] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=54'63 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:41:55 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 60 pg[12.a( v 54'63 (0'0,54'63] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=54'63 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:41:55 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 60 pg[12.c( v 54'63 (0'0,54'63] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=54'63 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:41:55 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 60 pg[12.9( v 54'63 (0'0,54'63] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=54'63 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:41:55 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 60 pg[12.e( v 54'63 (0'0,54'63] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=54'63 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:41:55 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 60 pg[12.b( v 54'63 (0'0,54'63] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=54'63 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:41:55 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 60 pg[12.d( v 54'63 (0'0,54'63] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=54'63 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:41:55 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 60 pg[12.5( v 54'63 (0'0,54'63] local-lis/les=59/60 n=1 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=54'63 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:41:55 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 60 pg[12.3( v 54'63 (0'0,54'63] local-lis/les=59/60 n=1 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=54'63 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:41:55 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 60 pg[12.2( v 54'63 (0'0,54'63] local-lis/les=59/60 n=1 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=54'63 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:41:55 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 60 pg[12.0( v 54'63 (0'0,54'63] local-lis/les=59/60 n=0 ec=51/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=54'63 lcod 54'62 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:41:55 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 60 pg[12.f( v 54'63 (0'0,54'63] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=54'63 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:41:55 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 60 pg[12.1e( v 54'63 (0'0,54'63] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=54'63 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:41:55 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 60 pg[12.1a( v 54'63 (0'0,54'63] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=54'63 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:41:55 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 60 pg[12.1c( v 54'63 (0'0,54'63] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=54'63 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:41:55 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 60 pg[12.18( v 54'63 (0'0,54'63] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=54'63 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:41:55 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 60 pg[12.1f( v 54'63 (0'0,54'63] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=54'63 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:41:55 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 60 pg[12.19( v 54'63 (0'0,54'63] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=54'63 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:41:55 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 60 pg[12.16( v 54'63 (0'0,54'63] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=54'63 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:41:55 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 60 pg[12.14( v 54'63 (0'0,54'63] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=54'63 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:41:55 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 60 pg[12.1( v 54'63 (0'0,54'63] local-lis/les=59/60 n=1 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=54'63 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:41:55 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 60 pg[12.1d( v 54'63 (0'0,54'63] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=54'63 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:41:55 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 60 pg[12.17( v 54'63 (0'0,54'63] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=54'63 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:41:55 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 60 pg[12.1b( v 54'63 (0'0,54'63] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=54'63 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:41:55 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v48: 353 pgs: 62 unknown, 291 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Feb  2 04:41:55 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]': finished
Feb  2 04:41:55 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Feb  2 04:41:55 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:55 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c58003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:41:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:56 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c64004050 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:41:56 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e60 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 04:41:56 np0005604790 ceph-mgr[74785]: [progress INFO root] Writing back 22 completed events
Feb  2 04:41:56 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Feb  2 04:41:56 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:41:56 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:41:56 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:41:56.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:41:56 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:41:56 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:41:56 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:41:56.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 04:41:56 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Feb  2 04:41:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:56 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c74003cc0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:41:56 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:56 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Feb  2 04:41:56 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:56 np0005604790 python3[98966]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid d241d473-9fcb-5f74-b163-f1ca4454e7f1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:41:56 np0005604790 podman[98721]: 2026-02-02 09:41:56.972301553 +0000 UTC m=+5.984428761 container create bba6b3afd0ec3070ca3e3e1fddbb99116670e763bdd3b5c9a2a4d7a25c797d67 (image=quay.io/ceph/grafana:10.4.0, name=practical_ardinghelli, maintainer=Grafana Labs <hello@grafana.com>)
Feb  2 04:41:57 np0005604790 systemd[1]: Started libpod-conmon-bba6b3afd0ec3070ca3e3e1fddbb99116670e763bdd3b5c9a2a4d7a25c797d67.scope.
Feb  2 04:41:57 np0005604790 podman[98721]: 2026-02-02 09:41:56.950611934 +0000 UTC m=+5.962739142 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Feb  2 04:41:57 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:41:57 np0005604790 podman[98981]: 2026-02-02 09:41:57.048901027 +0000 UTC m=+0.069644099 container create d06a9207eede1d94348b40dc540a2e0f10bac8d4d273d284410c6896cd601389 (image=quay.io/ceph/ceph:v19, name=zealous_shamir, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:41:57 np0005604790 podman[98721]: 2026-02-02 09:41:57.064996867 +0000 UTC m=+6.077124155 container init bba6b3afd0ec3070ca3e3e1fddbb99116670e763bdd3b5c9a2a4d7a25c797d67 (image=quay.io/ceph/grafana:10.4.0, name=practical_ardinghelli, maintainer=Grafana Labs <hello@grafana.com>)
Feb  2 04:41:57 np0005604790 podman[98721]: 2026-02-02 09:41:57.073275308 +0000 UTC m=+6.085402516 container start bba6b3afd0ec3070ca3e3e1fddbb99116670e763bdd3b5c9a2a4d7a25c797d67 (image=quay.io/ceph/grafana:10.4.0, name=practical_ardinghelli, maintainer=Grafana Labs <hello@grafana.com>)
Feb  2 04:41:57 np0005604790 systemd[1]: Started libpod-conmon-d06a9207eede1d94348b40dc540a2e0f10bac8d4d273d284410c6896cd601389.scope.
Feb  2 04:41:57 np0005604790 podman[98721]: 2026-02-02 09:41:57.077419398 +0000 UTC m=+6.089546626 container attach bba6b3afd0ec3070ca3e3e1fddbb99116670e763bdd3b5c9a2a4d7a25c797d67 (image=quay.io/ceph/grafana:10.4.0, name=practical_ardinghelli, maintainer=Grafana Labs <hello@grafana.com>)
Feb  2 04:41:57 np0005604790 practical_ardinghelli[98995]: 472 0
Feb  2 04:41:57 np0005604790 systemd[1]: libpod-bba6b3afd0ec3070ca3e3e1fddbb99116670e763bdd3b5c9a2a4d7a25c797d67.scope: Deactivated successfully.
Feb  2 04:41:57 np0005604790 conmon[98995]: conmon bba6b3afd0ec3070ca3e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bba6b3afd0ec3070ca3e3e1fddbb99116670e763bdd3b5c9a2a4d7a25c797d67.scope/container/memory.events
Feb  2 04:41:57 np0005604790 podman[98721]: 2026-02-02 09:41:57.079681029 +0000 UTC m=+6.091808257 container died bba6b3afd0ec3070ca3e3e1fddbb99116670e763bdd3b5c9a2a4d7a25c797d67 (image=quay.io/ceph/grafana:10.4.0, name=practical_ardinghelli, maintainer=Grafana Labs <hello@grafana.com>)
Feb  2 04:41:57 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:41:57 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c11b6bc308bc8ded258e9e7847eafd9f7058c8dd96718b57f11d6e25c25b5b9d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:41:57 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c11b6bc308bc8ded258e9e7847eafd9f7058c8dd96718b57f11d6e25c25b5b9d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:41:57 np0005604790 systemd[1]: var-lib-containers-storage-overlay-e53a3b6c8caaed267b12cf87cc3d889f93f7852a2546830298621f5ee2eec712-merged.mount: Deactivated successfully.
Feb  2 04:41:57 np0005604790 podman[98981]: 2026-02-02 09:41:57.019732389 +0000 UTC m=+0.040475461 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:41:57 np0005604790 podman[98981]: 2026-02-02 09:41:57.134693637 +0000 UTC m=+0.155436769 container init d06a9207eede1d94348b40dc540a2e0f10bac8d4d273d284410c6896cd601389 (image=quay.io/ceph/ceph:v19, name=zealous_shamir, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Feb  2 04:41:57 np0005604790 podman[98981]: 2026-02-02 09:41:57.145141986 +0000 UTC m=+0.165885058 container start d06a9207eede1d94348b40dc540a2e0f10bac8d4d273d284410c6896cd601389 (image=quay.io/ceph/ceph:v19, name=zealous_shamir, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0)
Feb  2 04:41:57 np0005604790 podman[98721]: 2026-02-02 09:41:57.145911876 +0000 UTC m=+6.158039084 container remove bba6b3afd0ec3070ca3e3e1fddbb99116670e763bdd3b5c9a2a4d7a25c797d67 (image=quay.io/ceph/grafana:10.4.0, name=practical_ardinghelli, maintainer=Grafana Labs <hello@grafana.com>)
Feb  2 04:41:57 np0005604790 podman[98981]: 2026-02-02 09:41:57.151780013 +0000 UTC m=+0.172523095 container attach d06a9207eede1d94348b40dc540a2e0f10bac8d4d273d284410c6896cd601389 (image=quay.io/ceph/ceph:v19, name=zealous_shamir, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb  2 04:41:57 np0005604790 systemd[1]: libpod-conmon-bba6b3afd0ec3070ca3e3e1fddbb99116670e763bdd3b5c9a2a4d7a25c797d67.scope: Deactivated successfully.
Feb  2 04:41:57 np0005604790 podman[99017]: 2026-02-02 09:41:57.259042145 +0000 UTC m=+0.090446715 container create c2b9725812fa52f2e0bb3b7281183e806fa7f9e17873fa50b9aee1eba309db20 (image=quay.io/ceph/grafana:10.4.0, name=admiring_bell, maintainer=Grafana Labs <hello@grafana.com>)
Feb  2 04:41:57 np0005604790 podman[99017]: 2026-02-02 09:41:57.19701612 +0000 UTC m=+0.028420770 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Feb  2 04:41:57 np0005604790 systemd[1]: Started libpod-conmon-c2b9725812fa52f2e0bb3b7281183e806fa7f9e17873fa50b9aee1eba309db20.scope.
Feb  2 04:41:57 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:41:57 np0005604790 podman[99017]: 2026-02-02 09:41:57.31429464 +0000 UTC m=+0.145699220 container init c2b9725812fa52f2e0bb3b7281183e806fa7f9e17873fa50b9aee1eba309db20 (image=quay.io/ceph/grafana:10.4.0, name=admiring_bell, maintainer=Grafana Labs <hello@grafana.com>)
Feb  2 04:41:57 np0005604790 podman[99017]: 2026-02-02 09:41:57.317885846 +0000 UTC m=+0.149290416 container start c2b9725812fa52f2e0bb3b7281183e806fa7f9e17873fa50b9aee1eba309db20 (image=quay.io/ceph/grafana:10.4.0, name=admiring_bell, maintainer=Grafana Labs <hello@grafana.com>)
Feb  2 04:41:57 np0005604790 admiring_bell[99108]: 472 0
Feb  2 04:41:57 np0005604790 systemd[1]: libpod-c2b9725812fa52f2e0bb3b7281183e806fa7f9e17873fa50b9aee1eba309db20.scope: Deactivated successfully.
Feb  2 04:41:57 np0005604790 podman[99017]: 2026-02-02 09:41:57.321619555 +0000 UTC m=+0.153024125 container attach c2b9725812fa52f2e0bb3b7281183e806fa7f9e17873fa50b9aee1eba309db20 (image=quay.io/ceph/grafana:10.4.0, name=admiring_bell, maintainer=Grafana Labs <hello@grafana.com>)
Feb  2 04:41:57 np0005604790 podman[99017]: 2026-02-02 09:41:57.322132339 +0000 UTC m=+0.153536909 container died c2b9725812fa52f2e0bb3b7281183e806fa7f9e17873fa50b9aee1eba309db20 (image=quay.io/ceph/grafana:10.4.0, name=admiring_bell, maintainer=Grafana Labs <hello@grafana.com>)
Feb  2 04:41:57 np0005604790 systemd[1]: var-lib-containers-storage-overlay-76deaa4c88532542109568444623bfb80cca01e945b695e109e83ddde0a03669-merged.mount: Deactivated successfully.
Feb  2 04:41:57 np0005604790 podman[99017]: 2026-02-02 09:41:57.364852609 +0000 UTC m=+0.196257179 container remove c2b9725812fa52f2e0bb3b7281183e806fa7f9e17873fa50b9aee1eba309db20 (image=quay.io/ceph/grafana:10.4.0, name=admiring_bell, maintainer=Grafana Labs <hello@grafana.com>)
Feb  2 04:41:57 np0005604790 systemd[1]: libpod-conmon-c2b9725812fa52f2e0bb3b7281183e806fa7f9e17873fa50b9aee1eba309db20.scope: Deactivated successfully.
Feb  2 04:41:57 np0005604790 zealous_shamir[99002]: could not fetch user info: no user info saved
Feb  2 04:41:57 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 10.1f scrub starts
Feb  2 04:41:57 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 10.1f scrub ok
Feb  2 04:41:57 np0005604790 systemd[1]: libpod-d06a9207eede1d94348b40dc540a2e0f10bac8d4d273d284410c6896cd601389.scope: Deactivated successfully.
Feb  2 04:41:57 np0005604790 podman[98981]: 2026-02-02 09:41:57.450445153 +0000 UTC m=+0.471188225 container died d06a9207eede1d94348b40dc540a2e0f10bac8d4d273d284410c6896cd601389 (image=quay.io/ceph/ceph:v19, name=zealous_shamir, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:41:57 np0005604790 systemd[1]: Reloading.
Feb  2 04:41:57 np0005604790 podman[98981]: 2026-02-02 09:41:57.494379876 +0000 UTC m=+0.515122908 container remove d06a9207eede1d94348b40dc540a2e0f10bac8d4d273d284410c6896cd601389 (image=quay.io/ceph/ceph:v19, name=zealous_shamir, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:41:57 np0005604790 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 04:41:57 np0005604790 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 04:41:57 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v49: 353 pgs: 62 unknown, 291 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Feb  2 04:41:57 np0005604790 systemd[1]: var-lib-containers-storage-overlay-c11b6bc308bc8ded258e9e7847eafd9f7058c8dd96718b57f11d6e25c25b5b9d-merged.mount: Deactivated successfully.
Feb  2 04:41:57 np0005604790 systemd[1]: libpod-conmon-d06a9207eede1d94348b40dc540a2e0f10bac8d4d273d284410c6896cd601389.scope: Deactivated successfully.
Feb  2 04:41:57 np0005604790 systemd[1]: Reloading.
Feb  2 04:41:57 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:57 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c70001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:41:57 np0005604790 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 04:41:57 np0005604790 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 04:41:57 np0005604790 python3[99209]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid d241d473-9fcb-5f74-b163-f1ca4454e7f1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:41:57 np0005604790 podman[99248]: 2026-02-02 09:41:57.974416967 +0000 UTC m=+0.074627482 container create 67641d014b67ad501fe1b7137e484a577fd86a264594c007f2ba183e4da9098a (image=quay.io/ceph/ceph:v19, name=great_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb  2 04:41:58 np0005604790 podman[99248]: 2026-02-02 09:41:57.934933363 +0000 UTC m=+0.035143938 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:41:58 np0005604790 systemd[1]: Started libpod-conmon-67641d014b67ad501fe1b7137e484a577fd86a264594c007f2ba183e4da9098a.scope.
Feb  2 04:41:58 np0005604790 systemd[1]: Starting Ceph grafana.compute-0 for d241d473-9fcb-5f74-b163-f1ca4454e7f1...
Feb  2 04:41:58 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:41:58 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a508463891305486dba05a23a0e01d88a2a2d8e8e55da0eb38bd16f5618edcb4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:41:58 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a508463891305486dba05a23a0e01d88a2a2d8e8e55da0eb38bd16f5618edcb4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:58 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c58003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:41:58 np0005604790 podman[99248]: 2026-02-02 09:41:58.069789122 +0000 UTC m=+0.169999677 container init 67641d014b67ad501fe1b7137e484a577fd86a264594c007f2ba183e4da9098a (image=quay.io/ceph/ceph:v19, name=great_hawking, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 04:41:58 np0005604790 podman[99248]: 2026-02-02 09:41:58.079636025 +0000 UTC m=+0.179846540 container start 67641d014b67ad501fe1b7137e484a577fd86a264594c007f2ba183e4da9098a (image=quay.io/ceph/ceph:v19, name=great_hawking, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb  2 04:41:58 np0005604790 podman[99248]: 2026-02-02 09:41:58.091775469 +0000 UTC m=+0.191986034 container attach 67641d014b67ad501fe1b7137e484a577fd86a264594c007f2ba183e4da9098a (image=quay.io/ceph/ceph:v19, name=great_hawking, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 04:41:58 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:41:58 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:41:58 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:41:58.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:41:58 np0005604790 great_hawking[99266]: {
Feb  2 04:41:58 np0005604790 great_hawking[99266]:    "user_id": "openstack",
Feb  2 04:41:58 np0005604790 great_hawking[99266]:    "display_name": "openstack",
Feb  2 04:41:58 np0005604790 great_hawking[99266]:    "email": "",
Feb  2 04:41:58 np0005604790 great_hawking[99266]:    "suspended": 0,
Feb  2 04:41:58 np0005604790 great_hawking[99266]:    "max_buckets": 1000,
Feb  2 04:41:58 np0005604790 great_hawking[99266]:    "subusers": [],
Feb  2 04:41:58 np0005604790 great_hawking[99266]:    "keys": [
Feb  2 04:41:58 np0005604790 great_hawking[99266]:        {
Feb  2 04:41:58 np0005604790 great_hawking[99266]:            "user": "openstack",
Feb  2 04:41:58 np0005604790 great_hawking[99266]:            "access_key": "GEA58LQEJ0Q31REJSY4K",
Feb  2 04:41:58 np0005604790 great_hawking[99266]:            "secret_key": "JxoGOf86IxDFCzsQXQqXbF3WlDGFnZJeq14UxLtb",
Feb  2 04:41:58 np0005604790 great_hawking[99266]:            "active": true,
Feb  2 04:41:58 np0005604790 great_hawking[99266]:            "create_date": "2026-02-02T09:41:58.265439Z"
Feb  2 04:41:58 np0005604790 great_hawking[99266]:        }
Feb  2 04:41:58 np0005604790 great_hawking[99266]:    ],
Feb  2 04:41:58 np0005604790 great_hawking[99266]:    "swift_keys": [],
Feb  2 04:41:58 np0005604790 great_hawking[99266]:    "caps": [],
Feb  2 04:41:58 np0005604790 great_hawking[99266]:    "op_mask": "read, write, delete",
Feb  2 04:41:58 np0005604790 great_hawking[99266]:    "default_placement": "",
Feb  2 04:41:58 np0005604790 great_hawking[99266]:    "default_storage_class": "",
Feb  2 04:41:58 np0005604790 great_hawking[99266]:    "placement_tags": [],
Feb  2 04:41:58 np0005604790 great_hawking[99266]:    "bucket_quota": {
Feb  2 04:41:58 np0005604790 great_hawking[99266]:        "enabled": false,
Feb  2 04:41:58 np0005604790 great_hawking[99266]:        "check_on_raw": false,
Feb  2 04:41:58 np0005604790 great_hawking[99266]:        "max_size": -1,
Feb  2 04:41:58 np0005604790 great_hawking[99266]:        "max_size_kb": 0,
Feb  2 04:41:58 np0005604790 great_hawking[99266]:        "max_objects": -1
Feb  2 04:41:58 np0005604790 great_hawking[99266]:    },
Feb  2 04:41:58 np0005604790 great_hawking[99266]:    "user_quota": {
Feb  2 04:41:58 np0005604790 great_hawking[99266]:        "enabled": false,
Feb  2 04:41:58 np0005604790 great_hawking[99266]:        "check_on_raw": false,
Feb  2 04:41:58 np0005604790 great_hawking[99266]:        "max_size": -1,
Feb  2 04:41:58 np0005604790 great_hawking[99266]:        "max_size_kb": 0,
Feb  2 04:41:58 np0005604790 great_hawking[99266]:        "max_objects": -1
Feb  2 04:41:58 np0005604790 great_hawking[99266]:    },
Feb  2 04:41:58 np0005604790 great_hawking[99266]:    "temp_url_keys": [],
Feb  2 04:41:58 np0005604790 great_hawking[99266]:    "type": "rgw",
Feb  2 04:41:58 np0005604790 great_hawking[99266]:    "mfa_ids": [],
Feb  2 04:41:58 np0005604790 great_hawking[99266]:    "account_id": "",
Feb  2 04:41:58 np0005604790 great_hawking[99266]:    "path": "/",
Feb  2 04:41:58 np0005604790 great_hawking[99266]:    "create_date": "2026-02-02T09:41:58.264932Z",
Feb  2 04:41:58 np0005604790 great_hawking[99266]:    "tags": [],
Feb  2 04:41:58 np0005604790 great_hawking[99266]:    "group_ids": []
Feb  2 04:41:58 np0005604790 great_hawking[99266]: }
Feb  2 04:41:58 np0005604790 great_hawking[99266]: 
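[editor's note] Once the syslog prefixes are stripped, the container output above is a plain radosgw-admin JSON record. A minimal sketch of how tooling might extract the generated S3 credentials from it, using only field names visible in the record above:

    #!/usr/bin/env python3
    # Read a radosgw-admin "user create" record from stdin and pull the keys.
    import json
    import sys

    record = json.loads(sys.stdin.read())   # e.g. piped radosgw-admin output
    key = record["keys"][0]                 # first (here, only) S3 key pair
    print("user:", record["user_id"])
    print("access_key:", key["access_key"])
    # key["secret_key"] is also present; avoid echoing it into logs, as the
    # raw container stdout above does.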
Feb  2 04:41:58 np0005604790 podman[99388]: 2026-02-02 09:41:58.285090298 +0000 UTC m=+0.052345828 container create 444445befc58784fb5be994b3d6a442e59c1152d326f283d9b64377a2bb5c634 (image=quay.io/ceph/grafana:10.4.0, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Feb  2 04:41:58 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:41:58 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:41:58 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:41:58.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
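[editor's note] The beast access-log lines above follow a fixed shape: handle pointer, client IP, user, timestamp, request line, HTTP status, byte count, then latency. A small sketch of pulling the useful fields out; the pattern below is an assumption fitted to the two lines above, not an official format specification:

    #!/usr/bin/env python3
    # Parse an RGW beast access-log line into its main fields.
    import re

    BEAST = re.compile(
        r'beast: 0x[0-9a-f]+: (?P<ip>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+)'
        r'.*latency=(?P<latency>[\d.]+)s'
    )

    line = ('beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous '
            '[02/Feb/2026:09:41:58.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.001000027s')
    m = BEAST.match(line)
    if m:
        print(m.group("ip"), m.group("status"), m.group("latency"))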
Feb  2 04:41:58 np0005604790 systemd[1]: libpod-67641d014b67ad501fe1b7137e484a577fd86a264594c007f2ba183e4da9098a.scope: Deactivated successfully.
Feb  2 04:41:58 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0fc3b4c4afd673b83b7a1da67922de626d76a1ee59ac3d8dafacc8a8412000d/merged/etc/grafana/certs supports timestamps until 2038 (0x7fffffff)
Feb  2 04:41:58 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0fc3b4c4afd673b83b7a1da67922de626d76a1ee59ac3d8dafacc8a8412000d/merged/etc/grafana/grafana.ini supports timestamps until 2038 (0x7fffffff)
Feb  2 04:41:58 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0fc3b4c4afd673b83b7a1da67922de626d76a1ee59ac3d8dafacc8a8412000d/merged/etc/grafana/provisioning/datasources supports timestamps until 2038 (0x7fffffff)
Feb  2 04:41:58 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0fc3b4c4afd673b83b7a1da67922de626d76a1ee59ac3d8dafacc8a8412000d/merged/etc/grafana/provisioning/dashboards supports timestamps until 2038 (0x7fffffff)
Feb  2 04:41:58 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0fc3b4c4afd673b83b7a1da67922de626d76a1ee59ac3d8dafacc8a8412000d/merged/var/lib/grafana/grafana.db supports timestamps until 2038 (0x7fffffff)
Feb  2 04:41:58 np0005604790 podman[99388]: 2026-02-02 09:41:58.347265258 +0000 UTC m=+0.114520778 container init 444445befc58784fb5be994b3d6a442e59c1152d326f283d9b64377a2bb5c634 (image=quay.io/ceph/grafana:10.4.0, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Feb  2 04:41:58 np0005604790 podman[99388]: 2026-02-02 09:41:58.260510052 +0000 UTC m=+0.027765632 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Feb  2 04:41:58 np0005604790 podman[99388]: 2026-02-02 09:41:58.355712643 +0000 UTC m=+0.122968163 container start 444445befc58784fb5be994b3d6a442e59c1152d326f283d9b64377a2bb5c634 (image=quay.io/ceph/grafana:10.4.0, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:58 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c64004050 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:41:58 np0005604790 bash[99388]: 444445befc58784fb5be994b3d6a442e59c1152d326f283d9b64377a2bb5c634
Feb  2 04:41:58 np0005604790 systemd[1]: Started Ceph grafana.compute-0 for d241d473-9fcb-5f74-b163-f1ca4454e7f1.
Feb  2 04:41:58 np0005604790 podman[99414]: 2026-02-02 09:41:58.43989578 +0000 UTC m=+0.090183978 container died 67641d014b67ad501fe1b7137e484a577fd86a264594c007f2ba183e4da9098a (image=quay.io/ceph/ceph:v19, name=great_hawking, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Feb  2 04:41:58 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 10.10 scrub starts
Feb  2 04:41:58 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 10.10 scrub ok
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=settings t=2026-02-02T09:41:58.569431267Z level=info msg="Starting Grafana" version=10.4.0 commit=03f502a94d17f7dc4e6c34acdf8428aedd986e4c branch=HEAD compiled=2026-02-02T09:41:58Z
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=settings t=2026-02-02T09:41:58.569970561Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=settings t=2026-02-02T09:41:58.569984882Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=settings t=2026-02-02T09:41:58.569992572Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=settings t=2026-02-02T09:41:58.569998922Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=settings t=2026-02-02T09:41:58.570012222Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=settings t=2026-02-02T09:41:58.570020533Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=settings t=2026-02-02T09:41:58.570030403Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=settings t=2026-02-02T09:41:58.570040323Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=settings t=2026-02-02T09:41:58.570058474Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=settings t=2026-02-02T09:41:58.570070044Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=settings t=2026-02-02T09:41:58.570078694Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=settings t=2026-02-02T09:41:58.570087704Z level=info msg=Target target=[all]
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=settings t=2026-02-02T09:41:58.570108145Z level=info msg="Path Home" path=/usr/share/grafana
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=settings t=2026-02-02T09:41:58.570117225Z level=info msg="Path Data" path=/var/lib/grafana
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=settings t=2026-02-02T09:41:58.570126305Z level=info msg="Path Logs" path=/var/log/grafana
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=settings t=2026-02-02T09:41:58.570135066Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=settings t=2026-02-02T09:41:58.570167597Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=settings t=2026-02-02T09:41:58.570174807Z level=info msg="App mode production"
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=sqlstore t=2026-02-02T09:41:58.570889266Z level=info msg="Connecting to DB" dbtype=sqlite3
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=sqlstore t=2026-02-02T09:41:58.570915676Z level=warn msg="SQLite database file has broader permissions than it should" path=/var/lib/grafana/grafana.db mode=-rw-r--r-- expected=-rw-r-----
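[editor's note] The sqlstore warning above is advisory: Grafana found grafana.db at mode -rw-r--r-- but expects -rw-r----- (0640). A one-line sketch of the corresponding fix; the path is the in-container path from the warning, and on the host the file sits under the bind-mounted grafana data directory for this daemon (exact host path is not shown in this log):

    #!/usr/bin/env python3
    # Tighten grafana.db to 0640 as the warning suggests.
    import os

    os.chmod("/var/lib/grafana/grafana.db", 0o640)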
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:58.574397439Z level=info msg="Starting DB migrations"
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:58.576045973Z level=info msg="Executing migration" id="create migration_log table"
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:58.577647416Z level=info msg="Migration successfully executed" id="create migration_log table" duration=1.600873ms
Feb  2 04:41:58 np0005604790 systemd[1]: var-lib-containers-storage-overlay-a508463891305486dba05a23a0e01d88a2a2d8e8e55da0eb38bd16f5618edcb4-merged.mount: Deactivated successfully.
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:58.604811431Z level=info msg="Executing migration" id="create user table"
Feb  2 04:41:58 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:58.60664473Z level=info msg="Migration successfully executed" id="create user table" duration=1.836769ms
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:58.616803441Z level=info msg="Executing migration" id="add unique index user.login"
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:58.618874036Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=2.070665ms
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:58.622614006Z level=info msg="Executing migration" id="add unique index user.email"
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:58.623995033Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=1.377787ms
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:58.634788911Z level=info msg="Executing migration" id="drop index UQE_user_login - v1"
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:58.636286481Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=1.50163ms
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:58.648400564Z level=info msg="Executing migration" id="drop index UQE_user_email - v1"
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:58.649209326Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=809.292µs
Feb  2 04:41:58 np0005604790 podman[99414]: 2026-02-02 09:41:58.650984803 +0000 UTC m=+0.301272931 container remove 67641d014b67ad501fe1b7137e484a577fd86a264594c007f2ba183e4da9098a (image=quay.io/ceph/ceph:v19, name=great_hawking, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:41:58 np0005604790 systemd[1]: libpod-conmon-67641d014b67ad501fe1b7137e484a577fd86a264594c007f2ba183e4da9098a.scope: Deactivated successfully.
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:58.657064716Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1"
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:58.661572856Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=4.508121ms
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:58.66807903Z level=info msg="Executing migration" id="create user table v2"
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:58.669353994Z level=info msg="Migration successfully executed" id="create user table v2" duration=1.277764ms
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:58.673763661Z level=info msg="Executing migration" id="create index UQE_user_login - v2"
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:58.674950803Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=1.188312ms
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:58.678853667Z level=info msg="Executing migration" id="create index UQE_user_email - v2"
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:58.680030539Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=1.177251ms
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:58.682688989Z level=info msg="Executing migration" id="copy data_source v1 to v2"
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:58.683275985Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=588.216µs
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:58.686710187Z level=info msg="Executing migration" id="Drop old table user_v1"
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:58.687446616Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=736.299µs
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:58.709059043Z level=info msg="Executing migration" id="Add column help_flags1 to user table"
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:58.710969784Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=1.901371ms
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:58.714283663Z level=info msg="Executing migration" id="Update user table charset"
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:58.714318414Z level=info msg="Migration successfully executed" id="Update user table charset" duration=35.601µs
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:58.720097398Z level=info msg="Executing migration" id="Add last_seen_at column to user"
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:58.721715591Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=1.617423ms
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:58.727632919Z level=info msg="Executing migration" id="Add missing user data"
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:58.728068581Z level=info msg="Migration successfully executed" id="Add missing user data" duration=435.471µs
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:58.734282626Z level=info msg="Executing migration" id="Add is_disabled column to user"
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:58.736030253Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=1.747647ms
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:58.738392826Z level=info msg="Executing migration" id="Add index user.login/user.email"
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:58.739604528Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=1.211602ms
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:58.750835678Z level=info msg="Executing migration" id="Add is_service_account column to user"
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:58.753190121Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=2.355193ms
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:58.759791947Z level=info msg="Executing migration" id="Update is_service_account column to nullable"
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:58.774162041Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=14.358423ms
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:58.7823982Z level=info msg="Executing migration" id="Add uid column to user"
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:58.785157874Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=2.755464ms
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:58.789100639Z level=info msg="Executing migration" id="Update uid column values for users"
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:58.78988133Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=780.911µs
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:58.793719303Z level=info msg="Executing migration" id="Add unique index user_uid"
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:58.795420588Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=1.702086ms
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:58.824133934Z level=info msg="Executing migration" id="create temp user table v1-7"
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:58.825260974Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=1.13032ms
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:58.827744221Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7"
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:58.828517561Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=770.95µs
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:58.83072903Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7"
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:58.831267435Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=538.455µs
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:58.833236157Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7"
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:58.833741241Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=505.024µs
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:58.842305779Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7"
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:58.842907725Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=602.186µs
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:58.847895328Z level=info msg="Executing migration" id="Update temp_user table charset"
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:58.847914189Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=19.881µs
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:58.881862745Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1"
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:58.883317754Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=1.460449ms
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:58.896465575Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1"
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:58.89704255Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=575.055µs
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:58.902439584Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1"
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:58.903810201Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=1.370577ms
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:58.906174214Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1"
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:58.907798997Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=1.623763ms
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:58.912947005Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1"
Feb  2 04:41:58 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:58.918650427Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=5.707453ms
Feb  2 04:41:58 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:58.92402225Z level=info msg="Executing migration" id="create temp_user v2"
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:58.925822278Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=1.804108ms
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:58.968037445Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2"
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:58.969862534Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=1.830078ms
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:58.974772685Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2"
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:58.97571061Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=937.756µs
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:58.98173172Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2"
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:58.983144748Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=1.412058ms
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:58.988247084Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2"
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:58.98960466Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=1.356966ms
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:58.995750524Z level=info msg="Executing migration" id="copy temp_user v1 to v2"
Feb  2 04:41:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:58.996375071Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=628.717µs
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.008833964Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty"
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.009909302Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=1.074799ms
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.017851374Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire"
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.01842983Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=579.356µs
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.028248752Z level=info msg="Executing migration" id="create star table"
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.030935513Z level=info msg="Migration successfully executed" id="create star table" duration=2.688631ms
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.035734952Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id"
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.037005965Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=1.270714ms
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.042195674Z level=info msg="Executing migration" id="create org table v1"
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.043442657Z level=info msg="Migration successfully executed" id="create org table v1" duration=1.249073ms
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.047227868Z level=info msg="Executing migration" id="create index UQE_org_name - v1"
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.048468521Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=1.238993ms
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.051886383Z level=info msg="Executing migration" id="create org_user table v1"
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.053331501Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=1.446939ms
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.069804241Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1"
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.071346842Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=1.543941ms
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.105171535Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1"
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.106556782Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=1.386007ms
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.114427612Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1"
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.115201352Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=773.49µs
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.123224796Z level=info msg="Executing migration" id="Update org table charset"
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.123321409Z level=info msg="Migration successfully executed" id="Update org table charset" duration=98.423µs
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.131757974Z level=info msg="Executing migration" id="Update org_user table charset"
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.131812726Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=52.631µs
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.139963313Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers"
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.140307452Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=344.249µs
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.147366021Z level=info msg="Executing migration" id="create dashboard table"
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.14882196Z level=info msg="Migration successfully executed" id="create dashboard table" duration=1.46335ms
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.153132985Z level=info msg="Executing migration" id="add index dashboard.account_id"
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.154057709Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=924.304µs
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.158675112Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug"
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.159507185Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=809.252µs
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.16496032Z level=info msg="Executing migration" id="create dashboard_tag table"
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.165666939Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=706.009µs
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.176231571Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term"
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.177586357Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=1.354036ms
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.180643659Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1"
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.181373698Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=729.809µs
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.184695887Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1"
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.190125092Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=5.426215ms
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.196847331Z level=info msg="Executing migration" id="create dashboard v2"
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.198144606Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=1.296275ms
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.207349642Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2"
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.208218455Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=868.954µs
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.213393783Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2"
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.214205145Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=809.621µs
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.222869006Z level=info msg="Executing migration" id="copy dashboard v1 to v2"
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.223724369Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=856.112µs
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.226580425Z level=info msg="Executing migration" id="drop table dashboard_v1"
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.227562701Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=977.946µs
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.229936924Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1"
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.230010606Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=74.112µs
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.232650487Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2"
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.234391643Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=1.743796ms
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.245685545Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2"
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.247578695Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=1.88871ms
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.254200862Z level=info msg="Executing migration" id="Add column gnetId in dashboard"
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.256140214Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=1.937512ms
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.260301075Z level=info msg="Executing migration" id="Add index for gnetId in dashboard"
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.26123066Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=928.594µs
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.267077016Z level=info msg="Executing migration" id="Add column plugin_id in dashboard"
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.268807602Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=1.730407ms
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.272721276Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard"
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.273657491Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=936.685µs
Feb  2 04:41:59 np0005604790 python3[99469]: ansible-ansible.builtin.get_url Invoked with url=http://192.168.122.100:8443 dest=/tmp/dash_response mode=0644 validate_certs=False force=False http_agent=ansible-httpget use_proxy=True force_basic_auth=False use_gssapi=False backup=False checksum= timeout=10 unredirected_headers=[] decompress=True use_netrc=True unsafe_writes=False url_username=None url_password=NOT_LOGGING_PARAMETER client_cert=None client_key=None headers=None tmp_dest=None ciphers=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
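[editor's note] The get_url task above is effectively an HTTP fetch of the Grafana endpoint with certificate checking disabled and a 10-second timeout, saved to /tmp/dash_response. A minimal sketch of the same probe, assuming the third-party requests library; URL, timeout, and validate_certs=False are taken straight from the task parameters:

    #!/usr/bin/env python3
    # Mirror of the ansible get_url health probe against the dashboard.
    import requests

    resp = requests.get("http://192.168.122.100:8443", timeout=10, verify=False)
    resp.raise_for_status()               # fail loudly on non-2xx, like the task
    with open("/tmp/dash_response", "wb") as fh:
        fh.write(resp.content)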
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.282482097Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag"
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.28374339Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=1.263553ms
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.289472673Z level=info msg="Executing migration" id="Update dashboard table charset"
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.289528265Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=59.392µs
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.29685885Z level=info msg="Executing migration" id="Update dashboard_tag table charset"
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.296905262Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=49.892µs
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.303023945Z level=info msg="Executing migration" id="Add column folder_id in dashboard"
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.305721137Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=2.699412ms
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.311518492Z level=info msg="Executing migration" id="Add column isFolder in dashboard"
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.313786262Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=2.267981ms
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.317394048Z level=info msg="Executing migration" id="Add column has_acl in dashboard"
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.319062453Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=1.663335ms
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.325302179Z level=info msg="Executing migration" id="Add column uid in dashboard"
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.327302943Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=2.003304ms
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.334206827Z level=info msg="Executing migration" id="Update uid column values in dashboard"
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.33470869Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=505.303µs
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.342244861Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid"
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.343996807Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=1.755987ms
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.349508174Z level=info msg="Executing migration" id="Remove unique index org_id_slug"
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.350716187Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=1.229862ms
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.355019822Z level=info msg="Executing migration" id="Update dashboard title length"
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.355058413Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=39.852µs
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.359858881Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id"
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.361892655Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=2.032304ms
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.370416702Z level=info msg="Executing migration" id="create dashboard_provisioning"
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.37296308Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=2.552969ms
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.388713541Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1"
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.394917476Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=6.208325ms
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.399230021Z level=info msg="Executing migration" id="create dashboard_provisioning v2"
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.399965171Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=736.65µs
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.421906217Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2"
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.422734249Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=828.272µs
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.439704972Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2"
Feb  2 04:41:59 np0005604790 ceph-mgr[74785]: [dashboard INFO request] [192.168.122.100:44444] [GET] [200] [0.122s] [6.3K] [3b33c382-6f8f-44c6-ae61-afd5567241d9] /
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.440532304Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=826.512µs
Feb  2 04:41:59 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 10.1e scrub starts
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.454740863Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2"
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.455301048Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=560.875µs
Feb  2 04:41:59 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 10.1e scrub ok
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.474733686Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty"
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.475657901Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=925.445µs
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.498985234Z level=info msg="Executing migration" id="Add check_sum column"
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.502777925Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=3.793892ms
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.513555473Z level=info msg="Executing migration" id="Add index for dashboard_title"
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.515008241Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=1.454349ms
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.521467974Z level=info msg="Executing migration" id="delete tags for deleted dashboards"
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.521846594Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=379.22µs
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.53558396Z level=info msg="Executing migration" id="delete stars for deleted dashboards"
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.536009362Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=429.432µs
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.543087611Z level=info msg="Executing migration" id="Add index for dashboard_is_folder"
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.544307083Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=1.219642ms
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.566005812Z level=info msg="Executing migration" id="Add isPublic for dashboard"
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.568574421Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=2.565829ms
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.573988255Z level=info msg="Executing migration" id="create data_source table"
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.575167167Z level=info msg="Migration successfully executed" id="create data_source table" duration=1.182592ms
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.585482122Z level=info msg="Executing migration" id="add index data_source.account_id"
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.586718065Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=1.241123ms
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.593534037Z level=info msg="Executing migration" id="add unique index data_source.account_id_name"
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.595013916Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=1.483389ms
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.598734506Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1"
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.599811944Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=1.080058ms
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.60751485Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1"
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.608728362Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=1.217612ms
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.615667318Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1"
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.626850496Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=11.179858ms
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.638056725Z level=info msg="Executing migration" id="create data_source table v2"
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.639119044Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=1.065149ms
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.64348193Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2"
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.644743294Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=1.264194ms
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.64724134Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2"
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.648049492Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=808.842µs
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.650169828Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2"
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.650854577Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=684.329µs
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.655417959Z level=info msg="Executing migration" id="Add column with_credentials"
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[98607]: ts=2026-02-02T09:41:59.656Z caller=cluster.go:698 level=info component=cluster msg="gossip settled; proceeding" elapsed=10.003220751s
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.658726077Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=3.314489ms
Feb  2 04:41:59 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v50: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 3.6 KiB/s rd, 3 op/s
Feb  2 04:41:59 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"} v 0)
Feb  2 04:41:59 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]: dispatch
Feb  2 04:41:59 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0)
Feb  2 04:41:59 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Feb  2 04:41:59 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0)
Feb  2 04:41:59 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Feb  2 04:41:59 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0)
Feb  2 04:41:59 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Feb  2 04:41:59 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0)
Feb  2 04:41:59 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.661808749Z level=info msg="Executing migration" id="Add secure json data column"
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.664743387Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=2.937528ms
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.670308376Z level=info msg="Executing migration" id="Update data_source table charset"
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.670355287Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=44.371µs
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.679110381Z level=info msg="Executing migration" id="Update initial version to 1"
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.6794442Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=336.659µs
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.689924619Z level=info msg="Executing migration" id="Add read_only data column"
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.698156719Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=3.37799ms
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.708587567Z level=info msg="Executing migration" id="Migrate logging ds to loki ds"
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.709022559Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=438.752µs
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.713154049Z level=info msg="Executing migration" id="Update json_data with nulls"
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.713690104Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=540.015µs
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.719282983Z level=info msg="Executing migration" id="Add uid column"
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.722351875Z level=info msg="Migration successfully executed" id="Add uid column" duration=3.068302ms
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.724935564Z level=info msg="Executing migration" id="Update uid value"
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.725252882Z level=info msg="Migration successfully executed" id="Update uid value" duration=314.748µs
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.727741249Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid"
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.728784317Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=1.041287ms
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.737094238Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default"
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.737980632Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=886.684µs
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.741900657Z level=info msg="Executing migration" id="create api_key table"
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.743031037Z level=info msg="Migration successfully executed" id="create api_key table" duration=1.130411ms
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.750478735Z level=info msg="Executing migration" id="add index api_key.account_id"
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.751452481Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=974.076µs
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.75403278Z level=info msg="Executing migration" id="add index api_key.key"
Feb  2 04:41:59 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.759767003Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=5.729413ms
Feb  2 04:41:59 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.grafana}] v 0)
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.766875783Z level=info msg="Executing migration" id="add index api_key.account_id_name"
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.767931871Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=1.057238ms
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.77276351Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1"
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.773698335Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=936.715µs
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.778457702Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1"
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.779334646Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=874.643µs
Feb  2 04:41:59 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:59 np0005604790 ceph-mgr[74785]: [progress INFO root] complete: finished ev 274cc80d-b0c6-4343-a544-070d982742e0 (Updating grafana deployment (+1 -> 1))
Feb  2 04:41:59 np0005604790 ceph-mgr[74785]: [progress INFO root] Completed event 274cc80d-b0c6-4343-a544-070d982742e0 (Updating grafana deployment (+1 -> 1)) in 10 seconds
Feb  2 04:41:59 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.grafana}] v 0)
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:41:59 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c74003cc0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.791419798Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1"
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.792931678Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=1.51438ms
Feb  2 04:41:59 np0005604790 python3[99494]: ansible-ansible.builtin.get_url Invoked with url=http://192.168.122.100:8443 dest=/tmp/dash_http_response mode=0644 validate_certs=False username=VALUE_SPECIFIED_IN_NO_LOG_PARAMETER password=NOT_LOGGING_PARAMETER url_username=VALUE_SPECIFIED_IN_NO_LOG_PARAMETER url_password=NOT_LOGGING_PARAMETER force=False http_agent=ansible-httpget use_proxy=True force_basic_auth=False use_gssapi=False backup=False checksum= timeout=10 unredirected_headers=[] decompress=True use_netrc=True unsafe_writes=False client_cert=None client_key=None headers=None tmp_dest=None ciphers=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.799172565Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1"
Feb  2 04:41:59 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:59 np0005604790 ceph-mgr[74785]: [progress INFO root] update: starting ev ebe0ecfa-b939-4ce8-9579-137d3d63efb5 (Updating ingress.rgw.default deployment (+2 -> 4))
Feb  2 04:41:59 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.rgw.default/keepalived_password}] v 0)
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.81058695Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=11.409454ms
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.821230994Z level=info msg="Executing migration" id="create api_key table v2"
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.822996981Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=1.767817ms
Feb  2 04:41:59 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:41:59 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Feb  2 04:41:59 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Feb  2 04:41:59 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Feb  2 04:41:59 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Feb  2 04:41:59 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.rgw.default.compute-2.tapsuz on compute-2
Feb  2 04:41:59 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.rgw.default.compute-2.tapsuz on compute-2
Feb  2 04:41:59 np0005604790 ceph-mgr[74785]: [dashboard INFO request] [192.168.122.100:44456] [GET] [200] [0.003s] [6.3K] [7972fbae-ff5f-410a-a96d-1737782e1f13] /
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.85517325Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2"
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.856711741Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=1.542052ms
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.863583984Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2"
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.864472508Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=884.804µs
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.868370562Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2"
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.869898463Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=1.52792ms
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.902283757Z level=info msg="Executing migration" id="copy api_key v1 to v2"
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.903186691Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=908.704µs
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.906857509Z level=info msg="Executing migration" id="Drop old table api_key_v1"
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.907961088Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=1.107999ms
Feb  2 04:41:59 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.946751723Z level=info msg="Executing migration" id="Update api_key table charset"
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.946818145Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=72.422µs
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.953344569Z level=info msg="Executing migration" id="Add expires to api_key table"
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.957441659Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=4.099639ms
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.968758041Z level=info msg="Executing migration" id="Add service account foreign key"
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.973170429Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=4.412187ms
Feb  2 04:41:59 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]': finished
Feb  2 04:41:59 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Feb  2 04:41:59 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Feb  2 04:41:59 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Feb  2 04:41:59 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Feb  2 04:41:59 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.991086487Z level=info msg="Executing migration" id="set service account foreign key to nil if 0"
Feb  2 04:41:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:41:59.991694283Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=614.807µs
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.011531382Z level=info msg="Executing migration" id="Add last_used_at to api_key table"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.014586684Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=3.057232ms
Feb  2 04:42:00 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.036367285Z level=info msg="Executing migration" id="Add is_revoked column to api_key table"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.043117365Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=6.7505ms
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.05380032Z level=info msg="Executing migration" id="create dashboard_snapshot table v4"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.05566178Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=1.8624ms
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[11.1a( empty local-lis/les=0/0 n=0 ec=59/42 lis/c=59/59 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[8.19( empty local-lis/les=0/0 n=0 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=61) [1] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[8.10( empty local-lis/les=0/0 n=0 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=61) [1] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[8.12( empty local-lis/les=0/0 n=0 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=61) [1] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[11.1e( empty local-lis/les=0/0 n=0 ec=59/42 lis/c=59/59 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[11.1c( empty local-lis/les=0/0 n=0 ec=59/42 lis/c=59/59 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[11.1d( empty local-lis/les=0/0 n=0 ec=59/42 lis/c=59/59 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[11.1b( empty local-lis/les=0/0 n=0 ec=59/42 lis/c=59/59 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[8.18( empty local-lis/les=0/0 n=0 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=61) [1] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[8.1b( empty local-lis/les=0/0 n=0 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=61) [1] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[11.7( empty local-lis/les=0/0 n=0 ec=59/42 lis/c=59/59 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[8.4( empty local-lis/les=0/0 n=0 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=61) [1] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[11.4( empty local-lis/les=0/0 n=0 ec=59/42 lis/c=59/59 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[11.5( empty local-lis/les=0/0 n=0 ec=59/42 lis/c=59/59 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[8.8( empty local-lis/les=0/0 n=0 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=61) [1] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[11.f( empty local-lis/les=0/0 n=0 ec=59/42 lis/c=59/59 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[11.1( empty local-lis/les=0/0 n=0 ec=59/42 lis/c=59/59 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[11.12( empty local-lis/les=0/0 n=0 ec=59/42 lis/c=59/59 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[8.17( empty local-lis/les=0/0 n=0 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=61) [1] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[11.14( empty local-lis/les=0/0 n=0 ec=59/42 lis/c=59/59 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[8.14( empty local-lis/les=0/0 n=0 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=61) [1] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[12.11( v 54'63 (0'0,54'63] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.566754341s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=54'63 lcod 0'0 mlcod 0'0 active pruub 180.931884766s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[12.11( v 54'63 (0'0,54'63] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.566682816s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=54'63 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.931884766s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[12.13( v 54'63 (0'0,54'63] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.573316574s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=54'63 lcod 0'0 mlcod 0'0 active pruub 180.939117432s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[10.15( v 60'57 (0'0,60'57] local-lis/les=57/58 n=0 ec=57/40 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=9.503010750s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=60'57 lcod 60'56 mlcod 60'56 active pruub 178.868820190s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[12.13( v 54'63 (0'0,54'63] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.573264122s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=54'63 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.939117432s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[10.15( v 60'57 (0'0,60'57] local-lis/les=57/58 n=0 ec=57/40 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=9.502945900s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=60'57 lcod 60'56 mlcod 0'0 unknown NOTIFY pruub 178.868820190s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[12.10( v 60'66 (0'0,60'66] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.573811531s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=60'64 lcod 60'65 mlcod 60'65 active pruub 180.939102173s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[12.10( v 60'66 (0'0,60'66] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.573048592s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=60'64 lcod 60'65 mlcod 0'0 unknown NOTIFY pruub 180.939102173s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.068120763Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1"
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[12.12( v 54'63 (0'0,54'63] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.572997093s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=54'63 lcod 0'0 mlcod 0'0 active pruub 180.939254761s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[10.14( v 60'57 (0'0,60'57] local-lis/les=57/58 n=0 ec=57/40 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=9.502441406s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=60'57 lcod 60'56 mlcod 60'56 active pruub 178.868698120s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[12.12( v 54'63 (0'0,54'63] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.572965622s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=54'63 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.939254761s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[10.14( v 60'57 (0'0,60'57] local-lis/les=57/58 n=0 ec=57/40 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=9.502368927s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=60'57 lcod 60'56 mlcod 0'0 unknown NOTIFY pruub 178.868698120s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[10.13( v 41'48 (0'0,41'48] local-lis/les=57/58 n=0 ec=57/40 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=9.502175331s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=41'48 lcod 0'0 mlcod 0'0 active pruub 178.868560791s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[10.13( v 41'48 (0'0,41'48] local-lis/les=57/58 n=0 ec=57/40 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=9.502145767s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=41'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 178.868560791s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[12.4( v 54'63 (0'0,54'63] local-lis/les=59/60 n=1 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.572593689s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=54'63 lcod 0'0 mlcod 0'0 active pruub 180.939270020s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[10.2( v 41'48 (0'0,41'48] local-lis/les=57/58 n=1 ec=57/40 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=9.501748085s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=41'48 lcod 0'0 mlcod 0'0 active pruub 178.868438721s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[12.4( v 54'63 (0'0,54'63] local-lis/les=59/60 n=1 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.572570801s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=54'63 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.939270020s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[10.2( v 41'48 (0'0,41'48] local-lis/les=57/58 n=1 ec=57/40 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=9.501722336s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=41'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 178.868438721s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[10.1( v 41'48 (0'0,41'48] local-lis/les=57/58 n=1 ec=57/40 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=9.500726700s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=41'48 lcod 0'0 mlcod 0'0 active pruub 178.867523193s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[10.1( v 41'48 (0'0,41'48] local-lis/les=57/58 n=1 ec=57/40 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=9.500667572s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=41'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 178.867523193s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[12.7( v 54'63 (0'0,54'63] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.572331429s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=54'63 lcod 0'0 mlcod 0'0 active pruub 180.939285278s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:00 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c74003cc0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[12.7( v 54'63 (0'0,54'63] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.572299004s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=54'63 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.939285278s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[12.6( v 54'63 (0'0,54'63] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.572061539s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=54'63 lcod 0'0 mlcod 0'0 active pruub 180.939331055s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.069173331Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=1.052379ms
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[12.9( v 54'63 (0'0,54'63] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.572139740s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=54'63 lcod 0'0 mlcod 0'0 active pruub 180.939422607s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[12.9( v 54'63 (0'0,54'63] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.572113991s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=54'63 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.939422607s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[12.6( v 54'63 (0'0,54'63] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.572025299s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=54'63 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.939331055s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[12.a( v 54'63 (0'0,54'63] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.571848869s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=54'63 lcod 0'0 mlcod 0'0 active pruub 180.939376831s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[10.f( v 41'48 (0'0,41'48] local-lis/les=57/58 n=0 ec=57/40 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=9.499870300s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=41'48 lcod 0'0 mlcod 0'0 active pruub 178.867401123s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[12.a( v 54'63 (0'0,54'63] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.571821213s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=54'63 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.939376831s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[12.8( v 54'63 (0'0,54'63] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.571750641s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=54'63 lcod 0'0 mlcod 0'0 active pruub 180.939361572s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[12.c( v 54'63 (0'0,54'63] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.571764946s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=54'63 lcod 0'0 mlcod 0'0 active pruub 180.939392090s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[10.f( v 41'48 (0'0,41'48] local-lis/les=57/58 n=0 ec=57/40 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=9.499800682s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=41'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 178.867401123s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[12.c( v 54'63 (0'0,54'63] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.571743965s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=54'63 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.939392090s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[12.8( v 54'63 (0'0,54'63] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.571713448s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=54'63 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.939361572s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[10.8( v 41'48 (0'0,41'48] local-lis/les=57/58 n=1 ec=57/40 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=9.496074677s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=41'48 lcod 0'0 mlcod 0'0 active pruub 178.864013672s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[12.b( v 54'63 (0'0,54'63] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.571512222s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=54'63 lcod 0'0 mlcod 0'0 active pruub 180.939468384s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[12.e( v 54'63 (0'0,54'63] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.571433067s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=54'63 lcod 0'0 mlcod 0'0 active pruub 180.939422607s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[10.8( v 41'48 (0'0,41'48] local-lis/les=57/58 n=1 ec=57/40 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=9.496047974s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=41'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 178.864013672s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[12.b( v 54'63 (0'0,54'63] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.571470261s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=54'63 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.939468384s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[12.e( v 54'63 (0'0,54'63] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.571410179s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=54'63 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.939422607s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[10.4( v 41'48 (0'0,41'48] local-lis/les=57/58 n=1 ec=57/40 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=9.495641708s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=41'48 lcod 0'0 mlcod 0'0 active pruub 178.863967896s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[10.3( v 60'57 (0'0,60'57] local-lis/les=57/58 n=1 ec=57/40 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=9.495607376s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=60'57 lcod 60'56 mlcod 60'56 active pruub 178.863967896s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[10.3( v 60'57 (0'0,60'57] local-lis/les=57/58 n=1 ec=57/40 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=9.495551109s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=60'57 lcod 60'56 mlcod 0'0 unknown NOTIFY pruub 178.863967896s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[12.2( v 54'63 (0'0,54'63] local-lis/les=59/60 n=1 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.571255684s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=54'63 lcod 0'0 mlcod 0'0 active pruub 180.939712524s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[10.5( v 41'48 (0'0,41'48] local-lis/les=57/58 n=1 ec=57/40 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=9.495456696s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=41'48 lcod 0'0 mlcod 0'0 active pruub 178.863937378s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[10.4( v 41'48 (0'0,41'48] local-lis/les=57/58 n=1 ec=57/40 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=9.495539665s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=41'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 178.863967896s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[10.5( v 41'48 (0'0,41'48] local-lis/les=57/58 n=1 ec=57/40 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=9.495429993s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=41'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 178.863937378s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[12.2( v 54'63 (0'0,54'63] local-lis/les=59/60 n=1 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.571220398s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=54'63 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.939712524s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[12.3( v 54'63 (0'0,54'63] local-lis/les=59/60 n=1 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.570979118s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=54'63 lcod 0'0 mlcod 0'0 active pruub 180.939666748s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[10.18( v 41'48 (0'0,41'48] local-lis/les=57/58 n=0 ec=57/40 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=9.495068550s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=41'48 lcod 0'0 mlcod 0'0 active pruub 178.863937378s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[12.3( v 54'63 (0'0,54'63] local-lis/les=59/60 n=1 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.570935249s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=54'63 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.939666748s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[10.18( v 41'48 (0'0,41'48] local-lis/les=57/58 n=0 ec=57/40 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=9.495039940s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=41'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 178.863937378s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[12.1e( v 54'63 (0'0,54'63] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.570801735s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=54'63 lcod 0'0 mlcod 0'0 active pruub 180.939758301s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[12.1e( v 54'63 (0'0,54'63] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.570766449s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=54'63 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.939758301s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[10.19( v 41'48 (0'0,41'48] local-lis/les=57/58 n=0 ec=57/40 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=9.494275093s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=41'48 lcod 0'0 mlcod 0'0 active pruub 178.863357544s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[10.19( v 41'48 (0'0,41'48] local-lis/les=57/58 n=0 ec=57/40 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=9.494227409s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=41'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 178.863357544s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[12.1c( v 54'63 (0'0,54'63] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.570515633s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=54'63 lcod 0'0 mlcod 0'0 active pruub 180.939758301s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[12.1a( v 60'66 (0'0,60'66] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.570495605s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=60'64 lcod 60'65 mlcod 60'65 active pruub 180.939758301s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[12.1c( v 54'63 (0'0,54'63] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.570487976s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=54'63 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.939758301s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[12.1a( v 60'66 (0'0,60'66] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.570455551s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=60'64 lcod 60'65 mlcod 0'0 unknown NOTIFY pruub 180.939758301s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[10.1e( v 41'48 (0'0,41'48] local-lis/les=57/58 n=0 ec=57/40 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=9.493800163s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=41'48 lcod 0'0 mlcod 0'0 active pruub 178.863250732s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[10.1e( v 41'48 (0'0,41'48] local-lis/les=57/58 n=0 ec=57/40 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=9.493755341s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=41'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 178.863250732s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[10.10( v 41'48 (0'0,41'48] local-lis/les=57/58 n=0 ec=57/40 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=9.493400574s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=41'48 lcod 0'0 mlcod 0'0 active pruub 178.863113403s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[12.19( v 54'63 (0'0,54'63] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.570162773s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=54'63 lcod 0'0 mlcod 0'0 active pruub 180.939849854s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[12.19( v 54'63 (0'0,54'63] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.570105553s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=54'63 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.939849854s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[12.18( v 54'63 (0'0,54'63] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.570035934s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=54'63 lcod 0'0 mlcod 0'0 active pruub 180.939834595s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[10.10( v 41'48 (0'0,41'48] local-lis/les=57/58 n=0 ec=57/40 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=9.493365288s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=41'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 178.863113403s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[10.11( v 41'48 (0'0,41'48] local-lis/les=57/58 n=0 ec=57/40 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=9.494015694s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=41'48 lcod 0'0 mlcod 0'0 active pruub 178.863830566s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[12.18( v 54'63 (0'0,54'63] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.569986343s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=54'63 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.939834595s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[10.11( v 41'48 (0'0,41'48] local-lis/les=57/58 n=0 ec=57/40 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=9.493994713s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=41'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 178.863830566s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[12.17( v 54'63 (0'0,54'63] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.569942474s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=54'63 lcod 0'0 mlcod 0'0 active pruub 180.939910889s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[12.17( v 54'63 (0'0,54'63] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.569923401s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=54'63 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.939910889s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[10.12( v 41'48 (0'0,41'48] local-lis/les=57/58 n=0 ec=57/40 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=9.489233971s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=41'48 lcod 0'0 mlcod 0'0 active pruub 178.859359741s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[12.1d( v 54'63 (0'0,54'63] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.569734573s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=54'63 lcod 0'0 mlcod 0'0 active pruub 180.939910889s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[12.1d( v 54'63 (0'0,54'63] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.569713593s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=54'63 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.939910889s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[10.1b( v 41'48 (0'0,41'48] local-lis/les=57/58 n=0 ec=57/40 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=9.492869377s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=41'48 lcod 0'0 mlcod 0'0 active pruub 178.863128662s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[10.1b( v 41'48 (0'0,41'48] local-lis/les=57/58 n=0 ec=57/40 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=9.492835045s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=41'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 178.863128662s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 61 pg[10.12( v 41'48 (0'0,41'48] local-lis/les=57/58 n=0 ec=57/40 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=9.488814354s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=41'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 178.859359741s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.074085692Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.075948801Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=1.864489ms
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.093196532Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.094906897Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=1.710086ms
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.119601386Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.121776924Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=2.179138ms
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.160803416Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.162655755Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=1.854029ms
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.176213427Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.176414443Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=203.065µs
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.18157781Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.181870608Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=298.668µs
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.184526799Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.189364208Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=4.836959ms
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.191909366Z level=info msg="Executing migration" id="Add encrypted dashboard json column"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.194581257Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=2.672301ms
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.19766766Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.197782203Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=115.403µs
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.199775196Z level=info msg="Executing migration" id="create quota table v1"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.201115802Z level=info msg="Migration successfully executed" id="create quota table v1" duration=1.340476ms
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.204205124Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.20516547Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=960.466µs
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.20668439Z level=info msg="Executing migration" id="Update quota table charset"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.206751792Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=68.352µs
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.209631859Z level=info msg="Executing migration" id="create plugin_setting table"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.21040757Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=775.301µs
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.213083921Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.213952454Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=869.023µs
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.216560154Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.219559184Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=2.99491ms
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.223245112Z level=info msg="Executing migration" id="Update plugin_setting table charset"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.223405717Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=165.395µs
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.226080168Z level=info msg="Executing migration" id="create session table"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.22728774Z level=info msg="Migration successfully executed" id="create session table" duration=1.207522ms
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.230194128Z level=info msg="Executing migration" id="Drop old table playlist table"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.230402763Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=203.625µs
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.233393053Z level=info msg="Executing migration" id="Drop old table playlist_item table"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.233851285Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=459.272µs
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.244090619Z level=info msg="Executing migration" id="create playlist table v2"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.246100502Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=2.012083ms
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.25012484Z level=info msg="Executing migration" id="create playlist item table v2"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.251789614Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=1.664494ms
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.255732939Z level=info msg="Executing migration" id="Update playlist table charset"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.255841422Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=110.213µs
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.257781804Z level=info msg="Executing migration" id="Update playlist_item table charset"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.257897637Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=116.873µs
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.26098989Z level=info msg="Executing migration" id="Add playlist column created_at"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.26625527Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=5.2648ms
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.269967849Z level=info msg="Executing migration" id="Add playlist column updated_at"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.275033675Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=5.067975ms
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.278024014Z level=info msg="Executing migration" id="drop preferences table v2"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.278306012Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=281.688µs
Feb  2 04:42:00 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:42:00 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:42:00 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:42:00.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.28121878Z level=info msg="Executing migration" id="drop preferences table v3"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.281425605Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=207.545µs
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.284759484Z level=info msg="Executing migration" id="create preferences table v3"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.28611628Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=1.356126ms
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.288849823Z level=info msg="Executing migration" id="Update preferences table charset"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.288953746Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=105.403µs
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.291328589Z level=info msg="Executing migration" id="Add column team_id in preferences"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.296339093Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=5.009534ms
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.298301366Z level=info msg="Executing migration" id="Update team_id column values in preferences"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.298695046Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=393.571µs
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.301994404Z level=info msg="Executing migration" id="Add column week_start in preferences"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.305638721Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=3.644817ms
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.307665995Z level=info msg="Executing migration" id="Add column preferences.json_data"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.311033385Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=3.36859ms
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.314695503Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.314822576Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=126.673µs
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.316880201Z level=info msg="Executing migration" id="Add preferences index org_id"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.317986231Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=1.10539ms
Feb  2 04:42:00 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:42:00 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:42:00 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:42:00.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.322240694Z level=info msg="Executing migration" id="Add preferences index user_id"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.323384325Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=1.145321ms
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.325736608Z level=info msg="Executing migration" id="create alert table v1"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.327112724Z level=info msg="Migration successfully executed" id="create alert table v1" duration=1.375446ms
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.330976928Z level=info msg="Executing migration" id="add index alert org_id & id "
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.332061377Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=1.083929ms
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.333906846Z level=info msg="Executing migration" id="add index alert state"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.33482574Z level=info msg="Migration successfully executed" id="add index alert state" duration=918.484µs
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.338878048Z level=info msg="Executing migration" id="add index alert dashboard_id"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.339761372Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=883.254µs
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.345097054Z level=info msg="Executing migration" id="Create alert_rule_tag table v1"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.345841024Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=743.46µs
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.348826504Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.349834321Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=1.010017ms
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.355027299Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.355891382Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=864.183µs
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.357969228Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.367869502Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=9.899634ms
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:00 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c58003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.370854392Z level=info msg="Executing migration" id="Create alert_rule_tag table v2"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.371889029Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=1.031787ms
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.373804311Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.374719845Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=913.794µs
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.376596595Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.376926924Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=330.559µs
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.378735282Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.379293497Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=558.205µs
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.381196798Z level=info msg="Executing migration" id="create alert_notification table v1"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.382097472Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=921.915µs
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.384172837Z level=info msg="Executing migration" id="Add column is_default"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.388201165Z level=info msg="Migration successfully executed" id="Add column is_default" duration=4.027038ms
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.390085185Z level=info msg="Executing migration" id="Add column frequency"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.393854986Z level=info msg="Migration successfully executed" id="Add column frequency" duration=3.770001ms
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.395805098Z level=info msg="Executing migration" id="Add column send_reminder"
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 10.17 scrub starts
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.400155194Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=4.349036ms
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.402511907Z level=info msg="Executing migration" id="Add column disable_resolve_message"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.406701989Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=4.178011ms
Feb  2 04:42:00 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 10.17 scrub ok
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.411298481Z level=info msg="Executing migration" id="add index alert_notification org_id & name"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.412348479Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=1.052828ms
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.414753563Z level=info msg="Executing migration" id="Update alert table charset"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.414821345Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=72.362µs
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.419386737Z level=info msg="Executing migration" id="Update alert_notification table charset"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.419452569Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=66.772µs
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.423695252Z level=info msg="Executing migration" id="create notification_journal table v1"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.424390771Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=700.509µs
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.426597589Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.427276428Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=678.989µs
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.429682852Z level=info msg="Executing migration" id="drop alert_notification_journal"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.43035212Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=669.928µs
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.43223518Z level=info msg="Executing migration" id="create alert_notification_state table v1"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.432957299Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=718.929µs
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.435674802Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.43636533Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=690.608µs
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.440671805Z level=info msg="Executing migration" id="Add for to alert table"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.443430669Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=2.758874ms
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.445023761Z level=info msg="Executing migration" id="Add column uid in alert_notification"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.44760506Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=2.581119ms
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.449428479Z level=info msg="Executing migration" id="Update uid column values in alert_notification"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.449631624Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=202.995µs
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.452002007Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.452642855Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=640.497µs
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.454416082Z level=info msg="Executing migration" id="Remove unique index org_id_name"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.455056509Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=640.067µs
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.457650878Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.460292749Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=2.640831ms
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.462479867Z level=info msg="Executing migration" id="alter alert.settings to mediumtext"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.46257554Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=96.013µs
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.464504281Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.465149528Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=645.667µs
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.466819733Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.467536162Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=715.569µs
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.470659535Z level=info msg="Executing migration" id="Drop old annotation table v4"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.470786989Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=126.434µs
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.472808313Z level=info msg="Executing migration" id="create annotation table v5"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.473626325Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=817.912µs
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.475410732Z level=info msg="Executing migration" id="add index annotation 0 v3"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.476139792Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=728.59µs
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.477785146Z level=info msg="Executing migration" id="add index annotation 1 v3"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.478393602Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=608.557µs
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.483726134Z level=info msg="Executing migration" id="add index annotation 2 v3"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.484368201Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=638.967µs
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.487411432Z level=info msg="Executing migration" id="add index annotation 3 v3"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.488133472Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=724.81µs
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.52328118Z level=info msg="Executing migration" id="add index annotation 4 v3"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.524249825Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=974.846µs
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.538631309Z level=info msg="Executing migration" id="Update annotation table charset"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.538690381Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=101.323µs
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.552724655Z level=info msg="Executing migration" id="Add column region_id to annotation table"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.555620763Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=2.896658ms
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.60721913Z level=info msg="Executing migration" id="Drop category_id index"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.609146431Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=1.932581ms
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.616282032Z level=info msg="Executing migration" id="Add column tags to annotation table"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.623158065Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=6.876423ms
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.638537756Z level=info msg="Executing migration" id="Create annotation_tag table v2"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.639289126Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=757.341µs
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.672882312Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.674777183Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=1.901421ms
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.737554308Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.739609253Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=2.058705ms
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.750990687Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.769994444Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=19.005567ms
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.775557202Z level=info msg="Executing migration" id="Create annotation_tag table v3"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.777032102Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=1.47365ms
Feb  2 04:42:00 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:00 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]: dispatch
Feb  2 04:42:00 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Feb  2 04:42:00 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Feb  2 04:42:00 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Feb  2 04:42:00 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Feb  2 04:42:00 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:00 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:00 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:00 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:00 np0005604790 ceph-mon[74489]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Feb  2 04:42:00 np0005604790 ceph-mon[74489]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Feb  2 04:42:00 np0005604790 ceph-mon[74489]: Deploying daemon keepalived.rgw.default.compute-2.tapsuz on compute-2
Feb  2 04:42:00 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]': finished
Feb  2 04:42:00 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Feb  2 04:42:00 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Feb  2 04:42:00 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Feb  2 04:42:00 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.794761055Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.795619078Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=860.543µs
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.812272742Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.812529839Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=256.617µs
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.821510239Z level=info msg="Executing migration" id="drop table annotation_tag_v2"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.822275089Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=795.911µs
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.8294301Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.829712938Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=284.408µs
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.864413314Z level=info msg="Executing migration" id="Add created time to annotation table"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.871073152Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=6.655187ms
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.893423358Z level=info msg="Executing migration" id="Add updated time to annotation table"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.89764219Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=4.213932ms
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.919075303Z level=info msg="Executing migration" id="Add index for created in annotation table"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.920417399Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=1.344535ms
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.940517055Z level=info msg="Executing migration" id="Add index for updated in annotation table"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.94218815Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=1.669604ms
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.964437333Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.965032719Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=597.126µs
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.970763672Z level=info msg="Executing migration" id="Add epoch_end column"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.977641336Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=6.876184ms
Feb  2 04:42:00 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.985736262Z level=info msg="Executing migration" id="Add index for epoch_end"
Feb  2 04:42:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:00.987609612Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=1.8744ms
Feb  2 04:42:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:01.008609682Z level=info msg="Executing migration" id="Make epoch_end the same as epoch"
Feb  2 04:42:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:01.009108336Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=503.573µs
Feb  2 04:42:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:01.019810141Z level=info msg="Executing migration" id="Move region to single row"
Feb  2 04:42:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:01.020649974Z level=info msg="Migration successfully executed" id="Move region to single row" duration=843.183µs
Feb  2 04:42:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:01.048386164Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table"
Feb  2 04:42:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:01.050165011Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=1.739096ms
Feb  2 04:42:01 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Feb  2 04:42:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:01.055787371Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table"
Feb  2 04:42:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:01.058127634Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=2.338963ms
Feb  2 04:42:01 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Feb  2 04:42:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:01.064019291Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table"
Feb  2 04:42:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:01.065734337Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=1.731507ms
Feb  2 04:42:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:01.083578943Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table"
Feb  2 04:42:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:01.08573306Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=2.153777ms
Feb  2 04:42:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:01.098633265Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table"
Feb  2 04:42:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:01.100364111Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=1.738236ms
Feb  2 04:42:01 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 62 pg[8.14( v 37'12 (0'0,37'12] local-lis/les=61/62 n=0 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=61) [1] r=0 lpr=61 pi=[55,61)/1 crt=37'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:42:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:01.111684363Z level=info msg="Executing migration" id="Add index for alert_id on annotation table"
Feb  2 04:42:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:01.113691937Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=2.015864ms
Feb  2 04:42:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:01.126723714Z level=info msg="Executing migration" id="Increase tags column to length 4096"
Feb  2 04:42:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:01.126837207Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=115.503µs
Feb  2 04:42:01 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 62 pg[11.12( empty local-lis/les=61/62 n=0 ec=59/42 lis/c=59/59 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:42:01 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 62 pg[8.17( v 37'12 (0'0,37'12] local-lis/les=61/62 n=0 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=61) [1] r=0 lpr=61 pi=[55,61)/1 crt=37'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:42:01 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 62 pg[11.1( empty local-lis/les=61/62 n=0 ec=59/42 lis/c=59/59 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:42:01 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 62 pg[11.14( empty local-lis/les=61/62 n=0 ec=59/42 lis/c=59/59 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:42:01 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 62 pg[8.8( v 37'12 (0'0,37'12] local-lis/les=61/62 n=0 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=61) [1] r=0 lpr=61 pi=[55,61)/1 crt=37'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:42:01 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 62 pg[11.f( empty local-lis/les=61/62 n=0 ec=59/42 lis/c=59/59 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:42:01 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 62 pg[11.5( v 60'1 lc 0'0 (0'0,60'1] local-lis/les=61/62 n=1 ec=59/42 lis/c=59/59 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[59,61)/1 crt=60'1 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:42:01 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 62 pg[8.4( v 37'12 (0'0,37'12] local-lis/les=61/62 n=1 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=61) [1] r=0 lpr=61 pi=[55,61)/1 crt=37'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:42:01 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 62 pg[11.7( empty local-lis/les=61/62 n=0 ec=59/42 lis/c=59/59 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:42:01 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 62 pg[8.1b( v 37'12 (0'0,37'12] local-lis/les=61/62 n=0 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=61) [1] r=0 lpr=61 pi=[55,61)/1 crt=37'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:42:01 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 62 pg[11.4( empty local-lis/les=61/62 n=0 ec=59/42 lis/c=59/59 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:42:01 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 62 pg[11.1b( empty local-lis/les=61/62 n=0 ec=59/42 lis/c=59/59 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:42:01 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 62 pg[8.18( v 37'12 (0'0,37'12] local-lis/les=61/62 n=0 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=61) [1] r=0 lpr=61 pi=[55,61)/1 crt=37'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:42:01 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 62 pg[11.1d( empty local-lis/les=61/62 n=0 ec=59/42 lis/c=59/59 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:42:01 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 62 pg[11.1e( empty local-lis/les=61/62 n=0 ec=59/42 lis/c=59/59 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:42:01 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 62 pg[11.1c( empty local-lis/les=61/62 n=0 ec=59/42 lis/c=59/59 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:42:01 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 62 pg[8.12( v 37'12 (0'0,37'12] local-lis/les=61/62 n=0 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=61) [1] r=0 lpr=61 pi=[55,61)/1 crt=37'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:42:01 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 62 pg[8.10( v 37'12 lc 37'2 (0'0,37'12] local-lis/les=61/62 n=0 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=61) [1] r=0 lpr=61 pi=[55,61)/1 crt=37'12 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:42:01 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 62 pg[8.19( v 37'12 lc 0'0 (0'0,37'12] local-lis/les=61/62 n=0 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=61) [1] r=0 lpr=61 pi=[55,61)/1 crt=37'12 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:42:01 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 62 pg[11.1a( empty local-lis/les=61/62 n=0 ec=59/42 lis/c=59/59 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:42:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:01.136222928Z level=info msg="Executing migration" id="create test_data table"
Feb  2 04:42:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:01.138335254Z level=info msg="Migration successfully executed" id="create test_data table" duration=2.111676ms
Feb  2 04:42:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:01.155373059Z level=info msg="Executing migration" id="create dashboard_version table v1"
Feb  2 04:42:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:01.157367882Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=2.001843ms
Feb  2 04:42:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:01.167474562Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id"
Feb  2 04:42:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:01.169172657Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=1.698165ms
Feb  2 04:42:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:01.175475616Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version"
Feb  2 04:42:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:01.177118449Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=1.649744ms
Feb  2 04:42:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:01.183370226Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0"
Feb  2 04:42:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:01.183593162Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=223.246µs
Feb  2 04:42:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:01.185303658Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1"
Feb  2 04:42:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:01.185727059Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=422.631µs
Feb  2 04:42:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:01.195251153Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1"
Feb  2 04:42:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:01.195326095Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=75.702µs
Feb  2 04:42:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:01.203585916Z level=info msg="Executing migration" id="create team table"
Feb  2 04:42:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:01.204410018Z level=info msg="Migration successfully executed" id="create team table" duration=824.522µs
Feb  2 04:42:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:01.207585973Z level=info msg="Executing migration" id="add index team.org_id"
Feb  2 04:42:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:01.208642581Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=1.056498ms
Feb  2 04:42:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:01.212074872Z level=info msg="Executing migration" id="add unique index team_org_id_name"
Feb  2 04:42:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:01.213214143Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=1.140271ms
Feb  2 04:42:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:01.21762622Z level=info msg="Executing migration" id="Add column uid in team"
Feb  2 04:42:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:01.222760618Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=5.133417ms
Feb  2 04:42:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:01.226511438Z level=info msg="Executing migration" id="Update uid column values in team"
Feb  2 04:42:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:01.226697433Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=213.315µs
Feb  2 04:42:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:01.229871877Z level=info msg="Executing migration" id="Add unique index team_org_id_uid"
Feb  2 04:42:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:01.230915515Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=1.043788ms
Feb  2 04:42:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:01.232742414Z level=info msg="Executing migration" id="create team member table"
Feb  2 04:42:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:01.233563096Z level=info msg="Migration successfully executed" id="create team member table" duration=820.362µs
Feb  2 04:42:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:01.236108724Z level=info msg="Executing migration" id="add index team_member.org_id"
Feb  2 04:42:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:01.237062089Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=952.635µs
Feb  2 04:42:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:01.24010342Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id"
Feb  2 04:42:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:01.241100067Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=996.407µs
Feb  2 04:42:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:01.247277032Z level=info msg="Executing migration" id="add index team_member.team_id"
Feb  2 04:42:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:01.248434013Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=1.157291ms
Feb  2 04:42:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:01.2509645Z level=info msg="Executing migration" id="Add column email to team table"
Feb  2 04:42:01 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 04:42:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:01.259286062Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=8.279651ms
Feb  2 04:42:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:01.26218063Z level=info msg="Executing migration" id="Add column external to team_member table"
Feb  2 04:42:01 np0005604790 ceph-mon[74489]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Feb  2 04:42:01 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:42:01.269302) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb  2 04:42:01 np0005604790 ceph-mon[74489]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Feb  2 04:42:01 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770025321269417, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7093, "num_deletes": 255, "total_data_size": 12931611, "memory_usage": 13448368, "flush_reason": "Manual Compaction"}
Feb  2 04:42:01 np0005604790 ceph-mon[74489]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Feb  2 04:42:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:01.270506762Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=8.300992ms
Feb  2 04:42:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:01.27532304Z level=info msg="Executing migration" id="Add column permission to team_member table"
Feb  2 04:42:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:01.2786767Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=3.35101ms
Feb  2 04:42:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:01.280676703Z level=info msg="Executing migration" id="create dashboard acl table"
Feb  2 04:42:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:01.281413353Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=736.02µs
Feb  2 04:42:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:01.284746532Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id"
Feb  2 04:42:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:01.285514862Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=767.89µs
Feb  2 04:42:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:01.288182413Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id"
Feb  2 04:42:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:01.288980005Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=797.362µs
Feb  2 04:42:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:01.291578224Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id"
Feb  2 04:42:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:01.292341034Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=762.34µs
Feb  2 04:42:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:01.38134427Z level=info msg="Executing migration" id="add index dashboard_acl_user_id"
Feb  2 04:42:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:01.383472747Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=2.134347ms
Feb  2 04:42:01 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770025321517729, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 11488606, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 142, "largest_seqno": 7230, "table_properties": {"data_size": 11463016, "index_size": 16196, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8325, "raw_key_size": 80650, "raw_average_key_size": 24, "raw_value_size": 11399397, "raw_average_value_size": 3431, "num_data_blocks": 712, "num_entries": 3322, "num_filter_entries": 3322, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770025065, "oldest_key_time": 1770025065, "file_creation_time": 1770025321, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "07840aea-639a-4cd3-a598-1774a042b57b", "db_session_id": "W2PO4QU95YGVZQBG6TZ2", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Feb  2 04:42:01 np0005604790 ceph-mon[74489]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 248526 microseconds, and 24910 cpu microseconds.
Feb  2 04:42:01 np0005604790 ceph-mgr[74785]: [progress INFO root] Writing back 23 completed events
Feb  2 04:42:01 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Feb  2 04:42:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:01.611544273Z level=info msg="Executing migration" id="add index dashboard_acl_team_id"
Feb  2 04:42:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:01.613159266Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=1.618793ms
Feb  2 04:42:01 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:42:01.517823) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 11488606 bytes OK
Feb  2 04:42:01 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:42:01.517857) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Feb  2 04:42:01 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:42:01.616365) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Feb  2 04:42:01 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:42:01.617030) EVENT_LOG_v1 {"time_micros": 1770025321617013, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Feb  2 04:42:01 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:42:01.617075) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Feb  2 04:42:01 np0005604790 ceph-mon[74489]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 12899549, prev total WAL file size 12910188, number of live WAL files 2.
Feb  2 04:42:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:01.623691507Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role"
Feb  2 04:42:01 np0005604790 ceph-mon[74489]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 04:42:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:01.625347532Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=1.661104ms
Feb  2 04:42:01 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:42:01.625209) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323535' seq:0, type:0; will stop at (end)
Feb  2 04:42:01 np0005604790 ceph-mon[74489]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Feb  2 04:42:01 np0005604790 ceph-mon[74489]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(10MB) 13(57KB) 8(1944B)]
Feb  2 04:42:01 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770025321625356, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 11549040, "oldest_snapshot_seqno": -1}
Feb  2 04:42:01 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v53: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 3.6 KiB/s rd, 3 op/s
Feb  2 04:42:01 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0)
Feb  2 04:42:01 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Feb  2 04:42:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:01.752890466Z level=info msg="Executing migration" id="add index dashboard_permission"
Feb  2 04:42:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:01.754085227Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=1.198182ms
Feb  2 04:42:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:01 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c64004050 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:42:01 np0005604790 ceph-mon[74489]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 3140 keys, 11531016 bytes, temperature: kUnknown
Feb  2 04:42:01 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770025321809623, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 11531016, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11505751, "index_size": 16324, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7877, "raw_key_size": 79524, "raw_average_key_size": 25, "raw_value_size": 11443738, "raw_average_value_size": 3644, "num_data_blocks": 718, "num_entries": 3140, "num_filter_entries": 3140, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770025063, "oldest_key_time": 0, "file_creation_time": 1770025321, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "07840aea-639a-4cd3-a598-1774a042b57b", "db_session_id": "W2PO4QU95YGVZQBG6TZ2", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Feb  2 04:42:01 np0005604790 ceph-mon[74489]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 04:42:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:01.81074546Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table"
Feb  2 04:42:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:01.811802818Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=1.055748ms
Feb  2 04:42:01 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:01 np0005604790 ceph-mgr[74785]: [progress INFO root] Completed event e978b83a-1608-44df-93ee-50356cbb9aef (Global Recovery Event) in 11 seconds
Feb  2 04:42:01 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:42:01.810003) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 11531016 bytes
Feb  2 04:42:01 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:42:01.820634) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 62.6 rd, 62.5 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(11.0, 0.0 +0.0 blob) out(11.0 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3431, records dropped: 291 output_compression: NoCompression
Feb  2 04:42:01 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:42:01.820686) EVENT_LOG_v1 {"time_micros": 1770025321820667, "job": 4, "event": "compaction_finished", "compaction_time_micros": 184383, "compaction_time_cpu_micros": 26881, "output_level": 6, "num_output_files": 1, "total_output_size": 11531016, "num_input_records": 3431, "num_output_records": 3140, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb  2 04:42:01 np0005604790 ceph-mon[74489]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 04:42:01 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770025321822576, "job": 4, "event": "table_file_deletion", "file_number": 19}
Feb  2 04:42:01 np0005604790 ceph-mon[74489]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 04:42:01 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770025321822671, "job": 4, "event": "table_file_deletion", "file_number": 13}
Feb  2 04:42:01 np0005604790 ceph-mon[74489]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 04:42:01 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770025321822726, "job": 4, "event": "table_file_deletion", "file_number": 8}
Feb  2 04:42:01 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:42:01.625049) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 04:42:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:01.824061945Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders"
Feb  2 04:42:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:01.82462164Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=556.275µs
Feb  2 04:42:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:01.830331062Z level=info msg="Executing migration" id="create tag table"
Feb  2 04:42:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:01.831454292Z level=info msg="Migration successfully executed" id="create tag table" duration=1.12276ms
Feb  2 04:42:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:01.835279164Z level=info msg="Executing migration" id="add index tag.key_value"
Feb  2 04:42:01 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Feb  2 04:42:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:01.836378924Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=1.09939ms
Feb  2 04:42:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:01.841381987Z level=info msg="Executing migration" id="create login attempt table"
Feb  2 04:42:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:01.842262791Z level=info msg="Migration successfully executed" id="create login attempt table" duration=880.984µs
Feb  2 04:42:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:01.848898198Z level=info msg="Executing migration" id="add index login_attempt.username"
Feb  2 04:42:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:01.850288425Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=1.392827ms
Feb  2 04:42:01 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Feb  2 04:42:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:01.917650683Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1"
Feb  2 04:42:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:01.918816194Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=1.166651ms
Feb  2 04:42:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:01.925063561Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1"
Feb  2 04:42:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:01.936826245Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=11.758383ms
Feb  2 04:42:01 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:01.982634057Z level=info msg="Executing migration" id="create login_attempt v2"
Feb  2 04:42:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:01.983769277Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=1.13866ms
Feb  2 04:42:01 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Feb  2 04:42:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:01.989161821Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2"
Feb  2 04:42:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:01.99023314Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=1.071479ms
Feb  2 04:42:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:01.999457126Z level=info msg="Executing migration" id="copy login_attempt v1 to v2"
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.00071206Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=1.256673ms
Feb  2 04:42:02 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.0040962Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty"
Feb  2 04:42:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.005647221Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=1.551671ms
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.008940869Z level=info msg="Executing migration" id="create user auth table"
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.010748077Z level=info msg="Migration successfully executed" id="create user auth table" duration=1.805488ms
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.013251354Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1"
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.015249318Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=1.997303ms
Feb  2 04:42:02 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.017272871Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190"
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.017378244Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=105.963µs
Feb  2 04:42:02 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Feb  2 04:42:02 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Feb  2 04:42:02 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Feb  2 04:42:02 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Feb  2 04:42:02 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.rgw.default.compute-0.pxmjnp on compute-0
Feb  2 04:42:02 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.rgw.default.compute-0.pxmjnp on compute-0
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.021813243Z level=info msg="Executing migration" id="Add OAuth access token to user_auth"
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.029875208Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=8.061355ms
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.032262552Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth"
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.040598824Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=8.335443ms
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:02 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c64004050 fd 47 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.101260103Z level=info msg="Executing migration" id="Add OAuth token type to user_auth"
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.110935951Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=9.677588ms
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.114854966Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth"
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.12511493Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=10.257793ms
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.194049939Z level=info msg="Executing migration" id="Add index to user_id column in user_auth"
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.19598556Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=1.940462ms
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.19860684Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth"
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.206398118Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=7.790758ms
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.209590274Z level=info msg="Executing migration" id="create server_lock table"
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.210939099Z level=info msg="Migration successfully executed" id="create server_lock table" duration=1.344696ms
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.215274465Z level=info msg="Executing migration" id="add index server_lock.operation_uid"
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.217370501Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=2.096536ms
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.221307836Z level=info msg="Executing migration" id="create user auth token table"
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.222841967Z level=info msg="Migration successfully executed" id="create user auth token table" duration=1.533921ms
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.226786562Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token"
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.228316673Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=1.529231ms
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.232020532Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token"
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.233607684Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=1.587682ms
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.236682126Z level=info msg="Executing migration" id="add index user_auth_token.user_id"
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.238635389Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=1.950992ms
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.241913256Z level=info msg="Executing migration" id="Add revoked_at to the user auth token"
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.250599228Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=8.668101ms
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.254298447Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at"
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.255969101Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=1.670564ms
Feb  2 04:42:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Feb  2 04:42:02 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:42:02 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:42:02 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:42:02.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:42:02 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:42:02 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:42:02 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:42:02.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
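
The two request/response pairs above are anonymous HEAD / probes from 192.168.122.102 and 192.168.122.100, each answered 200 in about a millisecond; that pattern is the signature of load-balancer health checks against the RGW frontend rather than client traffic. The probe can be reproduced like this (the frontend address and port are assumptions; substitute the beast endpoint actually configured):

    import http.client

    # Same anonymous probe as in the beast access log: "HEAD / HTTP/1.0" 200 0.
    conn = http.client.HTTPConnection("192.168.122.100", 8080, timeout=2)
    conn.request("HEAD", "/")
    print(conn.getresponse().status)  # 200 from a healthy radosgw
    conn.close()
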
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.326419302Z level=info msg="Executing migration" id="create cache_data table"
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.328586499Z level=info msg="Migration successfully executed" id="create cache_data table" duration=2.168338ms
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.33197346Z level=info msg="Executing migration" id="add unique index cache_data.cache_key"
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.333874191Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=1.90033ms
Feb  2 04:42:02 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Feb  2 04:42:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.33835487Z level=info msg="Executing migration" id="create short_url table v1"
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.340113017Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=1.757377ms
Feb  2 04:42:02 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
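
The finished audit entry and the new osdmap epoch belong together: the mgr (mgr.compute-0.djvyfo) raised pgp_num_actual on pool default.rgw.log to 3, and the leader mon committed the change as osdmap e63 (3 OSDs total, all up and in). The same mon command can be issued by hand; a sketch, assuming an admin keyring on the host:

    import json, subprocess

    # CLI equivalent of the audited mon command.
    subprocess.run(
        ["ceph", "osd", "pool", "set", "default.rgw.log", "pgp_num_actual", "3"],
        check=True,
    )

    # The raw command JSON exactly as it appears in the audit line.
    print(json.dumps({"prefix": "osd pool set", "pool": "default.rgw.log",
                      "var": "pgp_num_actual", "val": "3"}))
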
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.350801762Z level=info msg="Executing migration" id="add index short_url.org_id-uid"
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.353244287Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=2.442925ms
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:02 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c700027d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
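
Both ganesha.nfsd lines above report svc_vc_recv discarding a TCP connection that delivered no valid RPC record-marking header (the bare "%" appears verbatim in the daemon's output; the length value did not render). This is the trace typically left by a plain TCP health probe, such as an ingress haproxy check, opening the NFS port and closing it again, though the log does not identify the peer. A probe of that kind is just:

    import socket

    def tcp_probe(host, port=2049, timeout=1.0):
        """Open and immediately close a TCP connection, haproxy-check style.

        ganesha's RPC layer sees no record-marking header and logs
        svc_vc_recv ... (will set dead) before dropping the transport.
        """
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False
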
Feb  2 04:42:02 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Feb  2 04:42:02 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.523237043Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint"
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.523373337Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=140.084µs
Feb  2 04:42:02 np0005604790 podman[99588]: 2026-02-02 09:42:02.464819115 +0000 UTC m=+0.022120511 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.674193752Z level=info msg="Executing migration" id="delete alert_definition table"
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.675371644Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=1.203012ms
Feb  2 04:42:02 np0005604790 podman[99588]: 2026-02-02 09:42:02.678140398 +0000 UTC m=+0.235441764 container create 8bb0ff00238b89a670cc3a13bd0662ebf49dcf4a08c02b5abe508c8ebeb453b5 (image=quay.io/ceph/keepalived:2.2.4, name=angry_moser, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.28.2, io.openshift.expose-services=, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=Ceph keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, description=keepalived for Ceph, vcs-type=git, com.redhat.component=keepalived-container, name=keepalived, version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, release=1793, build-date=2023-02-22T09:23:20, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.680425959Z level=info msg="Executing migration" id="recreate alert_definition table"
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.681665792Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=1.240813ms
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.686709356Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns"
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.687876598Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=1.167112ms
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.690623501Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns"
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.69170401Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=1.080339ms
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.694860994Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql"
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.694946606Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=87.222µs
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.697659319Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns"
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.698873801Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=1.212493ms
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.701621704Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns"
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.702763745Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=1.142671ms
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.705738934Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns"
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.706861664Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=1.12249ms
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.709190356Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns"
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.71006896Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=878.024µs
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.712092304Z level=info msg="Executing migration" id="Add column paused in alert_definition"
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.716100301Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=4.007557ms
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.718360221Z level=info msg="Executing migration" id="drop alert_definition table"
Feb  2 04:42:02 np0005604790 systemd[1]: Started libpod-conmon-8bb0ff00238b89a670cc3a13bd0662ebf49dcf4a08c02b5abe508c8ebeb453b5.scope.
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.719112881Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=752.58µs
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.720927859Z level=info msg="Executing migration" id="delete alert_definition_version table"
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.721041532Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=114.353µs
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.723395035Z level=info msg="Executing migration" id="recreate alert_definition_version table"
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.724659219Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=1.264194ms
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.727639729Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns"
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.728739858Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=1.0996ms
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.732319143Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns"
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.733361591Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=1.042048ms
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.735823167Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql"
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.735898949Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=76.942µs
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.738977201Z level=info msg="Executing migration" id="drop alert_definition_version table"
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.739992268Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=1.014497ms
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.74305037Z level=info msg="Executing migration" id="create alert_instance table"
Feb  2 04:42:02 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.744288333Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=1.236903ms
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.751322161Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns"
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.753279133Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=1.955912ms
Feb  2 04:42:02 np0005604790 podman[99588]: 2026-02-02 09:42:02.75691266 +0000 UTC m=+0.314214076 container init 8bb0ff00238b89a670cc3a13bd0662ebf49dcf4a08c02b5abe508c8ebeb453b5 (image=quay.io/ceph/keepalived:2.2.4, name=angry_moser, vendor=Red Hat, Inc., description=keepalived for Ceph, distribution-scope=public, version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph., release=1793, io.buildah.version=1.28.2, io.openshift.expose-services=, build-date=2023-02-22T09:23:20, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.component=keepalived-container, io.openshift.tags=Ceph keepalived, architecture=x86_64, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9)
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.758634506Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns"
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.759703454Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=1.068928ms
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.76329882Z level=info msg="Executing migration" id="add column current_state_end to alert_instance"
Feb  2 04:42:02 np0005604790 podman[99588]: 2026-02-02 09:42:02.764784 +0000 UTC m=+0.322085386 container start 8bb0ff00238b89a670cc3a13bd0662ebf49dcf4a08c02b5abe508c8ebeb453b5 (image=quay.io/ceph/keepalived:2.2.4, name=angry_moser, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, release=1793, io.openshift.expose-services=, vcs-type=git, version=2.2.4, vendor=Red Hat, Inc., description=keepalived for Ceph, distribution-scope=public, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, architecture=x86_64)
Feb  2 04:42:02 np0005604790 angry_moser[99604]: 0 0
Feb  2 04:42:02 np0005604790 systemd[1]: libpod-8bb0ff00238b89a670cc3a13bd0662ebf49dcf4a08c02b5abe508c8ebeb453b5.scope: Deactivated successfully.
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.769391833Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=6.092353ms
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.772216208Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance"
Feb  2 04:42:02 np0005604790 podman[99588]: 2026-02-02 09:42:02.772609539 +0000 UTC m=+0.329910965 container attach 8bb0ff00238b89a670cc3a13bd0662ebf49dcf4a08c02b5abe508c8ebeb453b5 (image=quay.io/ceph/keepalived:2.2.4, name=angry_moser, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, version=2.2.4, distribution-scope=public, io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., description=keepalived for Ceph, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., release=1793, io.openshift.expose-services=, vcs-type=git, com.redhat.component=keepalived-container, name=keepalived)
Feb  2 04:42:02 np0005604790 podman[99588]: 2026-02-02 09:42:02.773259826 +0000 UTC m=+0.330561212 container died 8bb0ff00238b89a670cc3a13bd0662ebf49dcf4a08c02b5abe508c8ebeb453b5 (image=quay.io/ceph/keepalived:2.2.4, name=angry_moser, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=2.2.4, description=keepalived for Ceph, build-date=2023-02-22T09:23:20, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, com.redhat.component=keepalived-container, io.openshift.tags=Ceph keepalived, io.k8s.display-name=Keepalived on RHEL 9, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, release=1793, distribution-scope=public, io.openshift.expose-services=)
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.7741382Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=1.894401ms
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.777353145Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance"
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.779097122Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=1.744537ms
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.784443475Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance"
Feb  2 04:42:02 np0005604790 systemd[1]: var-lib-containers-storage-overlay-2e9863e25e20aa0eac11e7a50cbf2a9934b0988ec7bbbd1143603435da5a581a-merged.mount: Deactivated successfully.
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.822278834Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=37.830809ms
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.826580709Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance"
Feb  2 04:42:02 np0005604790 podman[99588]: 2026-02-02 09:42:02.827532624 +0000 UTC m=+0.384833990 container remove 8bb0ff00238b89a670cc3a13bd0662ebf49dcf4a08c02b5abe508c8ebeb453b5 (image=quay.io/ceph/keepalived:2.2.4, name=angry_moser, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, architecture=x86_64, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, vendor=Red Hat, Inc., version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Feb  2 04:42:02 np0005604790 systemd[1]: libpod-conmon-8bb0ff00238b89a670cc3a13bd0662ebf49dcf4a08c02b5abe508c8ebeb453b5.scope: Deactivated successfully.
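
The podman create -> init -> start -> attach -> died -> remove sequence for angry_moser spans roughly 0.3 s: cephadm runs the freshly pulled keepalived:2.2.4 image once as a throwaway container (its only output is the "0 0" line) before installing the real keepalived.rgw.default daemon, and systemd tears down the conmon/libcrun scopes as soon as it exits. The log does not show the command run inside the image; a one-shot invocation of the same shape would be:

    import subprocess

    # Short-lived container, created and removed like angry_moser above.
    # The command is an assumption; cephadm's actual probe is not logged.
    result = subprocess.run(
        ["podman", "run", "--rm", "quay.io/ceph/keepalived:2.2.4",
         "keepalived", "--version"],
        capture_output=True, text=True,
    )
    print(result.returncode, result.stderr.strip())
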
Feb  2 04:42:02 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:02 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:02 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:02 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:02 np0005604790 ceph-mon[74489]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Feb  2 04:42:02 np0005604790 ceph-mon[74489]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Feb  2 04:42:02 np0005604790 ceph-mon[74489]: Deploying daemon keepalived.rgw.default.compute-0.pxmjnp on compute-0
Feb  2 04:42:02 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.851637278Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=25.055379ms
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.856298002Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance"
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.857443083Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=1.145451ms
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.861294996Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance"
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.862640051Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=1.345916ms
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.864650685Z level=info msg="Executing migration" id="add current_reason column related to current_state"
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.869765682Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=5.113206ms
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.873703767Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance"
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.877697403Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=3.993426ms
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.881054453Z level=info msg="Executing migration" id="create alert_rule table"
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.882045459Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=991.026µs
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.884636499Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns"
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.885554533Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=917.695µs
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.887424313Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns"
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.888253245Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=829.412µs
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.894001758Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns"
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.895056907Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=1.061559ms
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.897196574Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql"
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.897267926Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=71.392µs
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.900704957Z level=info msg="Executing migration" id="add column for to alert_rule"
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.905626209Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=4.905611ms
Feb  2 04:42:02 np0005604790 systemd[1]: Reloading.
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.909992775Z level=info msg="Executing migration" id="add column annotations to alert_rule"
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.916357135Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=6.3617ms
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.918588594Z level=info msg="Executing migration" id="add column labels to alert_rule"
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.923105814Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=4.515581ms
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.925785176Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns"
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.926652149Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=888.754µs
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.929215567Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns"
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.930100801Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=885.184µs
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.932776752Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule"
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.936874381Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=4.097189ms
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.939588514Z level=info msg="Executing migration" id="add panel_id column to alert_rule"
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.943721144Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=4.12847ms
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.948170583Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns"
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.949208161Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=1.039198ms
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.952124438Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule"
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.95817467Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=6.049852ms
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.960146693Z level=info msg="Executing migration" id="add is_paused column to alert_rule table"
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.96754494Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=7.396937ms
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.96942143Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table"
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.969529683Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=109.163µs
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.971069374Z level=info msg="Executing migration" id="create alert_rule_version table"
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.972012239Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=942.665µs
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.973712965Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns"
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.975303747Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.589422ms
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.977524976Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns"
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.979861679Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=2.336493ms
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.983602629Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql"
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.98366885Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=66.911µs
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.985366416Z level=info msg="Executing migration" id="add column for to alert_rule_version"
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.991603372Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=6.233976ms
Feb  2 04:42:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:02.997292744Z level=info msg="Executing migration" id="add column annotations to alert_rule_version"
Feb  2 04:42:03 np0005604790 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.003328545Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=6.036541ms
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.005521844Z level=info msg="Executing migration" id="add column labels to alert_rule_version"
Feb  2 04:42:03 np0005604790 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.010475106Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=4.951653ms
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.012737376Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.017862373Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=5.122787ms
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.019796004Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.025100956Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=5.299402ms
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.027689575Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.027779138Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=96.742µs
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.029626397Z level=info msg="Executing migration" id=create_alert_configuration_table
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.030474689Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=847.842µs
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.032242537Z level=info msg="Executing migration" id="Add column default in alert_configuration"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.038171195Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=5.925918ms
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.040089806Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.040167138Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=78.282µs
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.041999967Z level=info msg="Executing migration" id="add column org_id in alert_configuration"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.046813066Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=4.812338ms
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.048717016Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.04960092Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=883.524µs
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.051760678Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.05671963Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=4.954082ms
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.077741951Z level=info msg="Executing migration" id=create_ngalert_configuration_table
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.078796939Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=1.057958ms
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.109905119Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.111100831Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=1.199272ms
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.127095238Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.132059851Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=4.961642ms
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.176172728Z level=info msg="Executing migration" id="create provenance_type table"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.178258593Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=2.087195ms
Feb  2 04:42:03 np0005604790 systemd[1]: Reloading.
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.196376587Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.198155345Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=1.779117ms
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.208290895Z level=info msg="Executing migration" id="create alert_image table"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.209770575Z level=info msg="Migration successfully executed" id="create alert_image table" duration=1.479129ms
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.218875068Z level=info msg="Executing migration" id="add unique index on token to alert_image table"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.220597963Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=1.723896ms
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.222736351Z level=info msg="Executing migration" id="support longer URLs in alert_image table"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.222861554Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=125.904µs
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.225068813Z level=info msg="Executing migration" id=create_alert_configuration_history_table
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.226632195Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=1.567671ms
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.229063109Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.230719024Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=1.655934ms
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.233643512Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.234380151Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.236777355Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.237683199Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=906.044µs
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.240087514Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.242863118Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=2.774784ms
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.246634078Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.271229595Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=24.664259ms
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.27331507Z level=info msg="Executing migration" id="create library_element table v1"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.27478404Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=1.466479ms
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.278003605Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.279606918Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=1.603583ms
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.28229147Z level=info msg="Executing migration" id="create library_element_connection table v1"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.28377635Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=1.48449ms
Feb  2 04:42:03 np0005604790 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.287513699Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.289196394Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=1.683315ms
Feb  2 04:42:03 np0005604790 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.293342225Z level=info msg="Executing migration" id="add unique index library_element org_id_uid"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.294965198Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=1.623243ms
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.29728213Z level=info msg="Executing migration" id="increase max description length to 2048"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.297319681Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=39.611µs
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.299659083Z level=info msg="Executing migration" id="alter library_element model to mediumtext"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.299754266Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=77.282µs
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.304757339Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.305098138Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=341.929µs
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.308265523Z level=info msg="Executing migration" id="create data_keys table"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.309162717Z level=info msg="Migration successfully executed" id="create data_keys table" duration=897.054µs
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.310658937Z level=info msg="Executing migration" id="create secrets table"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.311258013Z level=info msg="Migration successfully executed" id="create secrets table" duration=599.136µs
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.31338096Z level=info msg="Executing migration" id="rename data_keys name column to id"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.338778997Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=25.395538ms
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.340421211Z level=info msg="Executing migration" id="add name column into data_keys"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.345301321Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=4.87975ms
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.347625103Z level=info msg="Executing migration" id="copy data_keys id column values into name"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.347752947Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=127.544µs
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.349437402Z level=info msg="Executing migration" id="rename data_keys name column to label"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.374287865Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=24.849593ms
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.376568176Z level=info msg="Executing migration" id="rename data_keys id column back to name"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.400973377Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=24.403661ms
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.402528789Z level=info msg="Executing migration" id="create kv_store table v1"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.403168556Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=639.757µs
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.405151489Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.40595817Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=806.601µs
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.408109108Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.408317713Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=208.635µs
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.41007317Z level=info msg="Executing migration" id="create permission table"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.410759938Z level=info msg="Migration successfully executed" id="create permission table" duration=685.598µs
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.412985158Z level=info msg="Executing migration" id="add unique index permission.role_id"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.413797889Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=812.241µs
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.417975841Z level=info msg="Executing migration" id="add unique index role_id_action_scope"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.418837974Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=862.103µs
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.420707774Z level=info msg="Executing migration" id="create role table"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.421414513Z level=info msg="Migration successfully executed" id="create role table" duration=706.039µs
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.423017106Z level=info msg="Executing migration" id="add column display_name"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.428035859Z level=info msg="Migration successfully executed" id="add column display_name" duration=5.016904ms
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.429982221Z level=info msg="Executing migration" id="add column group_name"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.434864592Z level=info msg="Migration successfully executed" id="add column group_name" duration=4.882021ms
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.43666362Z level=info msg="Executing migration" id="add index role.org_id"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.437469521Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=805.471µs
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.439182927Z level=info msg="Executing migration" id="add unique index role_org_id_name"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.440024749Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=841.062µs
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.442695431Z level=info msg="Executing migration" id="add index role_org_id_uid"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.443532753Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=837.202µs
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.445425684Z level=info msg="Executing migration" id="create team role table"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.446215005Z level=info msg="Migration successfully executed" id="create team role table" duration=786.801µs
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.447596581Z level=info msg="Executing migration" id="add index team_role.org_id"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.448415373Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=818.702µs
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.453087518Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.454787103Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=1.923491ms
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.456800287Z level=info msg="Executing migration" id="add index team_role.team_id"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.457948678Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=1.148341ms
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.459440488Z level=info msg="Executing migration" id="create user role table"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.46028017Z level=info msg="Migration successfully executed" id="create user role table" duration=836.912µs
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.462181151Z level=info msg="Executing migration" id="add index user_role.org_id"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.463001493Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=820.352µs
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.466624479Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.46739818Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=773.941µs
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.469112366Z level=info msg="Executing migration" id="add index user_role.user_id"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.469913107Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=800.701µs
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.471401667Z level=info msg="Executing migration" id="create builtin role table"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.472057594Z level=info msg="Migration successfully executed" id="create builtin role table" duration=656.197µs
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.473645117Z level=info msg="Executing migration" id="add index builtin_role.role_id"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.474429738Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=783.951µs
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.475818615Z level=info msg="Executing migration" id="add index builtin_role.name"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.476614256Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=795.461µs
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.478056964Z level=info msg="Executing migration" id="Add column org_id to builtin_role table"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.483396117Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=5.338893ms
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.484773584Z level=info msg="Executing migration" id="add index builtin_role.org_id"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.485590696Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=816.921µs
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.48687793Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role"
Feb  2 04:42:03 np0005604790 systemd[1]: Starting Ceph keepalived.rgw.default.compute-0.pxmjnp for d241d473-9fcb-5f74-b163-f1ca4454e7f1...
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.487706392Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=828.172µs
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.489319325Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.490107796Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=788.321µs
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.491541684Z level=info msg="Executing migration" id="add unique index role.uid"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.492323935Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=781.951µs
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.493899067Z level=info msg="Executing migration" id="create seed assignment table"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.494530494Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=629.147µs
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.495825939Z level=info msg="Executing migration" id="add unique index builtin_role_role_name"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.496655271Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=828.992µs
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.498024477Z level=info msg="Executing migration" id="add column hidden to role table"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.503426981Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=5.402204ms
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.504810108Z level=info msg="Executing migration" id="permission kind migration"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.510153321Z level=info msg="Migration successfully executed" id="permission kind migration" duration=5.342803ms
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.511530568Z level=info msg="Executing migration" id="permission attribute migration"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.516712186Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=5.181788ms
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.518306039Z level=info msg="Executing migration" id="permission identifier migration"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.523562969Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=5.25682ms
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.524990847Z level=info msg="Executing migration" id="add permission identifier index"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.525832689Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=841.812µs
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.5292402Z level=info msg="Executing migration" id="add permission action scope role_id index"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.530128744Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=886.174µs
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.532084506Z level=info msg="Executing migration" id="remove permission role_id action scope index"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.532997451Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=912.795µs
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.534646845Z level=info msg="Executing migration" id="create query_history table v1"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.535311012Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=663.927µs
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.53708064Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.537935592Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=854.632µs
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.541713153Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.541764455Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=51.862µs
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.543055189Z level=info msg="Executing migration" id="rbac disabled migrator"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.54308543Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=30.371µs
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.54460071Z level=info msg="Executing migration" id="teams permissions migration"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.544909869Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=309.338µs
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.5464468Z level=info msg="Executing migration" id="dashboard permissions"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.546877341Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=430.901µs
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.549370448Z level=info msg="Executing migration" id="dashboard permissions uid scopes"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.549867951Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=497.073µs
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.551427673Z level=info msg="Executing migration" id="drop managed folder create actions"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.551588167Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=160.294µs
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.553449466Z level=info msg="Executing migration" id="alerting notification permissions"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.553801916Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=352.42µs
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.555291196Z level=info msg="Executing migration" id="create query_history_star table v1"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.555883621Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=591.845µs
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.55731648Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.558113221Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=796.541µs
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.559407346Z level=info msg="Executing migration" id="add column org_id in query_history_star"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.564861121Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=5.455676ms
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.566426523Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.566475064Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=46.841µs
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.569385562Z level=info msg="Executing migration" id="create correlation table v1"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.570178713Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=792.781µs
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.571919909Z level=info msg="Executing migration" id="add index correlations.uid"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.572754062Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=834.133µs
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.574332564Z level=info msg="Executing migration" id="add index correlations.source_uid"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.575101794Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=768.93µs
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.577014915Z level=info msg="Executing migration" id="add correlation config column"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.583231221Z level=info msg="Migration successfully executed" id="add correlation config column" duration=6.215606ms
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.584988398Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.585904783Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=915.935µs
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.587336981Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.588103551Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=766.73µs
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.589650163Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.605937097Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=16.280764ms
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.607693254Z level=info msg="Executing migration" id="create correlation v2"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.608619219Z level=info msg="Migration successfully executed" id="create correlation v2" duration=923.215µs
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.61016559Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.610945931Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=780.031µs
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.612580785Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.613415787Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=834.812µs
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.614893746Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.615844522Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=950.226µs
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.617446905Z level=info msg="Executing migration" id="copy correlation v1 to v2"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.61764933Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=202.155µs
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.619336115Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.619964372Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=627.787µs
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.622568811Z level=info msg="Executing migration" id="add provisioning column"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.628188701Z level=info msg="Migration successfully executed" id="add provisioning column" duration=5.61823ms
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.629799194Z level=info msg="Executing migration" id="create entity_events table"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.63039883Z level=info msg="Migration successfully executed" id="create entity_events table" duration=599.196µs
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.63187189Z level=info msg="Executing migration" id="create dashboard public config v1"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.63263591Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=762.801µs
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.634222172Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.634551621Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.635928848Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.636220586Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.637422928Z level=info msg="Executing migration" id="Drop old dashboard public config table"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.638020304Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=597.086µs
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.640278614Z level=info msg="Executing migration" id="recreate dashboard public config v1"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.641070065Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=790.821µs
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.642476083Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.643262074Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=785.06µs
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.646440588Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.647221499Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=780.471µs
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.648894114Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.649768777Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=876.063µs
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.651074752Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.651801831Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=727.469µs
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.653909348Z level=info msg="Executing migration" id="Drop public config table"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.655313005Z level=info msg="Migration successfully executed" id="Drop public config table" duration=1.404848ms
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.657259937Z level=info msg="Executing migration" id="Recreate dashboard public config v2"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.658651634Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=1.391667ms
Feb  2 04:42:03 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v55: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 3 op/s; 2 B/s, 0 objects/s recovering
Feb  2 04:42:03 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0)
Feb  2 04:42:03 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.661018537Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.662382204Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=1.365837ms
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.664012447Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.665287711Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.277974ms
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.667135341Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.668547788Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=1.412057ms
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.670393908Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2"
Feb  2 04:42:03 np0005604790 podman[99748]: 2026-02-02 09:42:03.673934902 +0000 UTC m=+0.036304730 container create 860a76d66e8eda9eaa418ca2983e711afeb6dee68d75c8d8ff31ac03764f810e (image=quay.io/ceph/keepalived:2.2.4, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-keepalived-rgw-default-compute-0-pxmjnp, name=keepalived, io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=keepalived-container, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., distribution-scope=public, vcs-type=git, io.k8s.display-name=Keepalived on RHEL 9, release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.692266071Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=21.868824ms
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.694028628Z level=info msg="Executing migration" id="add annotations_enabled column"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.699775622Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=5.746754ms
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.701132708Z level=info msg="Executing migration" id="add time_selection_enabled column"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.706899252Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=5.765234ms
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.708539676Z level=info msg="Executing migration" id="delete orphaned public dashboards"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.708739541Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=177.334µs
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.710180329Z level=info msg="Executing migration" id="add share column"
Feb  2 04:42:03 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2031b9cceda1de38fb4d7d67acd9ce0d3f4db6123254c89b857c25e37187fdd/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.717738881Z level=info msg="Migration successfully executed" id="add share column" duration=7.557232ms
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.719519859Z level=info msg="Executing migration" id="backfill empty share column fields with default of public"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.719674603Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=155.354µs
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.721191633Z level=info msg="Executing migration" id="create file table"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.721927773Z level=info msg="Migration successfully executed" id="create file table" duration=736.01µs
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.725833957Z level=info msg="Executing migration" id="file table idx: path natural pk"
Feb  2 04:42:03 np0005604790 podman[99748]: 2026-02-02 09:42:03.726055433 +0000 UTC m=+0.088425281 container init 860a76d66e8eda9eaa418ca2983e711afeb6dee68d75c8d8ff31ac03764f810e (image=quay.io/ceph/keepalived:2.2.4, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-keepalived-rgw-default-compute-0-pxmjnp, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, vendor=Red Hat, Inc., vcs-type=git, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, io.openshift.tags=Ceph keepalived, io.buildah.version=1.28.2, build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.726608508Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=774.041µs
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.728045936Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.728841107Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=794.851µs
Feb  2 04:42:03 np0005604790 podman[99748]: 2026-02-02 09:42:03.729046653 +0000 UTC m=+0.091416481 container start 860a76d66e8eda9eaa418ca2983e711afeb6dee68d75c8d8ff31ac03764f810e (image=quay.io/ceph/keepalived:2.2.4, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-keepalived-rgw-default-compute-0-pxmjnp, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, name=keepalived, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph, version=2.2.4, architecture=x86_64, build-date=2023-02-22T09:23:20, summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2, release=1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-type=git, io.openshift.expose-services=)
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.730733338Z level=info msg="Executing migration" id="create file_meta table"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.731314113Z level=info msg="Migration successfully executed" id="create file_meta table" duration=579.695µs
Feb  2 04:42:03 np0005604790 bash[99748]: 860a76d66e8eda9eaa418ca2983e711afeb6dee68d75c8d8ff31ac03764f810e
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.736966384Z level=info msg="Executing migration" id="file table idx: path key"
Feb  2 04:42:03 np0005604790 podman[99748]: 2026-02-02 09:42:03.661638384 +0000 UTC m=+0.024008232 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.737992102Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=1.026638ms
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-keepalived-rgw-default-compute-0-pxmjnp[99763]: Mon Feb  2 09:42:03 2026: Starting Keepalived v2.2.4 (08/21,2021)
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-keepalived-rgw-default-compute-0-pxmjnp[99763]: Mon Feb  2 09:42:03 2026: Running on Linux 5.14.0-665.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Thu Jan 22 12:30:22 UTC 2026 (built for Linux 5.14.0)
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-keepalived-rgw-default-compute-0-pxmjnp[99763]: Mon Feb  2 09:42:03 2026: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-keepalived-rgw-default-compute-0-pxmjnp[99763]: Mon Feb  2 09:42:03 2026: Configuration file /etc/keepalived/keepalived.conf
Feb  2 04:42:03 np0005604790 systemd[1]: Started Ceph keepalived.rgw.default.compute-0.pxmjnp for d241d473-9fcb-5f74-b163-f1ca4454e7f1.
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.740597881Z level=info msg="Executing migration" id="set path collation in file table"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.740673353Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=77.012µs
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-keepalived-rgw-default-compute-0-pxmjnp[99763]: Mon Feb  2 09:42:03 2026: Failed to bind to process monitoring socket - errno 98 - Address already in use
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.742243655Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-keepalived-rgw-default-compute-0-pxmjnp[99763]: Mon Feb  2 09:42:03 2026: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.742337498Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=95.443µs
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-keepalived-rgw-default-compute-0-pxmjnp[99763]: Mon Feb  2 09:42:03 2026: Starting VRRP child process, pid=4
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-keepalived-rgw-default-compute-0-pxmjnp[99763]: Mon Feb  2 09:42:03 2026: Startup complete
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-keepalived-nfs-cephfs-compute-0-pqolko[98163]: Mon Feb  2 09:42:03 2026: (VI_0) Entering BACKUP STATE
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-keepalived-rgw-default-compute-0-pxmjnp[99763]: Mon Feb  2 09:42:03 2026: (VI_0) Entering BACKUP STATE (init)
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.745175793Z level=info msg="Executing migration" id="managed permissions migration"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.745858522Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=682.538µs
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.749630132Z level=info msg="Executing migration" id="managed folder permissions alert actions migration"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-keepalived-rgw-default-compute-0-pxmjnp[99763]: Mon Feb  2 09:42:03 2026: VRRP_Script(check_backend) succeeded
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.749886909Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=261.937µs
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.751437491Z level=info msg="Executing migration" id="RBAC action name migrator"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.752566181Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=1.128781ms
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.754244795Z level=info msg="Executing migration" id="Add UID column to playlist"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.760326908Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=6.079353ms
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.761988052Z level=info msg="Executing migration" id="Update uid column values in playlist"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.762124496Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=136.564µs
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.763592405Z level=info msg="Executing migration" id="Add index for uid in playlist"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.76452736Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=934.545µs
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.765938947Z level=info msg="Executing migration" id="update group index for alert rules"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.766259726Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=321.259µs
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.768447134Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.768627929Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=180.805µs
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.770363706Z level=info msg="Executing migration" id="admin only folder/dashboard permission"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.770783787Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=419.382µs
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.775673897Z level=info msg="Executing migration" id="add action column to seed_assignment"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.781356119Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=5.682222ms
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.783236229Z level=info msg="Executing migration" id="add scope column to seed_assignment"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.789728262Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=6.490503ms
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.791648144Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.792511037Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=862.663µs
Feb  2 04:42:03 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:03 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c58003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.794285854Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable"
Feb  2 04:42:03 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:03 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 04:42:03 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:03 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Feb  2 04:42:03 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:03 np0005604790 ceph-mgr[74785]: [progress INFO root] complete: finished ev ebe0ecfa-b939-4ce8-9579-137d3d63efb5 (Updating ingress.rgw.default deployment (+2 -> 4))
Feb  2 04:42:03 np0005604790 ceph-mgr[74785]: [progress INFO root] Completed event ebe0ecfa-b939-4ce8-9579-137d3d63efb5 (Updating ingress.rgw.default deployment (+2 -> 4)) in 4 seconds
Feb  2 04:42:03 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Feb  2 04:42:03 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:03 np0005604790 ceph-mgr[74785]: [progress INFO root] update: starting ev ca252ac6-5c84-4274-8a4a-c34307616861 (Updating prometheus deployment (+1 -> 1))
Feb  2 04:42:03 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.863228924Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=68.93751ms
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.908517403Z level=info msg="Executing migration" id="add unique index builtin_role_name back"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.909831378Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=1.348026ms
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.912043237Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.913089135Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=1.043007ms
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.915566331Z level=info msg="Executing migration" id="add primary key to seed_assigment"
Feb  2 04:42:03 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Feb  2 04:42:03 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Feb  2 04:42:03 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.936968052Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=21.36913ms
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.954079789Z level=info msg="Executing migration" id="add origin column to seed_assignment"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.963037738Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=8.946459ms
Feb  2 04:42:03 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Feb  2 04:42:03 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:03 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:03 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:03 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.966721516Z level=info msg="Executing migration" id="add origin to plugin seed_assignment"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.967077795Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=359.839µs
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.976425545Z level=info msg="Executing migration" id="prevent seeding OnCall access"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.976827686Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=407.131µs
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.981636644Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.98222286Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=592.486µs
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.988024974Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.988437735Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=419.001µs
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.990621164Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.990879161Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=258.107µs
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.992552565Z level=info msg="Executing migration" id="create folder table"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.993808119Z level=info msg="Migration successfully executed" id="create folder table" duration=1.255074ms
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.99571715Z level=info msg="Executing migration" id="Add index for parent_uid"
Feb  2 04:42:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:03.997451986Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=1.734416ms
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:04.003744404Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id"
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:04.004864864Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=1.12002ms
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:04.022169046Z level=info msg="Executing migration" id="Update folder title length"
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:04.022199867Z level=info msg="Migration successfully executed" id="Update folder title length" duration=32.06µs
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:04.02608672Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid"
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:04.028465974Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=2.377444ms
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:04.038506332Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid"
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:04.039640072Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=1.13342ms
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:04.042085187Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id"
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:04.043271359Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=1.186122ms
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:04.045728034Z level=info msg="Executing migration" id="Sync dashboard and folder table"
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:04.046391882Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=662.488µs
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:04.04894981Z level=info msg="Executing migration" id="Remove ghost folders from the folder table"
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:04.049235958Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=286.018µs
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:04.053283776Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id"
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:04.055469724Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=2.187128ms
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:04.062777329Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid"
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:04.064005552Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=1.228273ms
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:04.065640296Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id"
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:04.066738265Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=1.098149ms
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:04.068356448Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title"
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:04.06955617Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=1.198872ms
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:04 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c64004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:04.073042543Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id"
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:04.074157103Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=1.11365ms
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:04.082562327Z level=info msg="Executing migration" id="create anon_device table"
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:04.084128789Z level=info msg="Migration successfully executed" id="create anon_device table" duration=1.566712ms
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:04.086558974Z level=info msg="Executing migration" id="add unique index anon_device.device_id"
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:04.088691611Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=2.132227ms
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:04.092467272Z level=info msg="Executing migration" id="add index anon_device.updated_at"
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:04.094553757Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=2.086065ms
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:04.0965281Z level=info msg="Executing migration" id="create signing_key table"
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:04.098207795Z level=info msg="Migration successfully executed" id="create signing_key table" duration=1.682125ms
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:04.101071501Z level=info msg="Executing migration" id="add unique index signing_key.key_id"
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:04.103019433Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=1.947142ms
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:04.10961998Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore"
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:04.110547694Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=927.984µs
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:04.112785374Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore"
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:04.11302489Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=239.486µs
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:04.114589782Z level=info msg="Executing migration" id="Add folder_uid for dashboard"
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:04.121006303Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=6.416311ms
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:04.12276318Z level=info msg="Executing migration" id="Populate dashboard folder_uid column"
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:04.123392037Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=629.687µs
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:04.125075362Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title"
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:04.125954806Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=879.134µs
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:04.129430968Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title"
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:04.13026307Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=832.132µs
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:04.134137594Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title"
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:04.134940665Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=802.741µs
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:04.143160105Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder"
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:04.144058579Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=897.934µs
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:04.146427402Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title"
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:04.147246894Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=818.582µs
Feb  2 04:42:04 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Deploying daemon prometheus.compute-0 on compute-0
Feb  2 04:42:04 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Deploying daemon prometheus.compute-0 on compute-0
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:04.155128194Z level=info msg="Executing migration" id="create sso_setting table"
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:04.155897495Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=769.901µs
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:04.178734204Z level=info msg="Executing migration" id="copy kvstore migration status to each org"
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:04.180034279Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=1.296895ms
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:04.191300719Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status"
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:04.191559226Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=260.237µs
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:04.194811463Z level=info msg="Executing migration" id="alter kv_store.value to longtext"
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:04.194866455Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=54.802µs
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:04.202780166Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table"
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:04.209621998Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=6.842072ms
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:04.220422037Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table"
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:04.227564867Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=7.14301ms
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:04.233069904Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration"
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:04.233374782Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=304.548µs
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=migrator t=2026-02-02T09:42:04.236632599Z level=info msg="migrations completed" performed=547 skipped=0 duration=5.660638117s
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=sqlstore t=2026-02-02T09:42:04.238066428Z level=info msg="Created default organization"
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=secrets t=2026-02-02T09:42:04.246350939Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=plugin.store t=2026-02-02T09:42:04.2771272Z level=info msg="Loading plugins..."
Feb  2 04:42:04 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:42:04 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:42:04 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:42:04.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:42:04 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:42:04 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:42:04 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:42:04.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=local.finder t=2026-02-02T09:42:04.335815586Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=plugin.store t=2026-02-02T09:42:04.335930039Z level=info msg="Plugins loaded" count=55 duration=58.804969ms
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=query_data t=2026-02-02T09:42:04.33857349Z level=info msg="Query Service initialization"
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=live.push_http t=2026-02-02T09:42:04.342269259Z level=info msg="Live Push Gateway initialization"
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=ngalert.migration t=2026-02-02T09:42:04.353272892Z level=info msg=Starting
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=ngalert.migration t=2026-02-02T09:42:04.354191417Z level=info msg="Applying transition" currentType=Legacy desiredType=UnifiedAlerting cleanOnDowngrade=false cleanOnUpgrade=false
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-keepalived-nfs-cephfs-compute-0-pqolko[98163]: Mon Feb  2 09:42:04 2026: (VI_0) Entering MASTER STATE
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=ngalert.migration orgID=1 t=2026-02-02T09:42:04.355904572Z level=info msg="Migrating alerts for organisation"
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=ngalert.migration orgID=1 t=2026-02-02T09:42:04.360025172Z level=info msg="Alerts found to migrate" alerts=0
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=ngalert.migration t=2026-02-02T09:42:04.364013809Z level=info msg="Completed alerting migration"
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:04 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c64004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=ngalert.state.manager t=2026-02-02T09:42:04.407051287Z level=info msg="Running in alternative execution of Error/NoData mode"
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=infra.usagestats.collector t=2026-02-02T09:42:04.410177051Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=provisioning.datasources t=2026-02-02T09:42:04.412180434Z level=info msg="inserting datasource from configuration" name=Loki uid=P8E80F9AEF21F6940
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=provisioning.alerting t=2026-02-02T09:42:04.433114583Z level=info msg="starting to provision alerting"
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=provisioning.alerting t=2026-02-02T09:42:04.433162954Z level=info msg="finished to provision alerting"
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=ngalert.state.manager t=2026-02-02T09:42:04.433344959Z level=info msg="Warming state cache for startup"
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=ngalert.multiorg.alertmanager t=2026-02-02T09:42:04.433698179Z level=info msg="Starting MultiOrg Alertmanager"
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=ngalert.state.manager t=2026-02-02T09:42:04.433776561Z level=info msg="State cache has been initialized" states=0 duration=430.632µs
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=ngalert.scheduler t=2026-02-02T09:42:04.433808441Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=ticker t=2026-02-02T09:42:04.433904324Z level=info msg=starting first_tick=2026-02-02T09:42:10Z
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=grafanaStorageLogger t=2026-02-02T09:42:04.435365723Z level=info msg="Storage starting"
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=http.server t=2026-02-02T09:42:04.439345159Z level=info msg="HTTP Server TLS settings" MinTLSVersion=TLS1.2 configuredciphers=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=http.server t=2026-02-02T09:42:04.440010827Z level=info msg="HTTP Server Listen" address=192.168.122.100:3000 protocol=https subUrl= socket=
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=provisioning.dashboard t=2026-02-02T09:42:04.498931259Z level=info msg="starting to provision dashboards"
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=plugins.update.checker t=2026-02-02T09:42:04.506035269Z level=info msg="Update check succeeded" duration=72.207267ms
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=grafana.update.checker t=2026-02-02T09:42:04.530139222Z level=info msg="Update check succeeded" duration=96.356451ms
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=sqlstore.transactions t=2026-02-02T09:42:04.555684584Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=sqlstore.transactions t=2026-02-02T09:42:04.571126096Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=provisioning.dashboard t=2026-02-02T09:42:04.776866687Z level=info msg="finished to provision dashboards"
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=grafana-apiserver t=2026-02-02T09:42:04.799165052Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager"
Feb  2 04:42:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=grafana-apiserver t=2026-02-02T09:42:04.806748634Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager"
Feb  2 04:42:04 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Feb  2 04:42:04 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Feb  2 04:42:04 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Feb  2 04:42:05 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Feb  2 04:42:05 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-keepalived-rgw-default-compute-0-pxmjnp[99763]: Mon Feb  2 09:42:05 2026: (VI_0) received lower priority (90) advert from 192.168.122.102 - discarding
Feb  2 04:42:05 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-keepalived-nfs-cephfs-compute-0-pqolko[98163]: Mon Feb  2 09:42:05 2026: (VI_0) received an invalid passwd!
Feb  2 04:42:05 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v58: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 110 B/s, 0 keys/s, 2 objects/s recovering
Feb  2 04:42:05 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0)
Feb  2 04:42:05 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Feb  2 04:42:05 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:05 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c700030f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:42:05 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Feb  2 04:42:06 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Feb  2 04:42:06 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Feb  2 04:42:06 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Feb  2 04:42:06 np0005604790 ceph-mon[74489]: Deploying daemon prometheus.compute-0 on compute-0
Feb  2 04:42:06 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Feb  2 04:42:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:06 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c58003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:42:06 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e66 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 04:42:06 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Feb  2 04:42:06 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Feb  2 04:42:06 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Feb  2 04:42:06 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:42:06 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:42:06 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:42:06.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:42:06 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:42:06 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:42:06 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:42:06.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:42:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:06 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c64004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:42:06 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 10.16 scrub starts
Feb  2 04:42:06 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 10.16 scrub ok
Feb  2 04:42:06 np0005604790 ceph-mgr[74785]: [progress INFO root] Writing back 25 completed events
Feb  2 04:42:06 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Feb  2 04:42:06 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:07 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Feb  2 04:42:07 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:07 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Feb  2 04:42:07 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Feb  2 04:42:07 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Feb  2 04:42:07 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-keepalived-rgw-default-compute-0-pxmjnp[99763]: Mon Feb  2 09:42:07 2026: (VI_0) Entering MASTER STATE
Feb  2 04:42:07 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 12.15 scrub starts
Feb  2 04:42:07 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 12.15 scrub ok
Feb  2 04:42:07 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v62: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Feb  2 04:42:07 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0)
Feb  2 04:42:07 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Feb  2 04:42:07 np0005604790 podman[99865]: 2026-02-02 09:42:07.793679199 +0000 UTC m=+3.104405811 volume create 08e5bed1a3bb45ef20ca488da8e4669199ac3d825dd5252e564cb134b3b11756
Feb  2 04:42:07 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:07 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c64004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:42:07 np0005604790 podman[99865]: 2026-02-02 09:42:07.80609919 +0000 UTC m=+3.116825812 container create 3f0fb627570921d1f2a44ccd6de32c43b3e8f4eddd6f0adfc3982d579c8c492b (image=quay.io/prometheus/prometheus:v2.51.0, name=charming_fermat, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 04:42:07 np0005604790 podman[99865]: 2026-02-02 09:42:07.777306432 +0000 UTC m=+3.088033094 image pull 1d3b7f56885b6dd623f1785be963aa9c195f86bc256ea454e8d02a7980b79c53 quay.io/prometheus/prometheus:v2.51.0
Feb  2 04:42:07 np0005604790 systemd[1]: Started libpod-conmon-3f0fb627570921d1f2a44ccd6de32c43b3e8f4eddd6f0adfc3982d579c8c492b.scope.
Feb  2 04:42:07 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:42:07 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/421599d994f2813454403128bb347ce246156ca7deebec62fd8efeee3a340666/merged/prometheus supports timestamps until 2038 (0x7fffffff)
Feb  2 04:42:07 np0005604790 podman[99865]: 2026-02-02 09:42:07.890224655 +0000 UTC m=+3.200951307 container init 3f0fb627570921d1f2a44ccd6de32c43b3e8f4eddd6f0adfc3982d579c8c492b (image=quay.io/prometheus/prometheus:v2.51.0, name=charming_fermat, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 04:42:07 np0005604790 podman[99865]: 2026-02-02 09:42:07.897576301 +0000 UTC m=+3.208302953 container start 3f0fb627570921d1f2a44ccd6de32c43b3e8f4eddd6f0adfc3982d579c8c492b (image=quay.io/prometheus/prometheus:v2.51.0, name=charming_fermat, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 04:42:07 np0005604790 podman[99865]: 2026-02-02 09:42:07.900651223 +0000 UTC m=+3.211377865 container attach 3f0fb627570921d1f2a44ccd6de32c43b3e8f4eddd6f0adfc3982d579c8c492b (image=quay.io/prometheus/prometheus:v2.51.0, name=charming_fermat, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 04:42:07 np0005604790 charming_fermat[100119]: 65534 65534
Feb  2 04:42:07 np0005604790 systemd[1]: libpod-3f0fb627570921d1f2a44ccd6de32c43b3e8f4eddd6f0adfc3982d579c8c492b.scope: Deactivated successfully.
Feb  2 04:42:07 np0005604790 podman[99865]: 2026-02-02 09:42:07.902233276 +0000 UTC m=+3.212959918 container died 3f0fb627570921d1f2a44ccd6de32c43b3e8f4eddd6f0adfc3982d579c8c492b (image=quay.io/prometheus/prometheus:v2.51.0, name=charming_fermat, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 04:42:07 np0005604790 systemd[1]: var-lib-containers-storage-overlay-421599d994f2813454403128bb347ce246156ca7deebec62fd8efeee3a340666-merged.mount: Deactivated successfully.
Feb  2 04:42:07 np0005604790 podman[99865]: 2026-02-02 09:42:07.939917141 +0000 UTC m=+3.250643783 container remove 3f0fb627570921d1f2a44ccd6de32c43b3e8f4eddd6f0adfc3982d579c8c492b (image=quay.io/prometheus/prometheus:v2.51.0, name=charming_fermat, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 04:42:07 np0005604790 podman[99865]: 2026-02-02 09:42:07.943569409 +0000 UTC m=+3.254296061 volume remove 08e5bed1a3bb45ef20ca488da8e4669199ac3d825dd5252e564cb134b3b11756
Feb  2 04:42:07 np0005604790 systemd[1]: libpod-conmon-3f0fb627570921d1f2a44ccd6de32c43b3e8f4eddd6f0adfc3982d579c8c492b.scope: Deactivated successfully.
Feb  2 04:42:08 np0005604790 podman[100133]: 2026-02-02 09:42:08.016910226 +0000 UTC m=+0.045919546 volume create 0badf483a80bc32af68dccb309fd6aedbe1daf7970f3851ff106762d033655fd
Feb  2 04:42:08 np0005604790 podman[100133]: 2026-02-02 09:42:08.026210274 +0000 UTC m=+0.055219594 container create 858cd69a1decff853638fffde43a0f923974262a7d3a4a1070b591d3359d9a5d (image=quay.io/prometheus/prometheus:v2.51.0, name=epic_lehmann, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 04:42:08 np0005604790 systemd[1]: Started libpod-conmon-858cd69a1decff853638fffde43a0f923974262a7d3a4a1070b591d3359d9a5d.scope.
Feb  2 04:42:08 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:08 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c700030f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:42:08 np0005604790 podman[100133]: 2026-02-02 09:42:07.997461007 +0000 UTC m=+0.026470377 image pull 1d3b7f56885b6dd623f1785be963aa9c195f86bc256ea454e8d02a7980b79c53 quay.io/prometheus/prometheus:v2.51.0
Feb  2 04:42:08 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:42:08 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e96b51eea416aadd11720191a9d501182d839f4bc7ea7995b0a3bbfefb984a0f/merged/prometheus supports timestamps until 2038 (0x7fffffff)
Feb  2 04:42:08 np0005604790 podman[100133]: 2026-02-02 09:42:08.122422352 +0000 UTC m=+0.151431682 container init 858cd69a1decff853638fffde43a0f923974262a7d3a4a1070b591d3359d9a5d (image=quay.io/prometheus/prometheus:v2.51.0, name=epic_lehmann, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 04:42:08 np0005604790 podman[100133]: 2026-02-02 09:42:08.12648329 +0000 UTC m=+0.155492600 container start 858cd69a1decff853638fffde43a0f923974262a7d3a4a1070b591d3359d9a5d (image=quay.io/prometheus/prometheus:v2.51.0, name=epic_lehmann, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 04:42:08 np0005604790 epic_lehmann[100150]: 65534 65534
Feb  2 04:42:08 np0005604790 systemd[1]: libpod-858cd69a1decff853638fffde43a0f923974262a7d3a4a1070b591d3359d9a5d.scope: Deactivated successfully.
Feb  2 04:42:08 np0005604790 podman[100133]: 2026-02-02 09:42:08.129923342 +0000 UTC m=+0.158932682 container attach 858cd69a1decff853638fffde43a0f923974262a7d3a4a1070b591d3359d9a5d (image=quay.io/prometheus/prometheus:v2.51.0, name=epic_lehmann, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 04:42:08 np0005604790 podman[100133]: 2026-02-02 09:42:08.130120107 +0000 UTC m=+0.159129417 container died 858cd69a1decff853638fffde43a0f923974262a7d3a4a1070b591d3359d9a5d (image=quay.io/prometheus/prometheus:v2.51.0, name=epic_lehmann, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 04:42:08 np0005604790 systemd[1]: var-lib-containers-storage-overlay-e96b51eea416aadd11720191a9d501182d839f4bc7ea7995b0a3bbfefb984a0f-merged.mount: Deactivated successfully.
Feb  2 04:42:08 np0005604790 podman[100133]: 2026-02-02 09:42:08.163332114 +0000 UTC m=+0.192341424 container remove 858cd69a1decff853638fffde43a0f923974262a7d3a4a1070b591d3359d9a5d (image=quay.io/prometheus/prometheus:v2.51.0, name=epic_lehmann, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 04:42:08 np0005604790 podman[100133]: 2026-02-02 09:42:08.167083254 +0000 UTC m=+0.196092564 volume remove 0badf483a80bc32af68dccb309fd6aedbe1daf7970f3851ff106762d033655fd
Feb  2 04:42:08 np0005604790 systemd[1]: libpod-conmon-858cd69a1decff853638fffde43a0f923974262a7d3a4a1070b591d3359d9a5d.scope: Deactivated successfully.
Feb  2 04:42:08 np0005604790 systemd[1]: Reloading.
Feb  2 04:42:08 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Feb  2 04:42:08 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Feb  2 04:42:08 np0005604790 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 04:42:08 np0005604790 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 04:42:08 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Feb  2 04:42:08 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Feb  2 04:42:08 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Feb  2 04:42:08 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:42:08 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:42:08 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:42:08.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:42:08 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:42:08 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:42:08 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:42:08.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:42:08 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:08 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c700030f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:42:08 np0005604790 systemd[1]: Reloading.
Feb  2 04:42:08 np0005604790 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 04:42:08 np0005604790 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 04:42:08 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 10.0 scrub starts
Feb  2 04:42:08 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 10.0 scrub ok
Feb  2 04:42:08 np0005604790 systemd[1]: Starting Ceph prometheus.compute-0 for d241d473-9fcb-5f74-b163-f1ca4454e7f1...
Feb  2 04:42:09 np0005604790 podman[100290]: 2026-02-02 09:42:09.008228772 +0000 UTC m=+0.068420947 container create 214561532da1ee185d9ab0bf03f5b2d46320266c11cddad43265e8842ed9d667 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 04:42:09 np0005604790 podman[100290]: 2026-02-02 09:42:08.967261169 +0000 UTC m=+0.027453394 image pull 1d3b7f56885b6dd623f1785be963aa9c195f86bc256ea454e8d02a7980b79c53 quay.io/prometheus/prometheus:v2.51.0
Feb  2 04:42:09 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a577b7095a7f5d4d2eb851e1ce156d49c6bfa8a218830d94508f667ed7c5b7fa/merged/prometheus supports timestamps until 2038 (0x7fffffff)
Feb  2 04:42:09 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a577b7095a7f5d4d2eb851e1ce156d49c6bfa8a218830d94508f667ed7c5b7fa/merged/etc/prometheus supports timestamps until 2038 (0x7fffffff)
Feb  2 04:42:09 np0005604790 podman[100290]: 2026-02-02 09:42:09.124709291 +0000 UTC m=+0.184901446 container init 214561532da1ee185d9ab0bf03f5b2d46320266c11cddad43265e8842ed9d667 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 04:42:09 np0005604790 podman[100290]: 2026-02-02 09:42:09.129648943 +0000 UTC m=+0.189841088 container start 214561532da1ee185d9ab0bf03f5b2d46320266c11cddad43265e8842ed9d667 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 04:42:09 np0005604790 bash[100290]: 214561532da1ee185d9ab0bf03f5b2d46320266c11cddad43265e8842ed9d667
Feb  2 04:42:09 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-prometheus-compute-0[100305]: ts=2026-02-02T09:42:09.162Z caller=main.go:617 level=info msg="Starting Prometheus Server" mode=server version="(version=2.51.0, branch=HEAD, revision=c05c15512acb675e3f6cd662a6727854e93fc024)"
Feb  2 04:42:09 np0005604790 systemd[1]: Started Ceph prometheus.compute-0 for d241d473-9fcb-5f74-b163-f1ca4454e7f1.
Feb  2 04:42:09 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-prometheus-compute-0[100305]: ts=2026-02-02T09:42:09.162Z caller=main.go:622 level=info build_context="(go=go1.22.1, platform=linux/amd64, user=root@b5723e458358, date=20240319-10:54:45, tags=netgo,builtinassets,stringlabels)"
Feb  2 04:42:09 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-prometheus-compute-0[100305]: ts=2026-02-02T09:42:09.162Z caller=main.go:623 level=info host_details="(Linux 5.14.0-665.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Thu Jan 22 12:30:22 UTC 2026 x86_64 compute-0 (none))"
Feb  2 04:42:09 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-prometheus-compute-0[100305]: ts=2026-02-02T09:42:09.162Z caller=main.go:624 level=info fd_limits="(soft=1048576, hard=1048576)"
Feb  2 04:42:09 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-prometheus-compute-0[100305]: ts=2026-02-02T09:42:09.162Z caller=main.go:625 level=info vm_limits="(soft=unlimited, hard=unlimited)"
Feb  2 04:42:09 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-prometheus-compute-0[100305]: ts=2026-02-02T09:42:09.164Z caller=web.go:568 level=info component=web msg="Start listening for connections" address=192.168.122.100:9095
Feb  2 04:42:09 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-prometheus-compute-0[100305]: ts=2026-02-02T09:42:09.165Z caller=main.go:1129 level=info msg="Starting TSDB ..."
Feb  2 04:42:09 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-prometheus-compute-0[100305]: ts=2026-02-02T09:42:09.170Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=192.168.122.100:9095
Feb  2 04:42:09 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-prometheus-compute-0[100305]: ts=2026-02-02T09:42:09.171Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." http2=false address=192.168.122.100:9095
Feb  2 04:42:09 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-prometheus-compute-0[100305]: ts=2026-02-02T09:42:09.172Z caller=head.go:616 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any"
Feb  2 04:42:09 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-prometheus-compute-0[100305]: ts=2026-02-02T09:42:09.172Z caller=head.go:698 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=3.441µs
Feb  2 04:42:09 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-prometheus-compute-0[100305]: ts=2026-02-02T09:42:09.172Z caller=head.go:706 level=info component=tsdb msg="Replaying WAL, this may take a while"
Feb  2 04:42:09 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-prometheus-compute-0[100305]: ts=2026-02-02T09:42:09.173Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0
Feb  2 04:42:09 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-prometheus-compute-0[100305]: ts=2026-02-02T09:42:09.173Z caller=head.go:815 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=42.971µs wal_replay_duration=476.032µs wbl_replay_duration=210ns total_replay_duration=560.085µs
Feb  2 04:42:09 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-prometheus-compute-0[100305]: ts=2026-02-02T09:42:09.177Z caller=main.go:1150 level=info fs_type=XFS_SUPER_MAGIC
Feb  2 04:42:09 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-prometheus-compute-0[100305]: ts=2026-02-02T09:42:09.177Z caller=main.go:1153 level=info msg="TSDB started"
Feb  2 04:42:09 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-prometheus-compute-0[100305]: ts=2026-02-02T09:42:09.177Z caller=main.go:1335 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
Feb  2 04:42:09 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-prometheus-compute-0[100305]: ts=2026-02-02T09:42:09.212Z caller=main.go:1372 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=34.312235ms db_storage=1.861µs remote_storage=2.01µs web_handler=440ns query_engine=910ns scrape=2.706952ms scrape_sd=313.528µs notify=36.421µs notify_sd=25.191µs rules=30.609447ms tracing=15.27µs
Feb  2 04:42:09 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-prometheus-compute-0[100305]: ts=2026-02-02T09:42:09.212Z caller=main.go:1114 level=info msg="Server is ready to receive web requests."
Feb  2 04:42:09 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-prometheus-compute-0[100305]: ts=2026-02-02T09:42:09.212Z caller=manager.go:163 level=info component="rule manager" msg="Starting rule manager..."
Feb  2 04:42:09 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 04:42:09 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Feb  2 04:42:09 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:09 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 04:42:09 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Feb  2 04:42:09 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Feb  2 04:42:09 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Feb  2 04:42:09 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:09 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.prometheus}] v 0)
Feb  2 04:42:09 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:09 np0005604790 ceph-mgr[74785]: [progress INFO root] complete: finished ev ca252ac6-5c84-4274-8a4a-c34307616861 (Updating prometheus deployment (+1 -> 1))
Feb  2 04:42:09 np0005604790 ceph-mgr[74785]: [progress INFO root] Completed event ca252ac6-5c84-4274-8a4a-c34307616861 (Updating prometheus deployment (+1 -> 1)) in 6 seconds
Feb  2 04:42:09 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module enable", "module": "prometheus"} v 0)
Feb  2 04:42:09 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch
Feb  2 04:42:09 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 10.e scrub starts
Feb  2 04:42:09 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 10.e scrub ok
Feb  2 04:42:09 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v65: 353 pgs: 353 active+clean; 457 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 213 B/s, 9 objects/s recovering
Feb  2 04:42:09 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0)
Feb  2 04:42:09 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Feb  2 04:42:09 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:09 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c64004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:42:10 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:10 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c74003cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:42:10 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:42:10 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:42:10 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:42:10.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:42:10 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:42:10 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:42:10 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:42:10.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:42:10 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:10 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c700030f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:42:10 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Feb  2 04:42:10 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:10 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:10 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:10 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch
Feb  2 04:42:10 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Feb  2 04:42:10 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 10.c scrub starts
Feb  2 04:42:10 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 10.c scrub ok
Feb  2 04:42:11 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Feb  2 04:42:11 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Feb  2 04:42:11 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Feb  2 04:42:11 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 71 pg[9.16( empty local-lis/les=0/0 n=0 ec=57/38 lis/c=57/57 les/c/f=58/58/0 sis=71) [1] r=0 lpr=71 pi=[57,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:42:11 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 71 pg[9.e( empty local-lis/les=0/0 n=0 ec=57/38 lis/c=57/57 les/c/f=58/58/0 sis=71) [1] r=0 lpr=71 pi=[57,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:42:11 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 71 pg[9.6( empty local-lis/les=0/0 n=0 ec=57/38 lis/c=57/57 les/c/f=58/58/0 sis=71) [1] r=0 lpr=71 pi=[57,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:42:11 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 71 pg[9.1e( empty local-lis/les=0/0 n=0 ec=57/38 lis/c=57/57 les/c/f=58/58/0 sis=71) [1] r=0 lpr=71 pi=[57,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:42:11 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished
Feb  2 04:42:11 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : mgrmap e25: compute-0.djvyfo(active, since 83s), standbys: compute-1.teascl, compute-2.gzlyac
Feb  2 04:42:11 np0005604790 systemd[1]: session-35.scope: Deactivated successfully.
Feb  2 04:42:11 np0005604790 systemd[1]: session-35.scope: Consumed 43.279s CPU time.
Feb  2 04:42:11 np0005604790 systemd-logind[793]: Session 35 logged out. Waiting for processes to exit.
Feb  2 04:42:11 np0005604790 systemd-logind[793]: Removed session 35.
Feb  2 04:42:11 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e71 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 04:42:11 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Feb  2 04:42:11 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ignoring --setuser ceph since I am not root
Feb  2 04:42:11 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ignoring --setgroup ceph since I am not root
Feb  2 04:42:11 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Feb  2 04:42:11 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Feb  2 04:42:11 np0005604790 ceph-mgr[74785]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Feb  2 04:42:11 np0005604790 ceph-mgr[74785]: pidfile_write: ignore empty --pid-file
Feb  2 04:42:11 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 72 pg[9.1e( empty local-lis/les=0/0 n=0 ec=57/38 lis/c=57/57 les/c/f=58/58/0 sis=72) [1]/[0] r=-1 lpr=72 pi=[57,72)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:42:11 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 72 pg[9.e( empty local-lis/les=0/0 n=0 ec=57/38 lis/c=57/57 les/c/f=58/58/0 sis=72) [1]/[0] r=-1 lpr=72 pi=[57,72)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:42:11 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 72 pg[9.6( empty local-lis/les=0/0 n=0 ec=57/38 lis/c=57/57 les/c/f=58/58/0 sis=72) [1]/[0] r=-1 lpr=72 pi=[57,72)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:42:11 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 72 pg[9.16( empty local-lis/les=0/0 n=0 ec=57/38 lis/c=57/57 les/c/f=58/58/0 sis=72) [1]/[0] r=-1 lpr=72 pi=[57,72)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:42:11 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 72 pg[9.e( empty local-lis/les=0/0 n=0 ec=57/38 lis/c=57/57 les/c/f=58/58/0 sis=72) [1]/[0] r=-1 lpr=72 pi=[57,72)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  2 04:42:11 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 72 pg[9.1e( empty local-lis/les=0/0 n=0 ec=57/38 lis/c=57/57 les/c/f=58/58/0 sis=72) [1]/[0] r=-1 lpr=72 pi=[57,72)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  2 04:42:11 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 72 pg[9.6( empty local-lis/les=0/0 n=0 ec=57/38 lis/c=57/57 les/c/f=58/58/0 sis=72) [1]/[0] r=-1 lpr=72 pi=[57,72)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  2 04:42:11 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 72 pg[9.16( empty local-lis/les=0/0 n=0 ec=57/38 lis/c=57/57 les/c/f=58/58/0 sis=72) [1]/[0] r=-1 lpr=72 pi=[57,72)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  2 04:42:11 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'alerts'
Feb  2 04:42:11 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:42:11.412+0000 7f3917aaf140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Feb  2 04:42:11 np0005604790 ceph-mgr[74785]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Feb  2 04:42:11 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'balancer'
Feb  2 04:42:11 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Feb  2 04:42:11 np0005604790 ceph-mon[74489]: from='mgr.14514 192.168.122.100:0/3917148228' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished
Feb  2 04:42:11 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:42:11.489+0000 7f3917aaf140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Feb  2 04:42:11 np0005604790 ceph-mgr[74785]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Feb  2 04:42:11 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'cephadm'
Feb  2 04:42:11 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 10.a scrub starts
Feb  2 04:42:11 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 10.a scrub ok
Feb  2 04:42:11 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:11 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c58003db0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:42:12 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:12 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c64004050 fd 14 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:42:12 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'crash'
Feb  2 04:42:12 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:42:12.225+0000 7f3917aaf140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Feb  2 04:42:12 np0005604790 ceph-mgr[74785]: mgr[py] Module crash has missing NOTIFY_TYPES member
Feb  2 04:42:12 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'dashboard'
Feb  2 04:42:12 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Feb  2 04:42:12 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Feb  2 04:42:12 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Feb  2 04:42:12 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:42:12 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:42:12 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:42:12.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:42:12 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:42:12 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:42:12 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:42:12.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:42:12 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:12 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c74003cc0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:42:12 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Feb  2 04:42:12 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Feb  2 04:42:12 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'devicehealth'
Feb  2 04:42:12 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:42:12.833+0000 7f3917aaf140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Feb  2 04:42:12 np0005604790 ceph-mgr[74785]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Feb  2 04:42:12 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'diskprediction_local'
Feb  2 04:42:12 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Feb  2 04:42:12 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Feb  2 04:42:12 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]:  from numpy import show_config as show_numpy_config
Feb  2 04:42:12 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:42:12.978+0000 7f3917aaf140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Feb  2 04:42:12 np0005604790 ceph-mgr[74785]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Feb  2 04:42:12 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'influx'
Feb  2 04:42:13 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:42:13.040+0000 7f3917aaf140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Feb  2 04:42:13 np0005604790 ceph-mgr[74785]: mgr[py] Module influx has missing NOTIFY_TYPES member
Feb  2 04:42:13 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'insights'
Feb  2 04:42:13 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'iostat'
Feb  2 04:42:13 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:42:13.162+0000 7f3917aaf140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Feb  2 04:42:13 np0005604790 ceph-mgr[74785]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Feb  2 04:42:13 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'k8sevents'
Feb  2 04:42:13 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Feb  2 04:42:13 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Feb  2 04:42:13 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Feb  2 04:42:13 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 74 pg[9.1e( v 44'1041 (0'0,44'1041] local-lis/les=0/0 n=5 ec=57/38 lis/c=72/57 les/c/f=73/58/0 sis=74) [1] r=0 lpr=74 pi=[57,74)/1 luod=0'0 crt=44'1041 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:42:13 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 74 pg[9.6( v 44'1041 (0'0,44'1041] local-lis/les=0/0 n=6 ec=57/38 lis/c=72/57 les/c/f=73/58/0 sis=74) [1] r=0 lpr=74 pi=[57,74)/1 luod=0'0 crt=44'1041 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:42:13 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 74 pg[9.1e( v 44'1041 (0'0,44'1041] local-lis/les=0/0 n=5 ec=57/38 lis/c=72/57 les/c/f=73/58/0 sis=74) [1] r=0 lpr=74 pi=[57,74)/1 crt=44'1041 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:42:13 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 74 pg[9.6( v 44'1041 (0'0,44'1041] local-lis/les=0/0 n=6 ec=57/38 lis/c=72/57 les/c/f=73/58/0 sis=74) [1] r=0 lpr=74 pi=[57,74)/1 crt=44'1041 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:42:13 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 74 pg[9.16( v 44'1041 (0'0,44'1041] local-lis/les=0/0 n=5 ec=57/38 lis/c=72/57 les/c/f=73/58/0 sis=74) [1] r=0 lpr=74 pi=[57,74)/1 luod=0'0 crt=44'1041 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:42:13 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 74 pg[9.16( v 44'1041 (0'0,44'1041] local-lis/les=0/0 n=5 ec=57/38 lis/c=72/57 les/c/f=73/58/0 sis=74) [1] r=0 lpr=74 pi=[57,74)/1 crt=44'1041 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:42:13 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 74 pg[9.e( v 44'1041 (0'0,44'1041] local-lis/les=0/0 n=6 ec=57/38 lis/c=72/57 les/c/f=73/58/0 sis=74) [1] r=0 lpr=74 pi=[57,74)/1 luod=0'0 crt=44'1041 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:42:13 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 74 pg[9.e( v 44'1041 (0'0,44'1041] local-lis/les=0/0 n=6 ec=57/38 lis/c=72/57 les/c/f=73/58/0 sis=74) [1] r=0 lpr=74 pi=[57,74)/1 crt=44'1041 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:42:13 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'localpool'
Feb  2 04:42:13 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'mds_autoscaler'
Feb  2 04:42:13 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 12.f scrub starts
Feb  2 04:42:13 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 12.f scrub ok
Feb  2 04:42:13 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'mirroring'
Feb  2 04:42:13 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:13 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c700030f0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:42:13 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'nfs'
Feb  2 04:42:14 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:14 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c58003db0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:42:14 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:42:14.092+0000 7f3917aaf140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Feb  2 04:42:14 np0005604790 ceph-mgr[74785]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Feb  2 04:42:14 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'orchestrator'
Feb  2 04:42:14 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:42:14.285+0000 7f3917aaf140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Feb  2 04:42:14 np0005604790 ceph-mgr[74785]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Feb  2 04:42:14 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'osd_perf_query'
Feb  2 04:42:14 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Feb  2 04:42:14 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Feb  2 04:42:14 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Feb  2 04:42:14 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:42:14 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:42:14 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:42:14.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:42:14 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 75 pg[9.16( v 44'1041 (0'0,44'1041] local-lis/les=74/75 n=5 ec=57/38 lis/c=72/57 les/c/f=73/58/0 sis=74) [1] r=0 lpr=74 pi=[57,74)/1 crt=44'1041 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:42:14 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 75 pg[9.e( v 44'1041 (0'0,44'1041] local-lis/les=74/75 n=6 ec=57/38 lis/c=72/57 les/c/f=73/58/0 sis=74) [1] r=0 lpr=74 pi=[57,74)/1 crt=44'1041 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:42:14 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 75 pg[9.6( v 44'1041 (0'0,44'1041] local-lis/les=74/75 n=6 ec=57/38 lis/c=72/57 les/c/f=73/58/0 sis=74) [1] r=0 lpr=74 pi=[57,74)/1 crt=44'1041 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:42:14 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 75 pg[9.1e( v 44'1041 (0'0,44'1041] local-lis/les=74/75 n=5 ec=57/38 lis/c=72/57 les/c/f=73/58/0 sis=74) [1] r=0 lpr=74 pi=[57,74)/1 crt=44'1041 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:42:14 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:42:14 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:42:14 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:42:14.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:42:14 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:14 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c64004050 fd 14 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:42:14 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:42:14.396+0000 7f3917aaf140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Feb  2 04:42:14 np0005604790 ceph-mgr[74785]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Feb  2 04:42:14 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'osd_support'
Feb  2 04:42:14 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:42:14.461+0000 7f3917aaf140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Feb  2 04:42:14 np0005604790 ceph-mgr[74785]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Feb  2 04:42:14 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'pg_autoscaler'
Feb  2 04:42:14 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:42:14.534+0000 7f3917aaf140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Feb  2 04:42:14 np0005604790 ceph-mgr[74785]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Feb  2 04:42:14 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'progress'
Feb  2 04:42:14 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:42:14.610+0000 7f3917aaf140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Feb  2 04:42:14 np0005604790 ceph-mgr[74785]: mgr[py] Module progress has missing NOTIFY_TYPES member
Feb  2 04:42:14 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'prometheus'
Feb  2 04:42:14 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 10.d scrub starts
Feb  2 04:42:14 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 10.d scrub ok
Feb  2 04:42:14 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-nfs-cephfs-compute-0-ooxkuo[97796]: [WARNING] 032/094214 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Feb  2 04:42:14 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:42:14.950+0000 7f3917aaf140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Feb  2 04:42:14 np0005604790 ceph-mgr[74785]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Feb  2 04:42:14 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'rbd_support'
Feb  2 04:42:15 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:42:15.044+0000 7f3917aaf140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Feb  2 04:42:15 np0005604790 ceph-mgr[74785]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Feb  2 04:42:15 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'restful'
Feb  2 04:42:15 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'rgw'
Feb  2 04:42:15 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:42:15.442+0000 7f3917aaf140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Feb  2 04:42:15 np0005604790 ceph-mgr[74785]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Feb  2 04:42:15 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'rook'
Feb  2 04:42:15 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 10.b scrub starts
Feb  2 04:42:15 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 10.b scrub ok
Feb  2 04:42:15 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:15 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c74003cc0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:42:15 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:42:15.961+0000 7f3917aaf140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Feb  2 04:42:15 np0005604790 ceph-mgr[74785]: mgr[py] Module rook has missing NOTIFY_TYPES member
Feb  2 04:42:15 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'selftest'
Feb  2 04:42:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:42:16.027+0000 7f3917aaf140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Feb  2 04:42:16 np0005604790 ceph-mgr[74785]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Feb  2 04:42:16 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'snap_schedule'
Feb  2 04:42:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:16 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c70004830 fd 14 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:42:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:42:16.102+0000 7f3917aaf140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Feb  2 04:42:16 np0005604790 ceph-mgr[74785]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Feb  2 04:42:16 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'stats'
Feb  2 04:42:16 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'status'
Feb  2 04:42:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:42:16.241+0000 7f3917aaf140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Feb  2 04:42:16 np0005604790 ceph-mgr[74785]: mgr[py] Module status has missing NOTIFY_TYPES member
Feb  2 04:42:16 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'telegraf'
Feb  2 04:42:16 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e75 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 04:42:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:42:16.305+0000 7f3917aaf140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Feb  2 04:42:16 np0005604790 ceph-mgr[74785]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Feb  2 04:42:16 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'telemetry'
Feb  2 04:42:16 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:42:16 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.002000053s ======
Feb  2 04:42:16 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:42:16.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Feb  2 04:42:16 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:42:16 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:42:16 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:42:16.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:42:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:16 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c58003db0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:42:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:42:16.450+0000 7f3917aaf140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Feb  2 04:42:16 np0005604790 ceph-mgr[74785]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Feb  2 04:42:16 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'test_orchestrator'
Feb  2 04:42:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:42:16.650+0000 7f3917aaf140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Feb  2 04:42:16 np0005604790 ceph-mgr[74785]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Feb  2 04:42:16 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'volumes'
Feb  2 04:42:16 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 12.d scrub starts
Feb  2 04:42:16 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 12.d scrub ok
Feb  2 04:42:16 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.teascl restarted
Feb  2 04:42:16 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.teascl started
Feb  2 04:42:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:42:16.900+0000 7f3917aaf140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Feb  2 04:42:16 np0005604790 ceph-mgr[74785]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Feb  2 04:42:16 np0005604790 ceph-mgr[74785]: mgr[py] Loading python module 'zabbix'
Feb  2 04:42:16 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.gzlyac restarted
Feb  2 04:42:16 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.gzlyac started
Feb  2 04:42:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:42:16.965+0000 7f3917aaf140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Feb  2 04:42:16 np0005604790 ceph-mgr[74785]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Feb  2 04:42:16 np0005604790 ceph-mon[74489]: log_channel(cluster) log [INF] : Active manager daemon compute-0.djvyfo restarted
Feb  2 04:42:16 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Feb  2 04:42:16 np0005604790 ceph-mon[74489]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.djvyfo
Feb  2 04:42:16 np0005604790 ceph-mgr[74785]: ms_deliver_dispatch: unhandled message 0x56324984b860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Feb  2 04:42:16 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: mgr handle_mgr_map Activating!
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: mgr handle_mgr_map I am now activating
Feb  2 04:42:17 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Feb  2 04:42:17 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : mgrmap e26: compute-0.djvyfo(active, starting, since 0.0413396s), standbys: compute-2.gzlyac, compute-1.teascl
Feb  2 04:42:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Feb  2 04:42:17 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Feb  2 04:42:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Feb  2 04:42:17 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Feb  2 04:42:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Feb  2 04:42:17 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Feb  2 04:42:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.clmmzw"} v 0)
Feb  2 04:42:17 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.clmmzw"}]: dispatch
Feb  2 04:42:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).mds e11 all = 0
Feb  2 04:42:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-2.vvohrf"} v 0)
Feb  2 04:42:17 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.vvohrf"}]: dispatch
Feb  2 04:42:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).mds e11 all = 0
Feb  2 04:42:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-1.khfsen"} v 0)
Feb  2 04:42:17 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.khfsen"}]: dispatch
Feb  2 04:42:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).mds e11 all = 0
Feb  2 04:42:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.djvyfo", "id": "compute-0.djvyfo"} v 0)
Feb  2 04:42:17 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "mgr metadata", "who": "compute-0.djvyfo", "id": "compute-0.djvyfo"}]: dispatch
Feb  2 04:42:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.gzlyac", "id": "compute-2.gzlyac"} v 0)
Feb  2 04:42:17 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "mgr metadata", "who": "compute-2.gzlyac", "id": "compute-2.gzlyac"}]: dispatch
Feb  2 04:42:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.teascl", "id": "compute-1.teascl"} v 0)
Feb  2 04:42:17 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "mgr metadata", "who": "compute-1.teascl", "id": "compute-1.teascl"}]: dispatch
Feb  2 04:42:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb  2 04:42:17 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Feb  2 04:42:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb  2 04:42:17 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Feb  2 04:42:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb  2 04:42:17 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Feb  2 04:42:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata"} v 0)
Feb  2 04:42:17 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "mds metadata"}]: dispatch
Feb  2 04:42:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).mds e11 all = 1
Feb  2 04:42:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Feb  2 04:42:17 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd metadata"}]: dispatch
Feb  2 04:42:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata"} v 0)
Feb  2 04:42:17 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "mon metadata"}]: dispatch
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: mgr load Constructed class from module: balancer
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 04:42:17 np0005604790 ceph-mon[74489]: log_channel(cluster) log [INF] : Manager daemon compute-0.djvyfo is now available
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] Starting
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] Optimize plan auto_2026-02-02_09:42:17
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: mgr load Constructed class from module: cephadm
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: mgr load Constructed class from module: crash
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: mgr load Constructed class from module: dashboard
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO access_control] Loading user roles DB version=2
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO sso] Loading SSO DB version=1
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO root] server: ssl=no host=192.168.122.100 port=8443
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO root] Configured CherryPy, starting engine...
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: mgr load Constructed class from module: devicehealth
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [devicehealth INFO root] Starting
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: mgr load Constructed class from module: iostat
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: mgr load Constructed class from module: nfs
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: mgr load Constructed class from module: orchestrator
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: mgr load Constructed class from module: pg_autoscaler
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: mgr load Constructed class from module: progress
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [progress INFO root] Loading...
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [prometheus DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 04:42:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [progress INFO root] Loaded [<progress.module.GhostEvent object at 0x7f389de78100>, <progress.module.GhostEvent object at 0x7f38955ce970>, <progress.module.GhostEvent object at 0x7f38955ce9a0>, <progress.module.GhostEvent object at 0x7f38955ce9d0>, <progress.module.GhostEvent object at 0x7f38955cea00>, <progress.module.GhostEvent object at 0x7f38955cea30>, <progress.module.GhostEvent object at 0x7f38955cea60>, <progress.module.GhostEvent object at 0x7f38955cea90>, <progress.module.GhostEvent object at 0x7f38955ceac0>, <progress.module.GhostEvent object at 0x7f38955ceaf0>, <progress.module.GhostEvent object at 0x7f38955ceb20>, <progress.module.GhostEvent object at 0x7f38955ceb50>, <progress.module.GhostEvent object at 0x7f38955ceb80>, <progress.module.GhostEvent object at 0x7f38955cebb0>, <progress.module.GhostEvent object at 0x7f38955cebe0>, <progress.module.GhostEvent object at 0x7f38955cec10>, <progress.module.GhostEvent object at 0x7f38955cec40>, <progress.module.GhostEvent object at 0x7f38955cec70>, <progress.module.GhostEvent object at 0x7f38955ceca0>, <progress.module.GhostEvent object at 0x7f38955cecd0>, <progress.module.GhostEvent object at 0x7f38955ced00>, <progress.module.GhostEvent object at 0x7f38955ced30>, <progress.module.GhostEvent object at 0x7f38955ced60>, <progress.module.GhostEvent object at 0x7f38955ced90>, <progress.module.GhostEvent object at 0x7f38955cedc0>] historic events
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [progress INFO root] Loaded OSDMap, ready.
Feb  2 04:42:17 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: mgr load Constructed class from module: prometheus
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [prometheus INFO root] server_addr: :: server_port: 9283
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [prometheus INFO root] Cache enabled
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [prometheus INFO root] starting metric collection thread
Feb  2 04:42:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 04:42:17 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [prometheus INFO root] Starting engine...
Feb  2 04:42:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: [02/Feb/2026:09:42:17] ENGINE Bus STARTING
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.error] [02/Feb/2026:09:42:17] ENGINE Bus STARTING
Feb  2 04:42:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: CherryPy Checker:
Feb  2 04:42:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: The Application mounted at '' has an empty config.
Feb  2 04:42:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] recovery thread starting
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] starting setup
Feb  2 04:42:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.djvyfo/mirror_snapshot_schedule"} v 0)
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: mgr load Constructed class from module: rbd_support
Feb  2 04:42:17 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.djvyfo/mirror_snapshot_schedule"}]: dispatch
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: mgr load Constructed class from module: restful
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [restful INFO root] server_addr: :: server_port: 8003
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: mgr load Constructed class from module: status
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: mgr load Constructed class from module: telemetry
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [restful WARNING root] server not running: no certificate configured
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] PerfHandler: starting
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_task_task: vms, start_after=
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_task_task: volumes, start_after=
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_task_task: backups, start_after=
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_task_task: images, start_after=
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] TaskHandler: starting
Feb  2 04:42:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.djvyfo/trash_purge_schedule"} v 0)
Feb  2 04:42:17 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.djvyfo/trash_purge_schedule"}]: dispatch
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: mgr load Constructed class from module: volumes
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] setup complete
Feb  2 04:42:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:42:17.273+0000 7f3885b07640 -1 client.0 error registering admin socket command: (17) File exists
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: client.0 error registering admin socket command: (17) File exists
Feb  2 04:42:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:42:17.277+0000 7f387c174640 -1 client.0 error registering admin socket command: (17) File exists
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: client.0 error registering admin socket command: (17) File exists
Feb  2 04:42:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:42:17.277+0000 7f387c174640 -1 client.0 error registering admin socket command: (17) File exists
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: client.0 error registering admin socket command: (17) File exists
Feb  2 04:42:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:42:17.277+0000 7f387c174640 -1 client.0 error registering admin socket command: (17) File exists
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: client.0 error registering admin socket command: (17) File exists
Feb  2 04:42:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:42:17.277+0000 7f387c174640 -1 client.0 error registering admin socket command: (17) File exists
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: client.0 error registering admin socket command: (17) File exists
Feb  2 04:42:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T09:42:17.277+0000 7f387c174640 -1 client.0 error registering admin socket command: (17) File exists
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: client.0 error registering admin socket command: (17) File exists
Feb  2 04:42:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: [02/Feb/2026:09:42:17] ENGINE Serving on http://:::9283
Feb  2 04:42:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: [02/Feb/2026:09:42:17] ENGINE Bus STARTED
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.error] [02/Feb/2026:09:42:17] ENGINE Serving on http://:::9283
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.error] [02/Feb/2026:09:42:17] ENGINE Bus STARTED
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [prometheus INFO root] Engine started.
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFS -> /api/cephfs
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsUi -> /ui-api/cephfs
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolume -> /api/cephfs/subvolume
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeGroups -> /api/cephfs/subvolume/group
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeSnapshots -> /api/cephfs/subvolume/snapshot
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsSnapshotClone -> /api/cephfs/subvolume/snapshot/clone
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSnapshotSchedule -> /api/cephfs/snapshot/schedule
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiUi -> /ui-api/iscsi
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Iscsi -> /api/iscsi
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiTarget -> /api/iscsi/target
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaCluster -> /api/nfs-ganesha/cluster
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaExports -> /api/nfs-ganesha/export
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaUi -> /ui-api/nfs-ganesha
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Orchestrator -> /ui-api/orchestrator
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Service -> /api/service
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroring -> /api/block/mirroring
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringSummary -> /api/block/mirroring/summary
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolMode -> /api/block/mirroring/pool
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolBootstrap -> /api/block/mirroring/pool/{pool_name}/bootstrap
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolPeer -> /api/block/mirroring/pool/{pool_name}/peer
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringStatus -> /ui-api/block/mirroring
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Pool -> /api/pool
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PoolUi -> /ui-api/pool
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RBDPool -> /api/pool
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rbd -> /api/block/image
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdStatus -> /ui-api/block/rbd
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdSnapshot -> /api/block/image/{image_spec}/snap
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdTrash -> /api/block/image/trash
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdNamespace -> /api/block/pool/{pool_name}/namespace
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rgw -> /ui-api/rgw
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteStatus -> /ui-api/rgw/multisite
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteController -> /api/rgw/multisite
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwDaemon -> /api/rgw/daemon
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwSite -> /api/rgw/site
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucket -> /api/rgw/bucket
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucketUi -> /ui-api/rgw/bucket
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwUser -> /api/rgw/user
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClass -> /api/rgw/roles
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClassMetadata -> /ui-api/rgw/roles
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwRealm -> /api/rgw/realm
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZonegroup -> /api/rgw/zonegroup
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZone -> /api/rgw/zone
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Auth -> /api/auth
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClass -> /api/cluster/user
Feb  2 04:42:17 np0005604790 systemd-logind[793]: New session 37 of user ceph-admin.
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClassMetadata -> /ui-api/cluster/user
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Cluster -> /api/cluster
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterUpgrade -> /api/cluster/upgrade
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterConfiguration -> /api/cluster_conf
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRule -> /api/crush_rule
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRuleUi -> /ui-api/crush_rule
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Daemon -> /api/daemon
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Docs -> /docs
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfile -> /api/erasure_code_profile
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfileUi -> /ui-api/erasure_code_profile
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackController -> /api/feedback
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackApiController -> /api/feedback/api_key
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackUiController -> /ui-api/feedback/api_key
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FrontendLogging -> /ui-api/logging
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Grafana -> /api/grafana
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Host -> /api/host
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HostUi -> /ui-api/host
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Health -> /api/health
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HomeController -> /
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LangsController -> /ui-api/langs
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LoginController -> /ui-api/login
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Logs -> /api/logs
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrModules -> /api/mgr/module
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Monitor -> /api/monitor
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFGateway -> /api/nvmeof/gateway
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSpdk -> /api/nvmeof/spdk
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSubsystem -> /api/nvmeof/subsystem
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFListener -> /api/nvmeof/subsystem/{nqn}/listener
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFNamespace -> /api/nvmeof/subsystem/{nqn}/namespace
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFHost -> /api/nvmeof/subsystem/{nqn}/host
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFConnection -> /api/nvmeof/subsystem/{nqn}/connection
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFTcpUI -> /ui-api/nvmeof
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Osd -> /api/osd
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdUi -> /ui-api/osd
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdFlagsController -> /api/osd/flags
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MdsPerfCounter -> /api/perf_counters/mds
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MonPerfCounter -> /api/perf_counters/mon
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdPerfCounter -> /api/perf_counters/osd
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwPerfCounter -> /api/perf_counters/rgw
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirrorPerfCounter -> /api/perf_counters/rbd-mirror
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrPerfCounter -> /api/perf_counters/mgr
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: TcmuRunnerPerfCounter -> /api/perf_counters/tcmu-runner
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PerfCounters -> /api/perf_counters
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusReceiver -> /api/prometheus_receiver
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Prometheus -> /api/prometheus
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusNotifications -> /api/prometheus/notifications
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusSettings -> /ui-api/prometheus
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Role -> /api/role
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Scope -> /ui-api/scope
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Saml2 -> /auth/saml2
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Settings -> /api/settings
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: StandardSettings -> /ui-api/standard_settings
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Summary -> /api/summary
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Task -> /api/task
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Telemetry -> /api/telemetry
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: User -> /api/user
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserPasswordPolicy -> /api/user
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserChangePassword -> /api/user/{username}
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeatureTogglesEndpoint -> /api/feature_toggles
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MessageOfTheDay -> /ui-api/motd
Feb  2 04:42:17 np0005604790 systemd[1]: Started Session 37 of User ceph-admin.
Feb  2 04:42:17 np0005604790 ceph-mon[74489]: Active manager daemon compute-0.djvyfo restarted
Feb  2 04:42:17 np0005604790 ceph-mon[74489]: Activating manager daemon compute-0.djvyfo
Feb  2 04:42:17 np0005604790 ceph-mon[74489]: Manager daemon compute-0.djvyfo is now available
Feb  2 04:42:17 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:17 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.djvyfo/mirror_snapshot_schedule"}]: dispatch
Feb  2 04:42:17 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.djvyfo/trash_purge_schedule"}]: dispatch
Feb  2 04:42:17 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 12.5 deep-scrub starts
Feb  2 04:42:17 np0005604790 ceph-mgr[74785]: [dashboard INFO dashboard.module] Engine started.
Feb  2 04:42:17 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 12.5 deep-scrub ok
Feb  2 04:42:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:17 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c64004050 fd 14 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:42:18 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : mgrmap e27: compute-0.djvyfo(active, since 1.0671s), standbys: compute-2.gzlyac, compute-1.teascl
Feb  2 04:42:18 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v3: 353 pgs: 353 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Feb  2 04:42:18 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:18 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c74003cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:42:18 np0005604790 ceph-mgr[74785]: [cephadm INFO cherrypy.error] [02/Feb/2026:09:42:18] ENGINE Bus STARTING
Feb  2 04:42:18 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : [02/Feb/2026:09:42:18] ENGINE Bus STARTING
Feb  2 04:42:18 np0005604790 podman[100657]: 2026-02-02 09:42:18.267438547 +0000 UTC m=+0.075153296 container exec 79ef7165b184aa21ab9e464efe33891b6304e1ba848414549f299d1b301d6783 (image=quay.io/ceph/ceph:v19, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Feb  2 04:42:18 np0005604790 ceph-mgr[74785]: [cephadm INFO cherrypy.error] [02/Feb/2026:09:42:18] ENGINE Serving on https://192.168.122.100:7150
Feb  2 04:42:18 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : [02/Feb/2026:09:42:18] ENGINE Serving on https://192.168.122.100:7150
Feb  2 04:42:18 np0005604790 ceph-mgr[74785]: [cephadm INFO cherrypy.error] [02/Feb/2026:09:42:18] ENGINE Client ('192.168.122.100', 35430) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Feb  2 04:42:18 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : [02/Feb/2026:09:42:18] ENGINE Client ('192.168.122.100', 35430) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Feb  2 04:42:18 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:42:18 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:42:18 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:42:18.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:42:18 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:42:18 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:42:18 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:42:18.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:42:18 np0005604790 podman[100657]: 2026-02-02 09:42:18.361015765 +0000 UTC m=+0.168730504 container exec_died 79ef7165b184aa21ab9e464efe33891b6304e1ba848414549f299d1b301d6783 (image=quay.io/ceph/ceph:v19, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb  2 04:42:18 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:18 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c70004830 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:42:18 np0005604790 ceph-mgr[74785]: [cephadm INFO cherrypy.error] [02/Feb/2026:09:42:18] ENGINE Serving on http://192.168.122.100:8765
Feb  2 04:42:18 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : [02/Feb/2026:09:42:18] ENGINE Serving on http://192.168.122.100:8765
Feb  2 04:42:18 np0005604790 ceph-mgr[74785]: [cephadm INFO cherrypy.error] [02/Feb/2026:09:42:18] ENGINE Bus STARTED
Feb  2 04:42:18 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : [02/Feb/2026:09:42:18] ENGINE Bus STARTED
Feb  2 04:42:18 np0005604790 ceph-mon[74489]: [02/Feb/2026:09:42:18] ENGINE Bus STARTING
Feb  2 04:42:18 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 10.6 scrub starts
Feb  2 04:42:18 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 10.6 scrub ok
Feb  2 04:42:18 np0005604790 podman[100819]: 2026-02-02 09:42:18.906830281 +0000 UTC m=+0.057944697 container exec 19feecaa7fcd517b2bfb973fc8fcf623ad4b0956f16c53292d24fe12f4780190 (image=quay.io/ceph/haproxy:2.3, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-rgw-default-compute-0-avekxu)
Feb  2 04:42:18 np0005604790 podman[100819]: 2026-02-02 09:42:18.94201584 +0000 UTC m=+0.093130216 container exec_died 19feecaa7fcd517b2bfb973fc8fcf623ad4b0956f16c53292d24fe12f4780190 (image=quay.io/ceph/haproxy:2.3, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-rgw-default-compute-0-avekxu)
Feb  2 04:42:19 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v4: 353 pgs: 353 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Feb  2 04:42:19 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0)
Feb  2 04:42:19 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Feb  2 04:42:19 np0005604790 podman[100885]: 2026-02-02 09:42:19.109157211 +0000 UTC m=+0.055631406 container exec 20ba411300d77fc005ed895ddeeef7f002c6ec8f65727ba9d8e9213579ade944 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 04:42:19 np0005604790 podman[100885]: 2026-02-02 09:42:19.140722153 +0000 UTC m=+0.087196328 container exec_died 20ba411300d77fc005ed895ddeeef7f002c6ec8f65727ba9d8e9213579ade944 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 04:42:19 np0005604790 ceph-mgr[74785]: [devicehealth INFO root] Check health
Feb  2 04:42:19 np0005604790 podman[100971]: 2026-02-02 09:42:19.39313211 +0000 UTC m=+0.065714425 container exec 4dbbc2880b363c29701e75389ef46bb1b6a317f598aba4db2d598b0c88013bb0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:42:19 np0005604790 podman[100971]: 2026-02-02 09:42:19.430194289 +0000 UTC m=+0.102776544 container exec_died 4dbbc2880b363c29701e75389ef46bb1b6a317f598aba4db2d598b0c88013bb0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:42:19 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : mgrmap e28: compute-0.djvyfo(active, since 2s), standbys: compute-2.gzlyac, compute-1.teascl
Feb  2 04:42:19 np0005604790 ceph-mon[74489]: [02/Feb/2026:09:42:18] ENGINE Serving on https://192.168.122.100:7150
Feb  2 04:42:19 np0005604790 ceph-mon[74489]: [02/Feb/2026:09:42:18] ENGINE Client ('192.168.122.100', 35430) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Feb  2 04:42:19 np0005604790 ceph-mon[74489]: [02/Feb/2026:09:42:18] ENGINE Serving on http://192.168.122.100:8765
Feb  2 04:42:19 np0005604790 ceph-mon[74489]: [02/Feb/2026:09:42:18] ENGINE Bus STARTED
Feb  2 04:42:19 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Feb  2 04:42:19 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Feb  2 04:42:19 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:19 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Feb  2 04:42:19 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:19 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 12.0 scrub starts
Feb  2 04:42:19 np0005604790 podman[101056]: 2026-02-02 09:42:19.745314429 +0000 UTC m=+0.064386470 container exec 5f2bbf7994e4a92479e9d1b19dfd01c3876abee624a9d2019b778a31c38bb173 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-keepalived-nfs-cephfs-compute-0-pqolko, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.expose-services=, com.redhat.component=keepalived-container, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public, version=2.2.4, name=keepalived, description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git)
Feb  2 04:42:19 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 12.0 scrub ok
Feb  2 04:42:19 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Feb  2 04:42:19 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:19 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Feb  2 04:42:19 np0005604790 ceph-mon[74489]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Feb  2 04:42:19 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:42:19.772824) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb  2 04:42:19 np0005604790 ceph-mon[74489]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Feb  2 04:42:19 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770025339772861, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 854, "num_deletes": 251, "total_data_size": 2274275, "memory_usage": 2481728, "flush_reason": "Manual Compaction"}
Feb  2 04:42:19 np0005604790 ceph-mon[74489]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Feb  2 04:42:19 np0005604790 podman[101056]: 2026-02-02 09:42:19.784920336 +0000 UTC m=+0.103992327 container exec_died 5f2bbf7994e4a92479e9d1b19dfd01c3876abee624a9d2019b778a31c38bb173 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-keepalived-nfs-cephfs-compute-0-pqolko, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.component=keepalived-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.expose-services=, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, release=1793, version=2.2.4, build-date=2023-02-22T09:23:20, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, vcs-type=git, description=keepalived for Ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb  2 04:42:19 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:19 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c58003db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:42:19 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770025339813160, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 2243757, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7231, "largest_seqno": 8084, "table_properties": {"data_size": 2239090, "index_size": 2123, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 12500, "raw_average_key_size": 21, "raw_value_size": 2228756, "raw_average_value_size": 3803, "num_data_blocks": 92, "num_entries": 586, "num_filter_entries": 586, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770025321, "oldest_key_time": 1770025321, "file_creation_time": 1770025339, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "07840aea-639a-4cd3-a598-1774a042b57b", "db_session_id": "W2PO4QU95YGVZQBG6TZ2", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Feb  2 04:42:19 np0005604790 ceph-mon[74489]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 40392 microseconds, and 4329 cpu microseconds.
Feb  2 04:42:19 np0005604790 ceph-mon[74489]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 04:42:19 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:42:19.813212) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 2243757 bytes OK
Feb  2 04:42:19 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:42:19.813240) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Feb  2 04:42:19 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:42:19.827794) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Feb  2 04:42:19 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:42:19.827817) EVENT_LOG_v1 {"time_micros": 1770025339827810, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb  2 04:42:19 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:42:19.827841) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb  2 04:42:19 np0005604790 ceph-mon[74489]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 2269534, prev total WAL file size 2297630, number of live WAL files 2.
Feb  2 04:42:19 np0005604790 ceph-mon[74489]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 04:42:19 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:42:19.828624) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Feb  2 04:42:19 np0005604790 ceph-mon[74489]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb  2 04:42:19 np0005604790 ceph-mon[74489]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(2191KB)], [20(10MB)]
Feb  2 04:42:19 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770025339828758, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 13774773, "oldest_snapshot_seqno": -1}
Feb  2 04:42:19 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:20 np0005604790 ceph-mon[74489]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 3194 keys, 12428316 bytes, temperature: kUnknown
Feb  2 04:42:20 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770025340005473, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 12428316, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12402983, "index_size": 16264, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8005, "raw_key_size": 82641, "raw_average_key_size": 25, "raw_value_size": 12340043, "raw_average_value_size": 3863, "num_data_blocks": 707, "num_entries": 3194, "num_filter_entries": 3194, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770025063, "oldest_key_time": 0, "file_creation_time": 1770025339, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "07840aea-639a-4cd3-a598-1774a042b57b", "db_session_id": "W2PO4QU95YGVZQBG6TZ2", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Feb  2 04:42:20 np0005604790 ceph-mon[74489]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 04:42:20 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:42:20.005969) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 12428316 bytes
Feb  2 04:42:20 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:42:20.023768) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 77.9 rd, 70.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 11.0 +0.0 blob) out(11.9 +0.0 blob), read-write-amplify(11.7) write-amplify(5.5) OK, records in: 3726, records dropped: 532 output_compression: NoCompression
Feb  2 04:42:20 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:42:20.023828) EVENT_LOG_v1 {"time_micros": 1770025340023805, "job": 6, "event": "compaction_finished", "compaction_time_micros": 176858, "compaction_time_cpu_micros": 32894, "output_level": 6, "num_output_files": 1, "total_output_size": 12428316, "num_input_records": 3726, "num_output_records": 3194, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb  2 04:42:20 np0005604790 ceph-mon[74489]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 04:42:20 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770025340024358, "job": 6, "event": "table_file_deletion", "file_number": 22}
Feb  2 04:42:20 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Feb  2 04:42:20 np0005604790 ceph-mon[74489]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 04:42:20 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770025340026980, "job": 6, "event": "table_file_deletion", "file_number": 20}
Feb  2 04:42:20 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:42:19.828375) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 04:42:20 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:42:20.027104) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 04:42:20 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:42:20.027109) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 04:42:20 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:42:20.027112) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 04:42:20 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:42:20.027114) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 04:42:20 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:42:20.027116) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 04:42:20 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Feb  2 04:42:20 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Feb  2 04:42:20 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Feb  2 04:42:20 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:20 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c64004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:42:20 np0005604790 podman[101122]: 2026-02-02 09:42:20.260087617 +0000 UTC m=+0.076787330 container exec d55860d12598ad8f4ad20d7c290c8e7601e49b37966d3cbaf9293eade56ad034 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 04:42:20 np0005604790 podman[101122]: 2026-02-02 09:42:20.328955215 +0000 UTC m=+0.145654848 container exec_died d55860d12598ad8f4ad20d7c290c8e7601e49b37966d3cbaf9293eade56ad034 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 04:42:20 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:42:20 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:42:20 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:42:20.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:42:20 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:42:20 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:42:20 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:42:20.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:42:20 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:20 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c74003cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:42:20 np0005604790 podman[101198]: 2026-02-02 09:42:20.559180149 +0000 UTC m=+0.060407563 container exec 444445befc58784fb5be994b3d6a442e59c1152d326f283d9b64377a2bb5c634 (image=quay.io/ceph/grafana:10.4.0, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Feb  2 04:42:20 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:20 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:20 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:20 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:20 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Feb  2 04:42:20 np0005604790 podman[101198]: 2026-02-02 09:42:20.734718584 +0000 UTC m=+0.235945978 container exec_died 444445befc58784fb5be994b3d6a442e59c1152d326f283d9b64377a2bb5c634 (image=quay.io/ceph/grafana:10.4.0, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Feb  2 04:42:20 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 12.1f scrub starts
Feb  2 04:42:20 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 12.1f scrub ok
Feb  2 04:42:20 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Feb  2 04:42:20 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:20 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Feb  2 04:42:20 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:20 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Feb  2 04:42:20 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Feb  2 04:42:20 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Feb  2 04:42:20 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:20 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Feb  2 04:42:20 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:20 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Feb  2 04:42:20 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Feb  2 04:42:21 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v6: 353 pgs: 353 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Feb  2 04:42:21 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0)
Feb  2 04:42:21 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
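
[editor's note] Scattered through this window the active mgr walks default.rgw.log's pgp_num_actual up one PG at a time (8, 9, 10, 11, 12), and each 'finished' step is followed by a new osdmap epoch (e77 through e83). A small sketch, assuming audit lines shaped like the ones in this journal, that extracts the ramp so the stepping is easy to see:

import re

# Matches the "osd pool set ... pgp_num_actual" audit entries in this log.
CMD = re.compile(r'"pool": "(?P<pool>[^"]+)", "var": "pgp_num_actual", "val": "(?P<val>\d+)"')

def pgp_ramp(lines):
    """Yield (pool, val, state) for each pgp_num_actual step. state is
    'finished' or 'dispatch' as logged at the end of the audit line
    (handle_command echoes of the same JSON are counted as 'dispatch')."""
    for line in lines:
        m = CMD.search(line)
        if m:
            state = "finished" if "finished" in line else "dispatch"
            yield m["pool"], int(m["val"]), state

# Feeding this journal excerpt would yield, in order:
#   ('default.rgw.log', 8, 'finished'), ('default.rgw.log', 9, 'dispatch'), ...
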
Feb  2 04:42:21 np0005604790 podman[101293]: 2026-02-02 09:42:21.058696669 +0000 UTC m=+0.067295137 container exec 214561532da1ee185d9ab0bf03f5b2d46320266c11cddad43265e8842ed9d667 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 04:42:21 np0005604790 systemd-logind[793]: New session 38 of user zuul.
Feb  2 04:42:21 np0005604790 systemd[1]: Started Session 38 of User zuul.
Feb  2 04:42:21 np0005604790 podman[101293]: 2026-02-02 09:42:21.108909769 +0000 UTC m=+0.117508237 container exec_died 214561532da1ee185d9ab0bf03f5b2d46320266c11cddad43265e8842ed9d667 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 04:42:21 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 04:42:21 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:21 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 04:42:21 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:21 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e77 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 04:42:21 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:21 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:21 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Feb  2 04:42:21 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:21 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:21 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Feb  2 04:42:21 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Feb  2 04:42:21 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:21 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:21 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Feb  2 04:42:21 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Feb  2 04:42:21 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:21 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c70004830 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:42:21 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Feb  2 04:42:22 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:22 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c70004830 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:42:22 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Feb  2 04:42:22 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Feb  2 04:42:22 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:42:22 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:42:22 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:42:22.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:42:22 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:22 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c64004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:42:22 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:42:22 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:42:22 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:42:22.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:42:22 np0005604790 python3.9[101570]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 04:42:22 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Feb  2 04:42:22 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 04:42:22 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : mgrmap e29: compute-0.djvyfo(active, since 5s), standbys: compute-2.gzlyac, compute-1.teascl
Feb  2 04:42:22 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:22 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 04:42:22 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:22 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Feb  2 04:42:22 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Feb  2 04:42:22 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 04:42:22 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 04:42:22 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 04:42:22 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb  2 04:42:22 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Feb  2 04:42:22 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Feb  2 04:42:22 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Feb  2 04:42:22 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Feb  2 04:42:22 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Feb  2 04:42:22 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Feb  2 04:42:22 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 10.1a deep-scrub starts
Feb  2 04:42:22 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 10.1a deep-scrub ok
Feb  2 04:42:22 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Feb  2 04:42:22 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:22 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:22 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Feb  2 04:42:22 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb  2 04:42:23 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v8: 353 pgs: 353 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Feb  2 04:42:23 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0)
Feb  2 04:42:23 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Feb  2 04:42:23 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/d241d473-9fcb-5f74-b163-f1ca4454e7f1/config/ceph.conf
Feb  2 04:42:23 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/d241d473-9fcb-5f74-b163-f1ca4454e7f1/config/ceph.conf
Feb  2 04:42:23 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/d241d473-9fcb-5f74-b163-f1ca4454e7f1/config/ceph.conf
Feb  2 04:42:23 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/d241d473-9fcb-5f74-b163-f1ca4454e7f1/config/ceph.conf
Feb  2 04:42:23 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/d241d473-9fcb-5f74-b163-f1ca4454e7f1/config/ceph.conf
Feb  2 04:42:23 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/d241d473-9fcb-5f74-b163-f1ca4454e7f1/config/ceph.conf
Feb  2 04:42:23 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Feb  2 04:42:23 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Feb  2 04:42:23 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Feb  2 04:42:23 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Feb  2 04:42:23 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Feb  2 04:42:23 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
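
[editor's note] The cephadm serve loop is reconciling client files here: it asks the mon for 'config generate-minimal-conf' and 'auth get client.admin' (logged a few lines up), then pushes ceph.conf and the admin keyring to /etc/ceph and /var/lib/ceph/<fsid>/config on every managed host. A minimal sketch of reproducing the first step by hand, assuming the ceph CLI and admin credentials are present on the node:

import subprocess

# Ask the cluster for the same minimal ceph.conf that cephadm distributes
# to /etc/ceph/ceph.conf on each host.
minimal_conf = subprocess.run(
    ["ceph", "config", "generate-minimal-conf"],
    check=True, capture_output=True, text=True,
).stdout
print(minimal_conf)  # [global] section with fsid and mon_host
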
Feb  2 04:42:23 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Feb  2 04:42:23 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Feb  2 04:42:23 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Feb  2 04:42:23 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Feb  2 04:42:23 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Feb  2 04:42:23 np0005604790 ceph-mon[74489]: Updating compute-0:/etc/ceph/ceph.conf
Feb  2 04:42:23 np0005604790 ceph-mon[74489]: Updating compute-1:/etc/ceph/ceph.conf
Feb  2 04:42:23 np0005604790 ceph-mon[74489]: Updating compute-2:/etc/ceph/ceph.conf
Feb  2 04:42:23 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Feb  2 04:42:23 np0005604790 ceph-mon[74489]: Updating compute-1:/var/lib/ceph/d241d473-9fcb-5f74-b163-f1ca4454e7f1/config/ceph.conf
Feb  2 04:42:23 np0005604790 ceph-mon[74489]: Updating compute-2:/var/lib/ceph/d241d473-9fcb-5f74-b163-f1ca4454e7f1/config/ceph.conf
Feb  2 04:42:23 np0005604790 ceph-mon[74489]: Updating compute-0:/var/lib/ceph/d241d473-9fcb-5f74-b163-f1ca4454e7f1/config/ceph.conf
Feb  2 04:42:23 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Feb  2 04:42:23 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:23 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c74003cc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:42:23 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:23 : epoch 6980713d : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 04:42:24 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/d241d473-9fcb-5f74-b163-f1ca4454e7f1/config/ceph.client.admin.keyring
Feb  2 04:42:24 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/d241d473-9fcb-5f74-b163-f1ca4454e7f1/config/ceph.client.admin.keyring
Feb  2 04:42:24 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:24 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c58003db0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:42:24 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/d241d473-9fcb-5f74-b163-f1ca4454e7f1/config/ceph.client.admin.keyring
Feb  2 04:42:24 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/d241d473-9fcb-5f74-b163-f1ca4454e7f1/config/ceph.client.admin.keyring
Feb  2 04:42:24 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/d241d473-9fcb-5f74-b163-f1ca4454e7f1/config/ceph.client.admin.keyring
Feb  2 04:42:24 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/d241d473-9fcb-5f74-b163-f1ca4454e7f1/config/ceph.client.admin.keyring
Feb  2 04:42:24 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:42:24 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:42:24 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:42:24.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:42:24 np0005604790 python3.9[102505]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
pushd /var/tmp
curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
pushd repo-setup-main
python3 -m venv ./venv
PBR_VERSION=0.0.0 ./venv/bin/pip install ./
./venv/bin/repo-setup current-podified -b antelope
popd
rm -rf repo-setup-main
 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:42:24 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:24 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c70004830 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:42:24 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:42:24 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:42:24 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:42:24.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:42:24 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Feb  2 04:42:24 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:24 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Feb  2 04:42:24 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:24 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Feb  2 04:42:24 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:24 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Feb  2 04:42:24 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:24 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 12.1b scrub starts
Feb  2 04:42:24 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 12.1b scrub ok
Feb  2 04:42:24 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Feb  2 04:42:24 np0005604790 ceph-mon[74489]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Feb  2 04:42:24 np0005604790 ceph-mon[74489]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Feb  2 04:42:24 np0005604790 ceph-mon[74489]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Feb  2 04:42:24 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Feb  2 04:42:24 np0005604790 ceph-mon[74489]: Updating compute-2:/var/lib/ceph/d241d473-9fcb-5f74-b163-f1ca4454e7f1/config/ceph.client.admin.keyring
Feb  2 04:42:24 np0005604790 ceph-mon[74489]: Updating compute-1:/var/lib/ceph/d241d473-9fcb-5f74-b163-f1ca4454e7f1/config/ceph.client.admin.keyring
Feb  2 04:42:24 np0005604790 ceph-mon[74489]: Updating compute-0:/var/lib/ceph/d241d473-9fcb-5f74-b163-f1ca4454e7f1/config/ceph.client.admin.keyring
Feb  2 04:42:24 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:24 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:24 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:24 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:24 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Feb  2 04:42:24 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Feb  2 04:42:24 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 04:42:24 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:24 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 04:42:24 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:09:42:24] "GET /metrics HTTP/1.1" 200 46582 "" "Prometheus/2.51.0"
Feb  2 04:42:24 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:09:42:24] "GET /metrics HTTP/1.1" 200 46582 "" "Prometheus/2.51.0"
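
[editor's note] The GET /metrics hits above are Prometheus (v2.51.0, per the container lines earlier) scraping the active mgr's prometheus module. A quick sketch for checking the same endpoint by hand; the host and the module's default port 9283 are assumptions, since the log only shows the scraper's source address:

from urllib.request import urlopen

# Default port for the ceph-mgr prometheus module is 9283; adjust as needed.
URL = "http://192.168.122.100:9283/metrics"

with urlopen(URL, timeout=5) as resp:
    body = resp.read().decode()

# Print just the health-related metric families as a smoke test.
for line in body.splitlines():
    if line.startswith("ceph_health"):
        print(line)
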
Feb  2 04:42:24 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:24 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v11: 353 pgs: 353 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 0 B/s wr, 10 op/s
Feb  2 04:42:24 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0)
Feb  2 04:42:24 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Feb  2 04:42:24 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 04:42:24 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:24 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Feb  2 04:42:24 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:24 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 04:42:24 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb  2 04:42:24 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 04:42:24 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb  2 04:42:24 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 04:42:24 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 04:42:25 np0005604790 podman[102851]: 2026-02-02 09:42:25.492197169 +0000 UTC m=+0.066477155 container create b2adb83dee01b3ed8f2e28f52d032bf3a372f412aba29c306d15b04d228db98e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_lederberg, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:42:25 np0005604790 systemd[1]: Started libpod-conmon-b2adb83dee01b3ed8f2e28f52d032bf3a372f412aba29c306d15b04d228db98e.scope.
Feb  2 04:42:25 np0005604790 podman[102851]: 2026-02-02 09:42:25.453245009 +0000 UTC m=+0.027525085 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:42:25 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:42:25 np0005604790 podman[102851]: 2026-02-02 09:42:25.581135742 +0000 UTC m=+0.155415818 container init b2adb83dee01b3ed8f2e28f52d032bf3a372f412aba29c306d15b04d228db98e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_lederberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb  2 04:42:25 np0005604790 podman[102851]: 2026-02-02 09:42:25.592223908 +0000 UTC m=+0.166503924 container start b2adb83dee01b3ed8f2e28f52d032bf3a372f412aba29c306d15b04d228db98e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_lederberg, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb  2 04:42:25 np0005604790 podman[102851]: 2026-02-02 09:42:25.596802221 +0000 UTC m=+0.171082227 container attach b2adb83dee01b3ed8f2e28f52d032bf3a372f412aba29c306d15b04d228db98e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:42:25 np0005604790 musing_lederberg[102866]: 167 167
Feb  2 04:42:25 np0005604790 systemd[1]: libpod-b2adb83dee01b3ed8f2e28f52d032bf3a372f412aba29c306d15b04d228db98e.scope: Deactivated successfully.
Feb  2 04:42:25 np0005604790 podman[102851]: 2026-02-02 09:42:25.600199701 +0000 UTC m=+0.174479717 container died b2adb83dee01b3ed8f2e28f52d032bf3a372f412aba29c306d15b04d228db98e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_lederberg, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:42:25 np0005604790 systemd[1]: var-lib-containers-storage-overlay-6f4289d31542bbbd4b133a6c5fa48edcf7717a3b9d2b3a2cf5b9df7b2f2b57d8-merged.mount: Deactivated successfully.
Feb  2 04:42:25 np0005604790 podman[102851]: 2026-02-02 09:42:25.656017801 +0000 UTC m=+0.230297827 container remove b2adb83dee01b3ed8f2e28f52d032bf3a372f412aba29c306d15b04d228db98e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_lederberg, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb  2 04:42:25 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 12.16 scrub starts
Feb  2 04:42:25 np0005604790 systemd[1]: libpod-conmon-b2adb83dee01b3ed8f2e28f52d032bf3a372f412aba29c306d15b04d228db98e.scope: Deactivated successfully.
Feb  2 04:42:25 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 12.16 scrub ok
Feb  2 04:42:25 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:25 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:25 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Feb  2 04:42:25 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:25 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:25 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb  2 04:42:25 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Feb  2 04:42:25 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Feb  2 04:42:25 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Feb  2 04:42:25 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Feb  2 04:42:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 81 pg[9.1a( empty local-lis/les=0/0 n=0 ec=57/38 lis/c=57/57 les/c/f=58/58/0 sis=81) [1] r=0 lpr=81 pi=[57,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:42:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 81 pg[9.a( empty local-lis/les=0/0 n=0 ec=57/38 lis/c=57/57 les/c/f=58/58/0 sis=81) [1] r=0 lpr=81 pi=[57,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:42:25 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:25 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c50001090 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:42:25 np0005604790 podman[102892]: 2026-02-02 09:42:25.824077036 +0000 UTC m=+0.061350618 container create 5c187a13d2cf1ca6c174050ab718d6e2093b48629df98e09cf91743054996a32 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_germain, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Feb  2 04:42:25 np0005604790 systemd[1]: Started libpod-conmon-5c187a13d2cf1ca6c174050ab718d6e2093b48629df98e09cf91743054996a32.scope.
Feb  2 04:42:25 np0005604790 ceph-mon[74489]: log_channel(cluster) log [WRN] : Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Feb  2 04:42:25 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:42:25 np0005604790 podman[102892]: 2026-02-02 09:42:25.805350026 +0000 UTC m=+0.042623608 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:42:25 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e14f33321f37c0e6f5a51a19d15e6ccf1705d028900b5ec74f21c9c5f81eaf5c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 04:42:25 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e14f33321f37c0e6f5a51a19d15e6ccf1705d028900b5ec74f21c9c5f81eaf5c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:42:25 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e14f33321f37c0e6f5a51a19d15e6ccf1705d028900b5ec74f21c9c5f81eaf5c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:42:25 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e14f33321f37c0e6f5a51a19d15e6ccf1705d028900b5ec74f21c9c5f81eaf5c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 04:42:25 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e14f33321f37c0e6f5a51a19d15e6ccf1705d028900b5ec74f21c9c5f81eaf5c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 04:42:25 np0005604790 podman[102892]: 2026-02-02 09:42:25.933738263 +0000 UTC m=+0.171011865 container init 5c187a13d2cf1ca6c174050ab718d6e2093b48629df98e09cf91743054996a32 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_germain, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 04:42:25 np0005604790 podman[102892]: 2026-02-02 09:42:25.941037267 +0000 UTC m=+0.178310839 container start 5c187a13d2cf1ca6c174050ab718d6e2093b48629df98e09cf91743054996a32 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_germain, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 04:42:25 np0005604790 podman[102892]: 2026-02-02 09:42:25.956581432 +0000 UTC m=+0.193855014 container attach 5c187a13d2cf1ca6c174050ab718d6e2093b48629df98e09cf91743054996a32 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_germain, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2)
Feb  2 04:42:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:26 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c74003cc0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:42:26 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e81 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 04:42:26 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Feb  2 04:42:26 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Feb  2 04:42:26 np0005604790 hungry_germain[102910]: --> passed data devices: 0 physical, 1 LVM
Feb  2 04:42:26 np0005604790 hungry_germain[102910]: --> All data devices are unavailable
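
[editor's note] The musing_lederberg/hungry_germain containers are cephadm's short-lived ceph-volume probes; this run saw 0 physical and 1 LVM data device and rejected them all, so no new OSDs get provisioned. A sketch of asking the same question by hand through cephadm's bundled ceph-volume; the JSON field names are as I recall them from ceph-volume inventory output, so treat the keys as assumptions:

import json, subprocess

# Run ceph-volume's device inventory inside cephadm's container, roughly
# as the serve loop does, and list why each device is (un)available.
out = subprocess.run(
    ["cephadm", "ceph-volume", "--", "inventory", "--format", "json"],
    check=True, capture_output=True, text=True,
).stdout
for dev in json.loads(out):
    status = "available" if dev.get("available") else "rejected"
    print(dev.get("path"), status, dev.get("rejected_reasons", []))
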
Feb  2 04:42:26 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Feb  2 04:42:26 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 82 pg[9.1a( empty local-lis/les=0/0 n=0 ec=57/38 lis/c=57/57 les/c/f=58/58/0 sis=82) [1]/[0] r=-1 lpr=82 pi=[57,82)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:42:26 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 82 pg[9.1a( empty local-lis/les=0/0 n=0 ec=57/38 lis/c=57/57 les/c/f=58/58/0 sis=82) [1]/[0] r=-1 lpr=82 pi=[57,82)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  2 04:42:26 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 82 pg[9.a( empty local-lis/les=0/0 n=0 ec=57/38 lis/c=57/57 les/c/f=58/58/0 sis=82) [1]/[0] r=-1 lpr=82 pi=[57,82)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:42:26 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 82 pg[9.a( empty local-lis/les=0/0 n=0 ec=57/38 lis/c=57/57 les/c/f=58/58/0 sis=82) [1]/[0] r=-1 lpr=82 pi=[57,82)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
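
[editor's note] The osd.1 peering lines above show what the pgp_num bump does at PG level: the up set stays [1] but the acting set flips to [0], so osd.1's role drops from 0 (primary) to -1 and the PG goes remapped/Stray on this OSD. A tiny sketch of the role rule those lines encode:

def pg_role(osd_id: int, acting: list[int]) -> int:
    """Role of an OSD for a PG: its index in the acting set, or -1 if it
    is not acting (matching the 'role 0 -> -1' transition logged above)."""
    return acting.index(osd_id) if osd_id in acting else -1

assert pg_role(1, [1]) == 0    # before: acting [1], osd.1 is primary
assert pg_role(1, [0]) == -1   # after:  acting [0], osd.1 is a stray
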
Feb  2 04:42:26 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:42:26 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:42:26 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:42:26.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:42:26 np0005604790 systemd[1]: libpod-5c187a13d2cf1ca6c174050ab718d6e2093b48629df98e09cf91743054996a32.scope: Deactivated successfully.
Feb  2 04:42:26 np0005604790 podman[102892]: 2026-02-02 09:42:26.353528096 +0000 UTC m=+0.590801698 container died 5c187a13d2cf1ca6c174050ab718d6e2093b48629df98e09cf91743054996a32 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_germain, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Feb  2 04:42:26 np0005604790 systemd[1]: var-lib-containers-storage-overlay-e14f33321f37c0e6f5a51a19d15e6ccf1705d028900b5ec74f21c9c5f81eaf5c-merged.mount: Deactivated successfully.
Feb  2 04:42:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:26 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c74003cc0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:42:26 np0005604790 podman[102892]: 2026-02-02 09:42:26.397571011 +0000 UTC m=+0.634844583 container remove 5c187a13d2cf1ca6c174050ab718d6e2093b48629df98e09cf91743054996a32 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:42:26 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:42:26 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:42:26 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:42:26.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:42:26 np0005604790 systemd[1]: libpod-conmon-5c187a13d2cf1ca6c174050ab718d6e2093b48629df98e09cf91743054996a32.scope: Deactivated successfully.
Feb  2 04:42:26 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 12.14 scrub starts
Feb  2 04:42:26 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 12.14 scrub ok
Feb  2 04:42:26 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Feb  2 04:42:26 np0005604790 ceph-mon[74489]: Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Feb  2 04:42:26 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v14: 353 pgs: 353 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 0 B/s wr, 13 op/s
Feb  2 04:42:26 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0)
Feb  2 04:42:26 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Feb  2 04:42:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:26 : epoch 6980713d : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 04:42:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:26 : epoch 6980713d : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 04:42:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:26 : epoch 6980713d : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 04:42:27 np0005604790 podman[103028]: 2026-02-02 09:42:27.006357159 +0000 UTC m=+0.040897893 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:42:27 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Feb  2 04:42:27 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 12.1 deep-scrub starts
Feb  2 04:42:27 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 12.1 deep-scrub ok
Feb  2 04:42:27 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:27 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c70004830 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:42:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:28 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c50001090 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:42:28 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Feb  2 04:42:28 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Feb  2 04:42:28 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Feb  2 04:42:28 np0005604790 podman[103028]: 2026-02-02 09:42:28.228082493 +0000 UTC m=+1.262623187 container create 42619575b04c54ed81d5dbffef72118f9d394c7dd14d386249d979e27aa74932 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_khorana, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Feb  2 04:42:28 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Feb  2 04:42:28 np0005604790 systemd[1]: Started libpod-conmon-42619575b04c54ed81d5dbffef72118f9d394c7dd14d386249d979e27aa74932.scope.
Feb  2 04:42:28 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:42:28 np0005604790 podman[103028]: 2026-02-02 09:42:28.34600876 +0000 UTC m=+1.380549494 container init 42619575b04c54ed81d5dbffef72118f9d394c7dd14d386249d979e27aa74932 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_khorana, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 04:42:28 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:42:28 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:42:28 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:42:28.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:42:28 np0005604790 podman[103028]: 2026-02-02 09:42:28.356823589 +0000 UTC m=+1.391364243 container start 42619575b04c54ed81d5dbffef72118f9d394c7dd14d386249d979e27aa74932 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_khorana, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 04:42:28 np0005604790 podman[103028]: 2026-02-02 09:42:28.361602936 +0000 UTC m=+1.396143630 container attach 42619575b04c54ed81d5dbffef72118f9d394c7dd14d386249d979e27aa74932 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_khorana, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb  2 04:42:28 np0005604790 xenodochial_khorana[103046]: 167 167
Feb  2 04:42:28 np0005604790 systemd[1]: libpod-42619575b04c54ed81d5dbffef72118f9d394c7dd14d386249d979e27aa74932.scope: Deactivated successfully.
Feb  2 04:42:28 np0005604790 podman[103028]: 2026-02-02 09:42:28.363783954 +0000 UTC m=+1.398324628 container died 42619575b04c54ed81d5dbffef72118f9d394c7dd14d386249d979e27aa74932 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid)
Feb  2 04:42:28 np0005604790 systemd[1]: var-lib-containers-storage-overlay-ae7db8c4560a4ab87b9639fe89b440102aef55f8c827fff633e1048e02489159-merged.mount: Deactivated successfully.
Feb  2 04:42:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:28 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c54000d00 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:42:28 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:42:28 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:42:28 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:42:28.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 04:42:28 np0005604790 podman[103028]: 2026-02-02 09:42:28.438910099 +0000 UTC m=+1.473450783 container remove 42619575b04c54ed81d5dbffef72118f9d394c7dd14d386249d979e27aa74932 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_khorana, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True)
Feb  2 04:42:28 np0005604790 systemd[1]: libpod-conmon-42619575b04c54ed81d5dbffef72118f9d394c7dd14d386249d979e27aa74932.scope: Deactivated successfully.
Feb  2 04:42:28 np0005604790 podman[103071]: 2026-02-02 09:42:28.608443934 +0000 UTC m=+0.059979682 container create a7ba30141970c397b96b09d4bd43f11c9cf3b6084fb1e4fea39a0dc3dc68b5b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_perlman, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Feb  2 04:42:28 np0005604790 systemd[1]: Started libpod-conmon-a7ba30141970c397b96b09d4bd43f11c9cf3b6084fb1e4fea39a0dc3dc68b5b5.scope.
Feb  2 04:42:28 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Feb  2 04:42:28 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Feb  2 04:42:28 np0005604790 podman[103071]: 2026-02-02 09:42:28.581599887 +0000 UTC m=+0.033135675 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:42:28 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:42:28 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e44074a2b4917c9c9e5c2009ae33a78acfda12e134f52ac4f46f483f4603c36/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 04:42:28 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e44074a2b4917c9c9e5c2009ae33a78acfda12e134f52ac4f46f483f4603c36/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:42:28 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e44074a2b4917c9c9e5c2009ae33a78acfda12e134f52ac4f46f483f4603c36/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:42:28 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e44074a2b4917c9c9e5c2009ae33a78acfda12e134f52ac4f46f483f4603c36/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 04:42:28 np0005604790 podman[103071]: 2026-02-02 09:42:28.724874311 +0000 UTC m=+0.176410169 container init a7ba30141970c397b96b09d4bd43f11c9cf3b6084fb1e4fea39a0dc3dc68b5b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_perlman, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True)
Feb  2 04:42:28 np0005604790 podman[103071]: 2026-02-02 09:42:28.734250121 +0000 UTC m=+0.185785879 container start a7ba30141970c397b96b09d4bd43f11c9cf3b6084fb1e4fea39a0dc3dc68b5b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_perlman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb  2 04:42:28 np0005604790 podman[103071]: 2026-02-02 09:42:28.738820023 +0000 UTC m=+0.190355761 container attach a7ba30141970c397b96b09d4bd43f11c9cf3b6084fb1e4fea39a0dc3dc68b5b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_perlman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:42:28 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v16: 353 pgs: 1 active+clean+scrubbing, 2 remapped+peering, 2 peering, 348 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 133 B/s, 5 objects/s recovering
Feb  2 04:42:29 np0005604790 keen_perlman[103088]: {
Feb  2 04:42:29 np0005604790 keen_perlman[103088]:    "1": [
Feb  2 04:42:29 np0005604790 keen_perlman[103088]:        {
Feb  2 04:42:29 np0005604790 keen_perlman[103088]:            "devices": [
Feb  2 04:42:29 np0005604790 keen_perlman[103088]:                "/dev/loop3"
Feb  2 04:42:29 np0005604790 keen_perlman[103088]:            ],
Feb  2 04:42:29 np0005604790 keen_perlman[103088]:            "lv_name": "ceph_lv0",
Feb  2 04:42:29 np0005604790 keen_perlman[103088]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 04:42:29 np0005604790 keen_perlman[103088]:            "lv_size": "21470642176",
Feb  2 04:42:29 np0005604790 keen_perlman[103088]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d241d473-9fcb-5f74-b163-f1ca4454e7f1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fabfc705-a3af-416c-81a4-3fd4d777fb5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 04:42:29 np0005604790 keen_perlman[103088]:            "lv_uuid": "lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a",
Feb  2 04:42:29 np0005604790 keen_perlman[103088]:            "name": "ceph_lv0",
Feb  2 04:42:29 np0005604790 keen_perlman[103088]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 04:42:29 np0005604790 keen_perlman[103088]:            "tags": {
Feb  2 04:42:29 np0005604790 keen_perlman[103088]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 04:42:29 np0005604790 keen_perlman[103088]:                "ceph.block_uuid": "lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a",
Feb  2 04:42:29 np0005604790 keen_perlman[103088]:                "ceph.cephx_lockbox_secret": "",
Feb  2 04:42:29 np0005604790 keen_perlman[103088]:                "ceph.cluster_fsid": "d241d473-9fcb-5f74-b163-f1ca4454e7f1",
Feb  2 04:42:29 np0005604790 keen_perlman[103088]:                "ceph.cluster_name": "ceph",
Feb  2 04:42:29 np0005604790 keen_perlman[103088]:                "ceph.crush_device_class": "",
Feb  2 04:42:29 np0005604790 keen_perlman[103088]:                "ceph.encrypted": "0",
Feb  2 04:42:29 np0005604790 keen_perlman[103088]:                "ceph.osd_fsid": "fabfc705-a3af-416c-81a4-3fd4d777fb5f",
Feb  2 04:42:29 np0005604790 keen_perlman[103088]:                "ceph.osd_id": "1",
Feb  2 04:42:29 np0005604790 keen_perlman[103088]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 04:42:29 np0005604790 keen_perlman[103088]:                "ceph.type": "block",
Feb  2 04:42:29 np0005604790 keen_perlman[103088]:                "ceph.vdo": "0",
Feb  2 04:42:29 np0005604790 keen_perlman[103088]:                "ceph.with_tpm": "0"
Feb  2 04:42:29 np0005604790 keen_perlman[103088]:            },
Feb  2 04:42:29 np0005604790 keen_perlman[103088]:            "type": "block",
Feb  2 04:42:29 np0005604790 keen_perlman[103088]:            "vg_name": "ceph_vg0"
Feb  2 04:42:29 np0005604790 keen_perlman[103088]:        }
Feb  2 04:42:29 np0005604790 keen_perlman[103088]:    ]
Feb  2 04:42:29 np0005604790 keen_perlman[103088]: }
Feb  2 04:42:29 np0005604790 systemd[1]: libpod-a7ba30141970c397b96b09d4bd43f11c9cf3b6084fb1e4fea39a0dc3dc68b5b5.scope: Deactivated successfully.
Feb  2 04:42:29 np0005604790 podman[103071]: 2026-02-02 09:42:29.088884616 +0000 UTC m=+0.540420324 container died a7ba30141970c397b96b09d4bd43f11c9cf3b6084fb1e4fea39a0dc3dc68b5b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_perlman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Feb  2 04:42:29 np0005604790 systemd[1]: var-lib-containers-storage-overlay-3e44074a2b4917c9c9e5c2009ae33a78acfda12e134f52ac4f46f483f4603c36-merged.mount: Deactivated successfully.
Feb  2 04:42:29 np0005604790 podman[103071]: 2026-02-02 09:42:29.135727976 +0000 UTC m=+0.587263704 container remove a7ba30141970c397b96b09d4bd43f11c9cf3b6084fb1e4fea39a0dc3dc68b5b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_perlman, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:42:29 np0005604790 systemd[1]: libpod-conmon-a7ba30141970c397b96b09d4bd43f11c9cf3b6084fb1e4fea39a0dc3dc68b5b5.scope: Deactivated successfully.
Feb  2 04:42:29 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Feb  2 04:42:29 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Feb  2 04:42:29 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Feb  2 04:42:29 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Feb  2 04:42:29 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 84 pg[9.1a( v 44'1041 (0'0,44'1041] local-lis/les=0/0 n=5 ec=57/38 lis/c=82/57 les/c/f=83/58/0 sis=84) [1] r=0 lpr=84 pi=[57,84)/1 luod=0'0 crt=44'1041 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:42:29 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 84 pg[9.1a( v 44'1041 (0'0,44'1041] local-lis/les=0/0 n=5 ec=57/38 lis/c=82/57 les/c/f=83/58/0 sis=84) [1] r=0 lpr=84 pi=[57,84)/1 crt=44'1041 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:42:29 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 84 pg[9.a( v 44'1041 (0'0,44'1041] local-lis/les=0/0 n=6 ec=57/38 lis/c=82/57 les/c/f=83/58/0 sis=84) [1] r=0 lpr=84 pi=[57,84)/1 luod=0'0 crt=44'1041 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:42:29 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 84 pg[9.a( v 44'1041 (0'0,44'1041] local-lis/les=0/0 n=6 ec=57/38 lis/c=82/57 les/c/f=83/58/0 sis=84) [1] r=0 lpr=84 pi=[57,84)/1 crt=44'1041 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:42:29 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Feb  2 04:42:29 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Feb  2 04:42:29 np0005604790 podman[103209]: 2026-02-02 09:42:29.743359262 +0000 UTC m=+0.060793903 container create b90a77030ef3299126baef80a4e433659e3567c28623d524b096ba631f06e8fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_hamilton, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb  2 04:42:29 np0005604790 systemd[1]: Started libpod-conmon-b90a77030ef3299126baef80a4e433659e3567c28623d524b096ba631f06e8fd.scope.
Feb  2 04:42:29 np0005604790 podman[103209]: 2026-02-02 09:42:29.718307064 +0000 UTC m=+0.035741805 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:42:29 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:42:29 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:29 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c74003cc0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:42:29 np0005604790 podman[103209]: 2026-02-02 09:42:29.833446176 +0000 UTC m=+0.150880857 container init b90a77030ef3299126baef80a4e433659e3567c28623d524b096ba631f06e8fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_hamilton, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb  2 04:42:29 np0005604790 podman[103209]: 2026-02-02 09:42:29.842013955 +0000 UTC m=+0.159448586 container start b90a77030ef3299126baef80a4e433659e3567c28623d524b096ba631f06e8fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 04:42:29 np0005604790 podman[103209]: 2026-02-02 09:42:29.847762029 +0000 UTC m=+0.165196750 container attach b90a77030ef3299126baef80a4e433659e3567c28623d524b096ba631f06e8fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_hamilton, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 04:42:29 np0005604790 elegant_hamilton[103227]: 167 167
Feb  2 04:42:29 np0005604790 systemd[1]: libpod-b90a77030ef3299126baef80a4e433659e3567c28623d524b096ba631f06e8fd.scope: Deactivated successfully.
Feb  2 04:42:29 np0005604790 conmon[103227]: conmon b90a77030ef3299126ba <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b90a77030ef3299126baef80a4e433659e3567c28623d524b096ba631f06e8fd.scope/container/memory.events
Feb  2 04:42:29 np0005604790 podman[103209]: 2026-02-02 09:42:29.852106924 +0000 UTC m=+0.169541605 container died b90a77030ef3299126baef80a4e433659e3567c28623d524b096ba631f06e8fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:42:29 np0005604790 systemd[1]: var-lib-containers-storage-overlay-654f9540eeddcecbba07c91d3c014dac9c199b1aae26a7ccfa1e33bce686d253-merged.mount: Deactivated successfully.
Feb  2 04:42:29 np0005604790 podman[103209]: 2026-02-02 09:42:29.899247383 +0000 UTC m=+0.216682014 container remove b90a77030ef3299126baef80a4e433659e3567c28623d524b096ba631f06e8fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_hamilton, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:42:29 np0005604790 systemd[1]: libpod-conmon-b90a77030ef3299126baef80a4e433659e3567c28623d524b096ba631f06e8fd.scope: Deactivated successfully.
Feb  2 04:42:30 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:29 : epoch 6980713d : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Feb  2 04:42:30 np0005604790 podman[103251]: 2026-02-02 09:42:30.060625139 +0000 UTC m=+0.050126248 container create 309be487830c5db7e0663930bf82e7832467629497ad3832aa6ec95760993a05 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_lederberg, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 04:42:30 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:30 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c70004830 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:42:30 np0005604790 systemd[1]: Started libpod-conmon-309be487830c5db7e0663930bf82e7832467629497ad3832aa6ec95760993a05.scope.
Feb  2 04:42:30 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:42:30 np0005604790 podman[103251]: 2026-02-02 09:42:30.035815897 +0000 UTC m=+0.025317026 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:42:30 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d189209701564b25de9e75d298fff2bd147d1368db08ea8353dea2d65b7ff0f9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 04:42:30 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d189209701564b25de9e75d298fff2bd147d1368db08ea8353dea2d65b7ff0f9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:42:30 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d189209701564b25de9e75d298fff2bd147d1368db08ea8353dea2d65b7ff0f9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:42:30 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d189209701564b25de9e75d298fff2bd147d1368db08ea8353dea2d65b7ff0f9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 04:42:30 np0005604790 podman[103251]: 2026-02-02 09:42:30.15170197 +0000 UTC m=+0.141203079 container init 309be487830c5db7e0663930bf82e7832467629497ad3832aa6ec95760993a05 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 04:42:30 np0005604790 podman[103251]: 2026-02-02 09:42:30.159294043 +0000 UTC m=+0.148795132 container start 309be487830c5db7e0663930bf82e7832467629497ad3832aa6ec95760993a05 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_lederberg, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:42:30 np0005604790 podman[103251]: 2026-02-02 09:42:30.163270499 +0000 UTC m=+0.152771608 container attach 309be487830c5db7e0663930bf82e7832467629497ad3832aa6ec95760993a05 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_lederberg, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Feb  2 04:42:30 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Feb  2 04:42:30 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Feb  2 04:42:30 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Feb  2 04:42:30 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 85 pg[9.a( v 44'1041 (0'0,44'1041] local-lis/les=84/85 n=6 ec=57/38 lis/c=82/57 les/c/f=83/58/0 sis=84) [1] r=0 lpr=84 pi=[57,84)/1 crt=44'1041 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:42:30 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 85 pg[9.1a( v 44'1041 (0'0,44'1041] local-lis/les=84/85 n=5 ec=57/38 lis/c=82/57 les/c/f=83/58/0 sis=84) [1] r=0 lpr=84 pi=[57,84)/1 crt=44'1041 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:42:30 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:42:30 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:42:30 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:42:30.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 04:42:30 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:30 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c70004830 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:42:30 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:42:30 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:42:30 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:42:30.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:42:30 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 11.14 scrub starts
Feb  2 04:42:30 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 11.14 scrub ok
Feb  2 04:42:30 np0005604790 lvm[103342]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 04:42:30 np0005604790 lvm[103342]: VG ceph_vg0 finished
Feb  2 04:42:30 np0005604790 elegant_lederberg[103268]: {}
Feb  2 04:42:30 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v19: 353 pgs: 1 active+clean+scrubbing, 2 remapped+peering, 2 peering, 348 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 120 B/s, 5 objects/s recovering
Feb  2 04:42:30 np0005604790 systemd[1]: libpod-309be487830c5db7e0663930bf82e7832467629497ad3832aa6ec95760993a05.scope: Deactivated successfully.
Feb  2 04:42:30 np0005604790 systemd[1]: libpod-309be487830c5db7e0663930bf82e7832467629497ad3832aa6ec95760993a05.scope: Consumed 1.185s CPU time.
Feb  2 04:42:30 np0005604790 podman[103251]: 2026-02-02 09:42:30.92846334 +0000 UTC m=+0.917964469 container died 309be487830c5db7e0663930bf82e7832467629497ad3832aa6ec95760993a05 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 04:42:30 np0005604790 systemd[1]: var-lib-containers-storage-overlay-d189209701564b25de9e75d298fff2bd147d1368db08ea8353dea2d65b7ff0f9-merged.mount: Deactivated successfully.
Feb  2 04:42:30 np0005604790 podman[103251]: 2026-02-02 09:42:30.988462631 +0000 UTC m=+0.977963750 container remove 309be487830c5db7e0663930bf82e7832467629497ad3832aa6ec95760993a05 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_lederberg, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:42:31 np0005604790 systemd[1]: libpod-conmon-309be487830c5db7e0663930bf82e7832467629497ad3832aa6ec95760993a05.scope: Deactivated successfully.
Feb  2 04:42:31 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 04:42:31 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:31 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 04:42:31 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:31 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.prometheus}] v 0)
Feb  2 04:42:31 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:31 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 04:42:31 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:31 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:31 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:31 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (monmap changed)...
Feb  2 04:42:31 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (monmap changed)...
Feb  2 04:42:31 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Feb  2 04:42:31 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Feb  2 04:42:31 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Feb  2 04:42:31 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Feb  2 04:42:31 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 04:42:31 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 04:42:31 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Feb  2 04:42:31 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Feb  2 04:42:31 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 11.f scrub starts
Feb  2 04:42:31 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 11.f scrub ok
Feb  2 04:42:31 np0005604790 podman[103481]: 2026-02-02 09:42:31.770890432 +0000 UTC m=+0.042067664 container create 6a919876e09d6542dd96f6ebb06a79c04201d8d701091e2e98d64d606d13bf97 (image=quay.io/ceph/ceph:v19, name=happy_margulis, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 04:42:31 np0005604790 systemd[1]: Started libpod-conmon-6a919876e09d6542dd96f6ebb06a79c04201d8d701091e2e98d64d606d13bf97.scope.
Feb  2 04:42:31 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:31 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c54001820 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:42:31 np0005604790 podman[103481]: 2026-02-02 09:42:31.748258038 +0000 UTC m=+0.019435270 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:42:31 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:42:31 np0005604790 podman[103481]: 2026-02-02 09:42:31.871017834 +0000 UTC m=+0.142195096 container init 6a919876e09d6542dd96f6ebb06a79c04201d8d701091e2e98d64d606d13bf97 (image=quay.io/ceph/ceph:v19, name=happy_margulis, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:42:31 np0005604790 podman[103481]: 2026-02-02 09:42:31.878259657 +0000 UTC m=+0.149436929 container start 6a919876e09d6542dd96f6ebb06a79c04201d8d701091e2e98d64d606d13bf97 (image=quay.io/ceph/ceph:v19, name=happy_margulis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:42:31 np0005604790 happy_margulis[103498]: 167 167
Feb  2 04:42:31 np0005604790 systemd[1]: libpod-6a919876e09d6542dd96f6ebb06a79c04201d8d701091e2e98d64d606d13bf97.scope: Deactivated successfully.
Feb  2 04:42:31 np0005604790 podman[103481]: 2026-02-02 09:42:31.890533555 +0000 UTC m=+0.161710817 container attach 6a919876e09d6542dd96f6ebb06a79c04201d8d701091e2e98d64d606d13bf97 (image=quay.io/ceph/ceph:v19, name=happy_margulis, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS)
Feb  2 04:42:31 np0005604790 podman[103481]: 2026-02-02 09:42:31.891523751 +0000 UTC m=+0.162701003 container died 6a919876e09d6542dd96f6ebb06a79c04201d8d701091e2e98d64d606d13bf97 (image=quay.io/ceph/ceph:v19, name=happy_margulis, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Feb  2 04:42:31 np0005604790 systemd[1]: var-lib-containers-storage-overlay-2844d8e7cc8d2a5e786ff87e5e5d663f82dcacaeccdf16c8f55761cc3c68fc9d-merged.mount: Deactivated successfully.
Feb  2 04:42:32 np0005604790 podman[103481]: 2026-02-02 09:42:32.034073376 +0000 UTC m=+0.305250648 container remove 6a919876e09d6542dd96f6ebb06a79c04201d8d701091e2e98d64d606d13bf97 (image=quay.io/ceph/ceph:v19, name=happy_margulis, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Feb  2 04:42:32 np0005604790 systemd[1]: libpod-conmon-6a919876e09d6542dd96f6ebb06a79c04201d8d701091e2e98d64d606d13bf97.scope: Deactivated successfully.
Feb  2 04:42:32 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:32 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c74003cc0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:42:32 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 04:42:32 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:32 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 04:42:32 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:32 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.djvyfo (monmap changed)...
Feb  2 04:42:32 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.djvyfo (monmap changed)...
Feb  2 04:42:32 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Feb  2 04:42:32 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.djvyfo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Feb  2 04:42:32 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.djvyfo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Feb  2 04:42:32 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Feb  2 04:42:32 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "mgr services"}]: dispatch
Feb  2 04:42:32 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 04:42:32 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 04:42:32 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.djvyfo on compute-0
Feb  2 04:42:32 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.djvyfo on compute-0
Feb  2 04:42:32 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:32 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 04:42:32 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 04:42:32 np0005604790 ceph-mon[74489]: Reconfiguring mon.compute-0 (monmap changed)...
Feb  2 04:42:32 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Feb  2 04:42:32 np0005604790 ceph-mon[74489]: Reconfiguring daemon mon.compute-0 on compute-0
Feb  2 04:42:32 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:32 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:32 np0005604790 ceph-mon[74489]: Reconfiguring mgr.compute-0.djvyfo (monmap changed)...
Feb  2 04:42:32 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.djvyfo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Feb  2 04:42:32 np0005604790 ceph-mon[74489]: Reconfiguring daemon mgr.compute-0.djvyfo on compute-0
Feb  2 04:42:32 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:32 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:42:32 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:42:32 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:42:32.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:42:32 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:32 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c70004830 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:42:32 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:42:32 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:42:32 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:42:32.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:42:32 np0005604790 podman[103611]: 2026-02-02 09:42:32.639550764 +0000 UTC m=+0.055448960 container create b70e12b2b2c50131a39fac9234ab395c23ec35165623a1995aa3243a9bad2387 (image=quay.io/ceph/ceph:v19, name=wizardly_cartwright, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:42:32 np0005604790 systemd[1]: Started libpod-conmon-b70e12b2b2c50131a39fac9234ab395c23ec35165623a1995aa3243a9bad2387.scope.
Feb  2 04:42:32 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:42:32 np0005604790 podman[103611]: 2026-02-02 09:42:32.615082241 +0000 UTC m=+0.030980477 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Feb  2 04:42:32 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 11.4 deep-scrub starts
Feb  2 04:42:32 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 11.4 deep-scrub ok
Feb  2 04:42:32 np0005604790 podman[103611]: 2026-02-02 09:42:32.72143267 +0000 UTC m=+0.137330906 container init b70e12b2b2c50131a39fac9234ab395c23ec35165623a1995aa3243a9bad2387 (image=quay.io/ceph/ceph:v19, name=wizardly_cartwright, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:42:32 np0005604790 podman[103611]: 2026-02-02 09:42:32.731711164 +0000 UTC m=+0.147609360 container start b70e12b2b2c50131a39fac9234ab395c23ec35165623a1995aa3243a9bad2387 (image=quay.io/ceph/ceph:v19, name=wizardly_cartwright, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb  2 04:42:32 np0005604790 wizardly_cartwright[103626]: 167 167
Feb  2 04:42:32 np0005604790 podman[103611]: 2026-02-02 09:42:32.736374768 +0000 UTC m=+0.152272964 container attach b70e12b2b2c50131a39fac9234ab395c23ec35165623a1995aa3243a9bad2387 (image=quay.io/ceph/ceph:v19, name=wizardly_cartwright, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 04:42:32 np0005604790 systemd[1]: libpod-b70e12b2b2c50131a39fac9234ab395c23ec35165623a1995aa3243a9bad2387.scope: Deactivated successfully.
Feb  2 04:42:32 np0005604790 podman[103611]: 2026-02-02 09:42:32.737328134 +0000 UTC m=+0.153226330 container died b70e12b2b2c50131a39fac9234ab395c23ec35165623a1995aa3243a9bad2387 (image=quay.io/ceph/ceph:v19, name=wizardly_cartwright, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:42:32 np0005604790 systemd[1]: var-lib-containers-storage-overlay-07ef3af91f97f5823bda176afdecd31cb96391d2935f7c978d7d345d8949e2af-merged.mount: Deactivated successfully.
Feb  2 04:42:32 np0005604790 podman[103611]: 2026-02-02 09:42:32.775580985 +0000 UTC m=+0.191479181 container remove b70e12b2b2c50131a39fac9234ab395c23ec35165623a1995aa3243a9bad2387 (image=quay.io/ceph/ceph:v19, name=wizardly_cartwright, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb  2 04:42:32 np0005604790 systemd[1]: libpod-conmon-b70e12b2b2c50131a39fac9234ab395c23ec35165623a1995aa3243a9bad2387.scope: Deactivated successfully.
Feb  2 04:42:32 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 04:42:32 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:32 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 04:42:32 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:32 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Reconfiguring crash.compute-0 (monmap changed)...
Feb  2 04:42:32 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Reconfiguring crash.compute-0 (monmap changed)...
Feb  2 04:42:32 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Feb  2 04:42:32 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Feb  2 04:42:32 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 04:42:32 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 04:42:32 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.compute-0 on compute-0
Feb  2 04:42:32 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.compute-0 on compute-0
Feb  2 04:42:32 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v20: 353 pgs: 1 active+clean+scrubbing, 2 remapped+peering, 2 peering, 348 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 91 B/s, 3 objects/s recovering
Feb  2 04:42:32 np0005604790 systemd[1]: session-38.scope: Deactivated successfully.
Feb  2 04:42:32 np0005604790 systemd[1]: session-38.scope: Consumed 8.265s CPU time.
Feb  2 04:42:32 np0005604790 systemd-logind[793]: Session 38 logged out. Waiting for processes to exit.
Feb  2 04:42:32 np0005604790 systemd-logind[793]: Removed session 38.
Feb  2 04:42:33 np0005604790 podman[103711]: 2026-02-02 09:42:33.296846806 +0000 UTC m=+0.058801090 container create ee9fd86423fe38c7931606861ee35cebd5e742be38cc735dd97e6151fcd508bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_jemison, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:42:33 np0005604790 systemd[1]: Started libpod-conmon-ee9fd86423fe38c7931606861ee35cebd5e742be38cc735dd97e6151fcd508bb.scope.
Feb  2 04:42:33 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:42:33 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:33 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:33 np0005604790 ceph-mon[74489]: Reconfiguring crash.compute-0 (monmap changed)...
Feb  2 04:42:33 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Feb  2 04:42:33 np0005604790 ceph-mon[74489]: Reconfiguring daemon crash.compute-0 on compute-0
Feb  2 04:42:33 np0005604790 podman[103711]: 2026-02-02 09:42:33.271676835 +0000 UTC m=+0.033631179 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:42:33 np0005604790 podman[103711]: 2026-02-02 09:42:33.369901346 +0000 UTC m=+0.131855660 container init ee9fd86423fe38c7931606861ee35cebd5e742be38cc735dd97e6151fcd508bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb  2 04:42:33 np0005604790 podman[103711]: 2026-02-02 09:42:33.378890386 +0000 UTC m=+0.140844640 container start ee9fd86423fe38c7931606861ee35cebd5e742be38cc735dd97e6151fcd508bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_jemison, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 04:42:33 np0005604790 podman[103711]: 2026-02-02 09:42:33.383195531 +0000 UTC m=+0.145149875 container attach ee9fd86423fe38c7931606861ee35cebd5e742be38cc735dd97e6151fcd508bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_jemison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Feb  2 04:42:33 np0005604790 amazing_jemison[103728]: 167 167
Feb  2 04:42:33 np0005604790 systemd[1]: libpod-ee9fd86423fe38c7931606861ee35cebd5e742be38cc735dd97e6151fcd508bb.scope: Deactivated successfully.
Feb  2 04:42:33 np0005604790 podman[103711]: 2026-02-02 09:42:33.386355625 +0000 UTC m=+0.148309879 container died ee9fd86423fe38c7931606861ee35cebd5e742be38cc735dd97e6151fcd508bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_jemison, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Feb  2 04:42:33 np0005604790 systemd[1]: var-lib-containers-storage-overlay-489fec3ff5f6b30afb8eabe9be50d43dec71e88f1da747df8b8031f43fa11c37-merged.mount: Deactivated successfully.
Feb  2 04:42:33 np0005604790 podman[103711]: 2026-02-02 09:42:33.426831115 +0000 UTC m=+0.188785369 container remove ee9fd86423fe38c7931606861ee35cebd5e742be38cc735dd97e6151fcd508bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_jemison, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True)
Feb  2 04:42:33 np0005604790 systemd[1]: libpod-conmon-ee9fd86423fe38c7931606861ee35cebd5e742be38cc735dd97e6151fcd508bb.scope: Deactivated successfully.
Feb  2 04:42:33 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 04:42:33 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:33 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 04:42:33 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:33 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Reconfiguring osd.1 (monmap changed)...
Feb  2 04:42:33 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Reconfiguring osd.1 (monmap changed)...
Feb  2 04:42:33 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Feb  2 04:42:33 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Feb  2 04:42:33 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 04:42:33 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 04:42:33 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.1 on compute-0
Feb  2 04:42:33 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.1 on compute-0
Feb  2 04:42:33 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 8.1b deep-scrub starts
Feb  2 04:42:33 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 8.1b deep-scrub ok
Feb  2 04:42:33 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:33 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c70004830 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:42:33 np0005604790 podman[103813]: 2026-02-02 09:42:33.944322316 +0000 UTC m=+0.040363688 container create 832afa57103e145e0ee6a8d74a85580326594737439e464bf9bef445b4556759 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_hugle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 04:42:33 np0005604790 systemd[1]: Started libpod-conmon-832afa57103e145e0ee6a8d74a85580326594737439e464bf9bef445b4556759.scope.
Feb  2 04:42:34 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:42:34 np0005604790 podman[103813]: 2026-02-02 09:42:34.018588228 +0000 UTC m=+0.114629580 container init 832afa57103e145e0ee6a8d74a85580326594737439e464bf9bef445b4556759 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb  2 04:42:34 np0005604790 podman[103813]: 2026-02-02 09:42:33.92686575 +0000 UTC m=+0.022907122 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:42:34 np0005604790 podman[103813]: 2026-02-02 09:42:34.024038544 +0000 UTC m=+0.120079896 container start 832afa57103e145e0ee6a8d74a85580326594737439e464bf9bef445b4556759 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_hugle, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb  2 04:42:34 np0005604790 charming_hugle[103830]: 167 167
Feb  2 04:42:34 np0005604790 systemd[1]: libpod-832afa57103e145e0ee6a8d74a85580326594737439e464bf9bef445b4556759.scope: Deactivated successfully.
Feb  2 04:42:34 np0005604790 podman[103813]: 2026-02-02 09:42:34.027179197 +0000 UTC m=+0.123220549 container attach 832afa57103e145e0ee6a8d74a85580326594737439e464bf9bef445b4556759 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_hugle, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:42:34 np0005604790 podman[103813]: 2026-02-02 09:42:34.028106382 +0000 UTC m=+0.124147724 container died 832afa57103e145e0ee6a8d74a85580326594737439e464bf9bef445b4556759 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb  2 04:42:34 np0005604790 systemd[1]: var-lib-containers-storage-overlay-8b2b587b4b24098484d2f31b76a70a558ea852507742d7c5caf1ed1a4d0e7df4-merged.mount: Deactivated successfully.
Feb  2 04:42:34 np0005604790 podman[103813]: 2026-02-02 09:42:34.062694905 +0000 UTC m=+0.158736237 container remove 832afa57103e145e0ee6a8d74a85580326594737439e464bf9bef445b4556759 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_hugle, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:42:34 np0005604790 systemd[1]: libpod-conmon-832afa57103e145e0ee6a8d74a85580326594737439e464bf9bef445b4556759.scope: Deactivated successfully.
Feb  2 04:42:34 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:34 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c54001820 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:42:34 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 04:42:34 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:34 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 04:42:34 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:34 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Reconfiguring node-exporter.compute-0 (unknown last config time)...
Feb  2 04:42:34 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Reconfiguring node-exporter.compute-0 (unknown last config time)...
Feb  2 04:42:34 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Reconfiguring daemon node-exporter.compute-0 on compute-0
Feb  2 04:42:34 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Reconfiguring daemon node-exporter.compute-0 on compute-0
Feb  2 04:42:34 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:42:34 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:42:34 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:42:34.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:42:34 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:34 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c74003cc0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:42:34 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:42:34 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:42:34 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:42:34.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 04:42:34 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:34 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:34 np0005604790 ceph-mon[74489]: Reconfiguring osd.1 (monmap changed)...
Feb  2 04:42:34 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Feb  2 04:42:34 np0005604790 ceph-mon[74489]: Reconfiguring daemon osd.1 on compute-0
Feb  2 04:42:34 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:34 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:34 np0005604790 ceph-mon[74489]: Reconfiguring node-exporter.compute-0 (unknown last config time)...
Feb  2 04:42:34 np0005604790 ceph-mon[74489]: Reconfiguring daemon node-exporter.compute-0 on compute-0
Feb  2 04:42:34 np0005604790 systemd[1]: Stopping Ceph node-exporter.compute-0 for d241d473-9fcb-5f74-b163-f1ca4454e7f1...
Feb  2 04:42:34 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 11.7 deep-scrub starts
Feb  2 04:42:34 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 11.7 deep-scrub ok
Feb  2 04:42:34 np0005604790 podman[103952]: 2026-02-02 09:42:34.712444596 +0000 UTC m=+0.053635573 container died 20ba411300d77fc005ed895ddeeef7f002c6ec8f65727ba9d8e9213579ade944 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 04:42:34 np0005604790 systemd[1]: var-lib-containers-storage-overlay-08ca6510a9460a1e052ad07c489d79b8258b63c4ea4a96f9763769e3f473d4ed-merged.mount: Deactivated successfully.
Feb  2 04:42:34 np0005604790 podman[103952]: 2026-02-02 09:42:34.813878833 +0000 UTC m=+0.155069800 container remove 20ba411300d77fc005ed895ddeeef7f002c6ec8f65727ba9d8e9213579ade944 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 04:42:34 np0005604790 bash[103952]: ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0
Feb  2 04:42:34 np0005604790 systemd[1]: ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1@node-exporter.compute-0.service: Main process exited, code=exited, status=143/n/a
Feb  2 04:42:34 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:09:42:34] "GET /metrics HTTP/1.1" 200 48366 "" "Prometheus/2.51.0"
Feb  2 04:42:34 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:09:42:34] "GET /metrics HTTP/1.1" 200 48366 "" "Prometheus/2.51.0"
Feb  2 04:42:34 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v21: 353 pgs: 353 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 32 B/s, 1 objects/s recovering
Feb  2 04:42:34 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0)
Feb  2 04:42:34 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Feb  2 04:42:34 np0005604790 systemd[1]: ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1@node-exporter.compute-0.service: Failed with result 'exit-code'.
Feb  2 04:42:34 np0005604790 systemd[1]: Stopped Ceph node-exporter.compute-0 for d241d473-9fcb-5f74-b163-f1ca4454e7f1.
Feb  2 04:42:34 np0005604790 systemd[1]: ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1@node-exporter.compute-0.service: Consumed 2.238s CPU time.
Feb  2 04:42:34 np0005604790 systemd[1]: Starting Ceph node-exporter.compute-0 for d241d473-9fcb-5f74-b163-f1ca4454e7f1...
Feb  2 04:42:35 np0005604790 podman[104056]: 2026-02-02 09:42:35.094979495 +0000 UTC m=+0.030358382 container create 690ade5beb0aa03a99cf3b4b1da5291c57988034f4a9506f786da5a6fb824998 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 04:42:35 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f0364347fd3778f4fdbef6fb687e0de8642f7f1efe2433056c217f84d0ede74/merged/etc/node-exporter supports timestamps until 2038 (0x7fffffff)
Feb  2 04:42:35 np0005604790 podman[104056]: 2026-02-02 09:42:35.150765552 +0000 UTC m=+0.086144469 container init 690ade5beb0aa03a99cf3b4b1da5291c57988034f4a9506f786da5a6fb824998 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 04:42:35 np0005604790 podman[104056]: 2026-02-02 09:42:35.156413643 +0000 UTC m=+0.091792530 container start 690ade5beb0aa03a99cf3b4b1da5291c57988034f4a9506f786da5a6fb824998 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 04:42:35 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[104072]: ts=2026-02-02T09:42:35.161Z caller=node_exporter.go:192 level=info msg="Starting node_exporter" version="(version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b)"
Feb  2 04:42:35 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[104072]: ts=2026-02-02T09:42:35.161Z caller=node_exporter.go:193 level=info msg="Build context" build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)"
Feb  2 04:42:35 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[104072]: ts=2026-02-02T09:42:35.162Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Feb  2 04:42:35 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[104072]: ts=2026-02-02T09:42:35.162Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Feb  2 04:42:35 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[104072]: ts=2026-02-02T09:42:35.162Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Feb  2 04:42:35 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[104072]: ts=2026-02-02T09:42:35.162Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Feb  2 04:42:35 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[104072]: ts=2026-02-02T09:42:35.163Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Feb  2 04:42:35 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[104072]: ts=2026-02-02T09:42:35.163Z caller=node_exporter.go:117 level=info collector=arp
Feb  2 04:42:35 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[104072]: ts=2026-02-02T09:42:35.163Z caller=node_exporter.go:117 level=info collector=bcache
Feb  2 04:42:35 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[104072]: ts=2026-02-02T09:42:35.163Z caller=node_exporter.go:117 level=info collector=bonding
Feb  2 04:42:35 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[104072]: ts=2026-02-02T09:42:35.163Z caller=node_exporter.go:117 level=info collector=btrfs
Feb  2 04:42:35 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[104072]: ts=2026-02-02T09:42:35.163Z caller=node_exporter.go:117 level=info collector=conntrack
Feb  2 04:42:35 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[104072]: ts=2026-02-02T09:42:35.163Z caller=node_exporter.go:117 level=info collector=cpu
Feb  2 04:42:35 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[104072]: ts=2026-02-02T09:42:35.163Z caller=node_exporter.go:117 level=info collector=cpufreq
Feb  2 04:42:35 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[104072]: ts=2026-02-02T09:42:35.163Z caller=node_exporter.go:117 level=info collector=diskstats
Feb  2 04:42:35 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[104072]: ts=2026-02-02T09:42:35.163Z caller=node_exporter.go:117 level=info collector=dmi
Feb  2 04:42:35 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[104072]: ts=2026-02-02T09:42:35.163Z caller=node_exporter.go:117 level=info collector=edac
Feb  2 04:42:35 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[104072]: ts=2026-02-02T09:42:35.163Z caller=node_exporter.go:117 level=info collector=entropy
Feb  2 04:42:35 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[104072]: ts=2026-02-02T09:42:35.163Z caller=node_exporter.go:117 level=info collector=fibrechannel
Feb  2 04:42:35 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[104072]: ts=2026-02-02T09:42:35.163Z caller=node_exporter.go:117 level=info collector=filefd
Feb  2 04:42:35 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[104072]: ts=2026-02-02T09:42:35.163Z caller=node_exporter.go:117 level=info collector=filesystem
Feb  2 04:42:35 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[104072]: ts=2026-02-02T09:42:35.163Z caller=node_exporter.go:117 level=info collector=hwmon
Feb  2 04:42:35 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[104072]: ts=2026-02-02T09:42:35.163Z caller=node_exporter.go:117 level=info collector=infiniband
Feb  2 04:42:35 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[104072]: ts=2026-02-02T09:42:35.163Z caller=node_exporter.go:117 level=info collector=ipvs
Feb  2 04:42:35 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[104072]: ts=2026-02-02T09:42:35.163Z caller=node_exporter.go:117 level=info collector=loadavg
Feb  2 04:42:35 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[104072]: ts=2026-02-02T09:42:35.163Z caller=node_exporter.go:117 level=info collector=mdadm
Feb  2 04:42:35 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[104072]: ts=2026-02-02T09:42:35.163Z caller=node_exporter.go:117 level=info collector=meminfo
Feb  2 04:42:35 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[104072]: ts=2026-02-02T09:42:35.163Z caller=node_exporter.go:117 level=info collector=netclass
Feb  2 04:42:35 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[104072]: ts=2026-02-02T09:42:35.163Z caller=node_exporter.go:117 level=info collector=netdev
Feb  2 04:42:35 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[104072]: ts=2026-02-02T09:42:35.163Z caller=node_exporter.go:117 level=info collector=netstat
Feb  2 04:42:35 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[104072]: ts=2026-02-02T09:42:35.163Z caller=node_exporter.go:117 level=info collector=nfs
Feb  2 04:42:35 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[104072]: ts=2026-02-02T09:42:35.163Z caller=node_exporter.go:117 level=info collector=nfsd
Feb  2 04:42:35 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[104072]: ts=2026-02-02T09:42:35.163Z caller=node_exporter.go:117 level=info collector=nvme
Feb  2 04:42:35 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[104072]: ts=2026-02-02T09:42:35.163Z caller=node_exporter.go:117 level=info collector=os
Feb  2 04:42:35 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[104072]: ts=2026-02-02T09:42:35.163Z caller=node_exporter.go:117 level=info collector=powersupplyclass
Feb  2 04:42:35 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[104072]: ts=2026-02-02T09:42:35.163Z caller=node_exporter.go:117 level=info collector=pressure
Feb  2 04:42:35 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[104072]: ts=2026-02-02T09:42:35.163Z caller=node_exporter.go:117 level=info collector=rapl
Feb  2 04:42:35 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[104072]: ts=2026-02-02T09:42:35.163Z caller=node_exporter.go:117 level=info collector=schedstat
Feb  2 04:42:35 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[104072]: ts=2026-02-02T09:42:35.163Z caller=node_exporter.go:117 level=info collector=selinux
Feb  2 04:42:35 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[104072]: ts=2026-02-02T09:42:35.163Z caller=node_exporter.go:117 level=info collector=sockstat
Feb  2 04:42:35 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[104072]: ts=2026-02-02T09:42:35.163Z caller=node_exporter.go:117 level=info collector=softnet
Feb  2 04:42:35 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[104072]: ts=2026-02-02T09:42:35.163Z caller=node_exporter.go:117 level=info collector=stat
Feb  2 04:42:35 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[104072]: ts=2026-02-02T09:42:35.163Z caller=node_exporter.go:117 level=info collector=tapestats
Feb  2 04:42:35 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[104072]: ts=2026-02-02T09:42:35.163Z caller=node_exporter.go:117 level=info collector=textfile
Feb  2 04:42:35 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[104072]: ts=2026-02-02T09:42:35.163Z caller=node_exporter.go:117 level=info collector=thermal_zone
Feb  2 04:42:35 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[104072]: ts=2026-02-02T09:42:35.163Z caller=node_exporter.go:117 level=info collector=time
Feb  2 04:42:35 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[104072]: ts=2026-02-02T09:42:35.163Z caller=node_exporter.go:117 level=info collector=udp_queues
Feb  2 04:42:35 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[104072]: ts=2026-02-02T09:42:35.163Z caller=node_exporter.go:117 level=info collector=uname
Feb  2 04:42:35 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[104072]: ts=2026-02-02T09:42:35.163Z caller=node_exporter.go:117 level=info collector=vmstat
Feb  2 04:42:35 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[104072]: ts=2026-02-02T09:42:35.163Z caller=node_exporter.go:117 level=info collector=xfs
Feb  2 04:42:35 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[104072]: ts=2026-02-02T09:42:35.163Z caller=node_exporter.go:117 level=info collector=zfs
Feb  2 04:42:35 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[104072]: ts=2026-02-02T09:42:35.164Z caller=tls_config.go:274 level=info msg="Listening on" address=[::]:9100
Feb  2 04:42:35 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0[104072]: ts=2026-02-02T09:42:35.164Z caller=tls_config.go:277 level=info msg="TLS is disabled." http2=false address=[::]:9100
Feb  2 04:42:35 np0005604790 bash[104056]: 690ade5beb0aa03a99cf3b4b1da5291c57988034f4a9506f786da5a6fb824998
Feb  2 04:42:35 np0005604790 podman[104056]: 2026-02-02 09:42:35.08058303 +0000 UTC m=+0.015961937 image pull 72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e quay.io/prometheus/node-exporter:v1.7.0
Feb  2 04:42:35 np0005604790 systemd[1]: Started Ceph node-exporter.compute-0 for d241d473-9fcb-5f74-b163-f1ca4454e7f1.
Feb  2 04:42:35 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 04:42:35 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:35 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 04:42:35 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:35 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Reconfiguring alertmanager.compute-0 (dependencies changed)...
Feb  2 04:42:35 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Reconfiguring alertmanager.compute-0 (dependencies changed)...
Feb  2 04:42:35 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Reconfiguring daemon alertmanager.compute-0 on compute-0
Feb  2 04:42:35 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Reconfiguring daemon alertmanager.compute-0 on compute-0
Feb  2 04:42:35 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Feb  2 04:42:35 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Feb  2 04:42:35 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Feb  2 04:42:35 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Feb  2 04:42:35 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Feb  2 04:42:35 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:35 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:35 np0005604790 ceph-mon[74489]: Reconfiguring alertmanager.compute-0 (dependencies changed)...
Feb  2 04:42:35 np0005604790 ceph-mon[74489]: Reconfiguring daemon alertmanager.compute-0 on compute-0
Feb  2 04:42:35 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 11.1b deep-scrub starts
Feb  2 04:42:35 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 11.1b deep-scrub ok
Feb  2 04:42:35 np0005604790 podman[104148]: 2026-02-02 09:42:35.761732688 +0000 UTC m=+0.043752399 volume create f07495f3992928b33406e04b5aa00c5fcbf81bcb287e443a473966ef9825c8d5
Feb  2 04:42:35 np0005604790 podman[104148]: 2026-02-02 09:42:35.775120755 +0000 UTC m=+0.057140456 container create 1cb124155eaf0d9353ff9280663df370ff5da7688984ff0252fe4e34b8f5c217 (image=quay.io/prometheus/alertmanager:v0.25.0, name=infallible_mayer, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 04:42:35 np0005604790 systemd[1]: Started libpod-conmon-1cb124155eaf0d9353ff9280663df370ff5da7688984ff0252fe4e34b8f5c217.scope.
Feb  2 04:42:35 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:35 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c70004830 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:42:35 np0005604790 podman[104148]: 2026-02-02 09:42:35.739248348 +0000 UTC m=+0.021268079 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Feb  2 04:42:35 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:42:35 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fd7885aeb43e647d05ac1231e69976840f19b9b3f3c284a5f8f27a397ebdebb/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Feb  2 04:42:35 np0005604790 podman[104148]: 2026-02-02 09:42:35.872464803 +0000 UTC m=+0.154484514 container init 1cb124155eaf0d9353ff9280663df370ff5da7688984ff0252fe4e34b8f5c217 (image=quay.io/prometheus/alertmanager:v0.25.0, name=infallible_mayer, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 04:42:35 np0005604790 podman[104148]: 2026-02-02 09:42:35.882344907 +0000 UTC m=+0.164364618 container start 1cb124155eaf0d9353ff9280663df370ff5da7688984ff0252fe4e34b8f5c217 (image=quay.io/prometheus/alertmanager:v0.25.0, name=infallible_mayer, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 04:42:35 np0005604790 infallible_mayer[104165]: 65534 65534
Feb  2 04:42:35 np0005604790 systemd[1]: libpod-1cb124155eaf0d9353ff9280663df370ff5da7688984ff0252fe4e34b8f5c217.scope: Deactivated successfully.
Feb  2 04:42:35 np0005604790 podman[104148]: 2026-02-02 09:42:35.88809014 +0000 UTC m=+0.170109861 container attach 1cb124155eaf0d9353ff9280663df370ff5da7688984ff0252fe4e34b8f5c217 (image=quay.io/prometheus/alertmanager:v0.25.0, name=infallible_mayer, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 04:42:35 np0005604790 podman[104148]: 2026-02-02 09:42:35.88919199 +0000 UTC m=+0.171211731 container died 1cb124155eaf0d9353ff9280663df370ff5da7688984ff0252fe4e34b8f5c217 (image=quay.io/prometheus/alertmanager:v0.25.0, name=infallible_mayer, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 04:42:35 np0005604790 systemd[1]: var-lib-containers-storage-overlay-2fd7885aeb43e647d05ac1231e69976840f19b9b3f3c284a5f8f27a397ebdebb-merged.mount: Deactivated successfully.
Feb  2 04:42:35 np0005604790 podman[104148]: 2026-02-02 09:42:35.940225551 +0000 UTC m=+0.222245253 container remove 1cb124155eaf0d9353ff9280663df370ff5da7688984ff0252fe4e34b8f5c217 (image=quay.io/prometheus/alertmanager:v0.25.0, name=infallible_mayer, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 04:42:35 np0005604790 podman[104148]: 2026-02-02 09:42:35.943726945 +0000 UTC m=+0.225746656 volume remove f07495f3992928b33406e04b5aa00c5fcbf81bcb287e443a473966ef9825c8d5
Feb  2 04:42:35 np0005604790 systemd[1]: libpod-conmon-1cb124155eaf0d9353ff9280663df370ff5da7688984ff0252fe4e34b8f5c217.scope: Deactivated successfully.
Feb  2 04:42:36 np0005604790 podman[104185]: 2026-02-02 09:42:36.011304178 +0000 UTC m=+0.044734874 volume create 20b9148e91ae61dd849d5e4a3c353c7d339680fbaf263bda6ab21343bf3e7ced
Feb  2 04:42:36 np0005604790 podman[104185]: 2026-02-02 09:42:36.023086023 +0000 UTC m=+0.056516719 container create 94e626c08213037cf54c886cb688f89a2ed83263c2c59c8a97a1da20739d37a3 (image=quay.io/prometheus/alertmanager:v0.25.0, name=cranky_mcclintock, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 04:42:36 np0005604790 systemd[1]: Started libpod-conmon-94e626c08213037cf54c886cb688f89a2ed83263c2c59c8a97a1da20739d37a3.scope.
Feb  2 04:42:36 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:42:36 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e80a1445aac5109c5f79a43d21518d8fe14f4eb7cf0911385b79be9352cd79f/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Feb  2 04:42:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:36 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c50001f70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:42:36 np0005604790 podman[104185]: 2026-02-02 09:42:36.092173447 +0000 UTC m=+0.125604143 container init 94e626c08213037cf54c886cb688f89a2ed83263c2c59c8a97a1da20739d37a3 (image=quay.io/prometheus/alertmanager:v0.25.0, name=cranky_mcclintock, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 04:42:36 np0005604790 podman[104185]: 2026-02-02 09:42:35.997964152 +0000 UTC m=+0.031394868 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Feb  2 04:42:36 np0005604790 podman[104185]: 2026-02-02 09:42:36.097610662 +0000 UTC m=+0.131041398 container start 94e626c08213037cf54c886cb688f89a2ed83263c2c59c8a97a1da20739d37a3 (image=quay.io/prometheus/alertmanager:v0.25.0, name=cranky_mcclintock, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 04:42:36 np0005604790 cranky_mcclintock[104201]: 65534 65534
Feb  2 04:42:36 np0005604790 systemd[1]: libpod-94e626c08213037cf54c886cb688f89a2ed83263c2c59c8a97a1da20739d37a3.scope: Deactivated successfully.
Feb  2 04:42:36 np0005604790 podman[104185]: 2026-02-02 09:42:36.102159103 +0000 UTC m=+0.135589829 container attach 94e626c08213037cf54c886cb688f89a2ed83263c2c59c8a97a1da20739d37a3 (image=quay.io/prometheus/alertmanager:v0.25.0, name=cranky_mcclintock, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 04:42:36 np0005604790 podman[104185]: 2026-02-02 09:42:36.102700208 +0000 UTC m=+0.136130914 container died 94e626c08213037cf54c886cb688f89a2ed83263c2c59c8a97a1da20739d37a3 (image=quay.io/prometheus/alertmanager:v0.25.0, name=cranky_mcclintock, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 04:42:36 np0005604790 systemd[1]: var-lib-containers-storage-overlay-2e80a1445aac5109c5f79a43d21518d8fe14f4eb7cf0911385b79be9352cd79f-merged.mount: Deactivated successfully.
Feb  2 04:42:36 np0005604790 podman[104185]: 2026-02-02 09:42:36.140078465 +0000 UTC m=+0.173509161 container remove 94e626c08213037cf54c886cb688f89a2ed83263c2c59c8a97a1da20739d37a3 (image=quay.io/prometheus/alertmanager:v0.25.0, name=cranky_mcclintock, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 04:42:36 np0005604790 systemd[1]: libpod-conmon-94e626c08213037cf54c886cb688f89a2ed83263c2c59c8a97a1da20739d37a3.scope: Deactivated successfully.
Feb  2 04:42:36 np0005604790 podman[104185]: 2026-02-02 09:42:36.146959839 +0000 UTC m=+0.180390535 volume remove 20b9148e91ae61dd849d5e4a3c353c7d339680fbaf263bda6ab21343bf3e7ced
Feb  2 04:42:36 np0005604790 systemd[1]: Stopping Ceph alertmanager.compute-0 for d241d473-9fcb-5f74-b163-f1ca4454e7f1...
Feb  2 04:42:36 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 04:42:36 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:42:36 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:42:36 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:42:36.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:42:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:36 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c54001820 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:42:36 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:42:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[98607]: ts=2026-02-02T09:42:36.417Z caller=main.go:583 level=info msg="Received SIGTERM, exiting gracefully..."
Feb  2 04:42:36 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:42:36 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:42:36.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:42:36 np0005604790 podman[104252]: 2026-02-02 09:42:36.427050354 +0000 UTC m=+0.050659053 container died d55860d12598ad8f4ad20d7c290c8e7601e49b37966d3cbaf9293eade56ad034 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 04:42:36 np0005604790 systemd[1]: var-lib-containers-storage-overlay-6aaf21e619ca7c96d0cf4441b8035c0c2a94c0f3e43ac03ac7a6d3f919758f98-merged.mount: Deactivated successfully.
Feb  2 04:42:36 np0005604790 podman[104252]: 2026-02-02 09:42:36.4703742 +0000 UTC m=+0.093982819 container remove d55860d12598ad8f4ad20d7c290c8e7601e49b37966d3cbaf9293eade56ad034 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 04:42:36 np0005604790 podman[104252]: 2026-02-02 09:42:36.475461346 +0000 UTC m=+0.099070005 volume remove 9b860f62a5bdfb20ba246fb3183f32e7905709367584801b9a37e9808e7953bd
Feb  2 04:42:36 np0005604790 bash[104252]: ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0
Feb  2 04:42:36 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Feb  2 04:42:36 np0005604790 systemd[1]: ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1@alertmanager.compute-0.service: Deactivated successfully.
Feb  2 04:42:36 np0005604790 systemd[1]: Stopped Ceph alertmanager.compute-0 for d241d473-9fcb-5f74-b163-f1ca4454e7f1.
Feb  2 04:42:36 np0005604790 systemd[1]: ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1@alertmanager.compute-0.service: Consumed 1.194s CPU time.
Feb  2 04:42:36 np0005604790 systemd[1]: Starting Ceph alertmanager.compute-0 for d241d473-9fcb-5f74-b163-f1ca4454e7f1...
Feb  2 04:42:36 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 8.4 scrub starts
Feb  2 04:42:36 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 8.4 scrub ok
Feb  2 04:42:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-nfs-cephfs-compute-0-ooxkuo[97796]: [WARNING] 032/094236 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Feb  2 04:42:36 np0005604790 podman[104351]: 2026-02-02 09:42:36.873046176 +0000 UTC m=+0.057644479 volume create 9e10e6e0bb4ee775fe1284be9a1e34090bd6a4a9047293c2c0995635c9323b02
Feb  2 04:42:36 np0005604790 podman[104351]: 2026-02-02 09:42:36.892785913 +0000 UTC m=+0.077384176 container create 63a3970896ab11bd3cbece1b971a05452b0bd8a5b643f0eac52d5b3639ab19c4 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 04:42:36 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v23: 353 pgs: 353 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 28 B/s, 1 objects/s recovering
Feb  2 04:42:36 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0)
Feb  2 04:42:36 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Feb  2 04:42:36 np0005604790 podman[104351]: 2026-02-02 09:42:36.839687616 +0000 UTC m=+0.024285979 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Feb  2 04:42:36 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f9e19a8220a0bc02e8b43787fec89d8a49b4998fecb8445945875c198ac4633/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Feb  2 04:42:36 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f9e19a8220a0bc02e8b43787fec89d8a49b4998fecb8445945875c198ac4633/merged/etc/alertmanager supports timestamps until 2038 (0x7fffffff)
Feb  2 04:42:36 np0005604790 podman[104351]: 2026-02-02 09:42:36.990421079 +0000 UTC m=+0.175019372 container init 63a3970896ab11bd3cbece1b971a05452b0bd8a5b643f0eac52d5b3639ab19c4 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 04:42:36 np0005604790 podman[104351]: 2026-02-02 09:42:36.996756298 +0000 UTC m=+0.181354551 container start 63a3970896ab11bd3cbece1b971a05452b0bd8a5b643f0eac52d5b3639ab19c4 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 04:42:37 np0005604790 bash[104351]: 63a3970896ab11bd3cbece1b971a05452b0bd8a5b643f0eac52d5b3639ab19c4
Feb  2 04:42:37 np0005604790 systemd[1]: Started Ceph alertmanager.compute-0 for d241d473-9fcb-5f74-b163-f1ca4454e7f1.
Feb  2 04:42:37 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:42:37.027Z caller=main.go:240 level=info msg="Starting Alertmanager" version="(version=0.25.0, branch=HEAD, revision=258fab7cdd551f2cf251ed0348f0ad7289aee789)"
Feb  2 04:42:37 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:42:37.027Z caller=main.go:241 level=info build_context="(go=go1.19.4, user=root@abe866dd5717, date=20221222-14:51:36)"
Feb  2 04:42:37 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:42:37.034Z caller=cluster.go:185 level=info component=cluster msg="setting advertise address explicitly" addr=192.168.122.100 port=9094
Feb  2 04:42:37 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:42:37.036Z caller=cluster.go:681 level=info component=cluster msg="Waiting for gossip to settle..." interval=2s
Feb  2 04:42:37 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 04:42:37 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:37 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 04:42:37 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:42:37.083Z caller=coordinator.go:113 level=info component=configuration msg="Loading configuration file" file=/etc/alertmanager/alertmanager.yml
Feb  2 04:42:37 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:42:37.084Z caller=coordinator.go:126 level=info component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/alertmanager.yml
Feb  2 04:42:37 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:42:37.089Z caller=tls_config.go:232 level=info msg="Listening on" address=192.168.122.100:9093
Feb  2 04:42:37 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:42:37.089Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=192.168.122.100:9093
Feb  2 04:42:37 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:37 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Reconfiguring grafana.compute-0 (dependencies changed)...
Feb  2 04:42:37 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Reconfiguring grafana.compute-0 (dependencies changed)...
Feb  2 04:42:37 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Reconfiguring daemon grafana.compute-0 on compute-0
Feb  2 04:42:37 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Reconfiguring daemon grafana.compute-0 on compute-0
Feb  2 04:42:37 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Feb  2 04:42:37 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Feb  2 04:42:37 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Feb  2 04:42:37 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Feb  2 04:42:37 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 87 pg[9.1d( empty local-lis/les=0/0 n=0 ec=57/38 lis/c=72/72 les/c/f=73/73/0 sis=87) [1] r=0 lpr=87 pi=[72,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:42:37 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 87 pg[9.d( empty local-lis/les=0/0 n=0 ec=57/38 lis/c=73/73 les/c/f=74/74/0 sis=87) [1] r=0 lpr=87 pi=[73,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:42:37 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Feb  2 04:42:37 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:37 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:37 np0005604790 ceph-mon[74489]: Reconfiguring grafana.compute-0 (dependencies changed)...
Feb  2 04:42:37 np0005604790 ceph-mon[74489]: Reconfiguring daemon grafana.compute-0 on compute-0
Feb  2 04:42:37 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Feb  2 04:42:37 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Feb  2 04:42:37 np0005604790 podman[104453]: 2026-02-02 09:42:37.752882777 +0000 UTC m=+0.064406759 container create 184879c7978b7f19b38df5b4752bc7379058e5ac968733fb95914488f75b7e63 (image=quay.io/ceph/grafana:10.4.0, name=gallant_chandrasekhar, maintainer=Grafana Labs <hello@grafana.com>)
Feb  2 04:42:37 np0005604790 systemd[1]: Started libpod-conmon-184879c7978b7f19b38df5b4752bc7379058e5ac968733fb95914488f75b7e63.scope.
Feb  2 04:42:37 np0005604790 podman[104453]: 2026-02-02 09:42:37.72412907 +0000 UTC m=+0.035653142 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Feb  2 04:42:37 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:37 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c74003cc0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:42:37 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:42:37 np0005604790 podman[104453]: 2026-02-02 09:42:37.850657657 +0000 UTC m=+0.162181669 container init 184879c7978b7f19b38df5b4752bc7379058e5ac968733fb95914488f75b7e63 (image=quay.io/ceph/grafana:10.4.0, name=gallant_chandrasekhar, maintainer=Grafana Labs <hello@grafana.com>)
Feb  2 04:42:37 np0005604790 podman[104453]: 2026-02-02 09:42:37.857555411 +0000 UTC m=+0.169079403 container start 184879c7978b7f19b38df5b4752bc7379058e5ac968733fb95914488f75b7e63 (image=quay.io/ceph/grafana:10.4.0, name=gallant_chandrasekhar, maintainer=Grafana Labs <hello@grafana.com>)
Feb  2 04:42:37 np0005604790 systemd[1]: libpod-184879c7978b7f19b38df5b4752bc7379058e5ac968733fb95914488f75b7e63.scope: Deactivated successfully.
Feb  2 04:42:37 np0005604790 podman[104453]: 2026-02-02 09:42:37.862357379 +0000 UTC m=+0.173881381 container attach 184879c7978b7f19b38df5b4752bc7379058e5ac968733fb95914488f75b7e63 (image=quay.io/ceph/grafana:10.4.0, name=gallant_chandrasekhar, maintainer=Grafana Labs <hello@grafana.com>)
Feb  2 04:42:37 np0005604790 gallant_chandrasekhar[104471]: 472 0
Feb  2 04:42:37 np0005604790 podman[104453]: 2026-02-02 09:42:37.863724656 +0000 UTC m=+0.175248648 container died 184879c7978b7f19b38df5b4752bc7379058e5ac968733fb95914488f75b7e63 (image=quay.io/ceph/grafana:10.4.0, name=gallant_chandrasekhar, maintainer=Grafana Labs <hello@grafana.com>)
Feb  2 04:42:37 np0005604790 conmon[104471]: conmon 184879c7978b7f19b38d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-184879c7978b7f19b38df5b4752bc7379058e5ac968733fb95914488f75b7e63.scope/container/memory.events
Feb  2 04:42:37 np0005604790 systemd[1]: var-lib-containers-storage-overlay-7a1b3255f07b55a69c425c07d057fabd22c7fccda6dd44ea5bfa6a17412f07c9-merged.mount: Deactivated successfully.
Feb  2 04:42:37 np0005604790 podman[104453]: 2026-02-02 09:42:37.917522301 +0000 UTC m=+0.229046353 container remove 184879c7978b7f19b38df5b4752bc7379058e5ac968733fb95914488f75b7e63 (image=quay.io/ceph/grafana:10.4.0, name=gallant_chandrasekhar, maintainer=Grafana Labs <hello@grafana.com>)
Feb  2 04:42:37 np0005604790 systemd[1]: libpod-conmon-184879c7978b7f19b38df5b4752bc7379058e5ac968733fb95914488f75b7e63.scope: Deactivated successfully.
Feb  2 04:42:37 np0005604790 podman[104488]: 2026-02-02 09:42:37.986645716 +0000 UTC m=+0.050700254 container create 4fd40efbfdfc6b8fbc5ed1c441b1cada3195bf31b644ceb53f6a345f30ecb98b (image=quay.io/ceph/grafana:10.4.0, name=musing_bartik, maintainer=Grafana Labs <hello@grafana.com>)
Feb  2 04:42:38 np0005604790 systemd[1]: Started libpod-conmon-4fd40efbfdfc6b8fbc5ed1c441b1cada3195bf31b644ceb53f6a345f30ecb98b.scope.
Feb  2 04:42:38 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:42:38 np0005604790 podman[104488]: 2026-02-02 09:42:38.049166175 +0000 UTC m=+0.113220833 container init 4fd40efbfdfc6b8fbc5ed1c441b1cada3195bf31b644ceb53f6a345f30ecb98b (image=quay.io/ceph/grafana:10.4.0, name=musing_bartik, maintainer=Grafana Labs <hello@grafana.com>)
Feb  2 04:42:38 np0005604790 podman[104488]: 2026-02-02 09:42:38.053519691 +0000 UTC m=+0.117574209 container start 4fd40efbfdfc6b8fbc5ed1c441b1cada3195bf31b644ceb53f6a345f30ecb98b (image=quay.io/ceph/grafana:10.4.0, name=musing_bartik, maintainer=Grafana Labs <hello@grafana.com>)
Feb  2 04:42:38 np0005604790 musing_bartik[104504]: 472 0
Feb  2 04:42:38 np0005604790 systemd[1]: libpod-4fd40efbfdfc6b8fbc5ed1c441b1cada3195bf31b644ceb53f6a345f30ecb98b.scope: Deactivated successfully.
Feb  2 04:42:38 np0005604790 podman[104488]: 2026-02-02 09:42:37.962392169 +0000 UTC m=+0.026446717 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Feb  2 04:42:38 np0005604790 podman[104488]: 2026-02-02 09:42:38.061137794 +0000 UTC m=+0.125192312 container attach 4fd40efbfdfc6b8fbc5ed1c441b1cada3195bf31b644ceb53f6a345f30ecb98b (image=quay.io/ceph/grafana:10.4.0, name=musing_bartik, maintainer=Grafana Labs <hello@grafana.com>)
Feb  2 04:42:38 np0005604790 podman[104488]: 2026-02-02 09:42:38.061672638 +0000 UTC m=+0.125727176 container died 4fd40efbfdfc6b8fbc5ed1c441b1cada3195bf31b644ceb53f6a345f30ecb98b (image=quay.io/ceph/grafana:10.4.0, name=musing_bartik, maintainer=Grafana Labs <hello@grafana.com>)
Feb  2 04:42:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:38 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c70004830 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:42:38 np0005604790 systemd[1]: var-lib-containers-storage-overlay-e2010113f528a2742a58eea601bef05f4300b6cd688f7b25133b3dded221bae8-merged.mount: Deactivated successfully.
Feb  2 04:42:38 np0005604790 podman[104488]: 2026-02-02 09:42:38.134527663 +0000 UTC m=+0.198582221 container remove 4fd40efbfdfc6b8fbc5ed1c441b1cada3195bf31b644ceb53f6a345f30ecb98b (image=quay.io/ceph/grafana:10.4.0, name=musing_bartik, maintainer=Grafana Labs <hello@grafana.com>)
Feb  2 04:42:38 np0005604790 systemd[1]: libpod-conmon-4fd40efbfdfc6b8fbc5ed1c441b1cada3195bf31b644ceb53f6a345f30ecb98b.scope: Deactivated successfully.
Feb  2 04:42:38 np0005604790 systemd[1]: Stopping Ceph grafana.compute-0 for d241d473-9fcb-5f74-b163-f1ca4454e7f1...
Feb  2 04:42:38 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:42:38 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:42:38 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:42:38.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:42:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:38 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c50001f70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:42:38 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:42:38 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:42:38 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:42:38.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:42:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=server t=2026-02-02T09:42:38.479088908Z level=info msg="Shutdown started" reason="System signal: terminated"
Feb  2 04:42:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=tracing t=2026-02-02T09:42:38.479348465Z level=info msg="Closing tracing"
Feb  2 04:42:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=ticker t=2026-02-02T09:42:38.479419097Z level=info msg=stopped last_tick=2026-02-02T09:42:30Z
Feb  2 04:42:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=grafana-apiserver t=2026-02-02T09:42:38.480701531Z level=info msg="StorageObjectCountTracker pruner is exiting"
Feb  2 04:42:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[99411]: logger=sqlstore.transactions t=2026-02-02T09:42:38.491559431Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
Feb  2 04:42:38 np0005604790 podman[104555]: 2026-02-02 09:42:38.509643544 +0000 UTC m=+0.072264370 container died 444445befc58784fb5be994b3d6a442e59c1152d326f283d9b64377a2bb5c634 (image=quay.io/ceph/grafana:10.4.0, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Feb  2 04:42:38 np0005604790 systemd[1]: var-lib-containers-storage-overlay-c0fc3b4c4afd673b83b7a1da67922de626d76a1ee59ac3d8dafacc8a8412000d-merged.mount: Deactivated successfully.
Feb  2 04:42:38 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Feb  2 04:42:38 np0005604790 podman[104555]: 2026-02-02 09:42:38.556639668 +0000 UTC m=+0.119260494 container remove 444445befc58784fb5be994b3d6a442e59c1152d326f283d9b64377a2bb5c634 (image=quay.io/ceph/grafana:10.4.0, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Feb  2 04:42:38 np0005604790 bash[104555]: ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0
Feb  2 04:42:38 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Feb  2 04:42:38 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Feb  2 04:42:38 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 88 pg[9.d( empty local-lis/les=0/0 n=0 ec=57/38 lis/c=73/73 les/c/f=74/74/0 sis=88) [1]/[2] r=-1 lpr=88 pi=[73,88)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:42:38 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 88 pg[9.d( empty local-lis/les=0/0 n=0 ec=57/38 lis/c=73/73 les/c/f=74/74/0 sis=88) [1]/[2] r=-1 lpr=88 pi=[73,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  2 04:42:38 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 88 pg[9.1d( empty local-lis/les=0/0 n=0 ec=57/38 lis/c=72/72 les/c/f=73/73/0 sis=88) [1]/[2] r=-1 lpr=88 pi=[72,88)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:42:38 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 88 pg[9.1d( empty local-lis/les=0/0 n=0 ec=57/38 lis/c=72/72 les/c/f=73/73/0 sis=88) [1]/[2] r=-1 lpr=88 pi=[72,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  2 04:42:38 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Feb  2 04:42:38 np0005604790 systemd[1]: ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1@grafana.compute-0.service: Deactivated successfully.
Feb  2 04:42:38 np0005604790 systemd[1]: Stopped Ceph grafana.compute-0 for d241d473-9fcb-5f74-b163-f1ca4454e7f1.
Feb  2 04:42:38 np0005604790 systemd[1]: ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1@grafana.compute-0.service: Consumed 4.572s CPU time.
Feb  2 04:42:38 np0005604790 systemd[1]: Starting Ceph grafana.compute-0 for d241d473-9fcb-5f74-b163-f1ca4454e7f1...
Feb  2 04:42:38 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Feb  2 04:42:38 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Feb  2 04:42:38 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v26: 353 pgs: 2 unknown, 351 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail; 36 B/s, 2 objects/s recovering
Feb  2 04:42:38 np0005604790 podman[104658]: 2026-02-02 09:42:38.934339938 +0000 UTC m=+0.060218529 container create 207575e5b32cec4e058e0ca4f48bad94c6e447fc9aa765a0aa8117f4604125b2 (image=quay.io/ceph/grafana:10.4.0, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Feb  2 04:42:38 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b5a43e61efdbc66b7eba7ed51e4c81ce2a960bf771ac1f3ef2d331cf7e81a14/merged/etc/grafana/grafana.ini supports timestamps until 2038 (0x7fffffff)
Feb  2 04:42:38 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b5a43e61efdbc66b7eba7ed51e4c81ce2a960bf771ac1f3ef2d331cf7e81a14/merged/etc/grafana/certs supports timestamps until 2038 (0x7fffffff)
Feb  2 04:42:38 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b5a43e61efdbc66b7eba7ed51e4c81ce2a960bf771ac1f3ef2d331cf7e81a14/merged/etc/grafana/provisioning/datasources supports timestamps until 2038 (0x7fffffff)
Feb  2 04:42:38 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b5a43e61efdbc66b7eba7ed51e4c81ce2a960bf771ac1f3ef2d331cf7e81a14/merged/etc/grafana/provisioning/dashboards supports timestamps until 2038 (0x7fffffff)
Feb  2 04:42:38 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b5a43e61efdbc66b7eba7ed51e4c81ce2a960bf771ac1f3ef2d331cf7e81a14/merged/var/lib/grafana/grafana.db supports timestamps until 2038 (0x7fffffff)
Feb  2 04:42:38 np0005604790 podman[104658]: 2026-02-02 09:42:38.902723205 +0000 UTC m=+0.028601896 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Feb  2 04:42:39 np0005604790 podman[104658]: 2026-02-02 09:42:39.01384664 +0000 UTC m=+0.139725261 container init 207575e5b32cec4e058e0ca4f48bad94c6e447fc9aa765a0aa8117f4604125b2 (image=quay.io/ceph/grafana:10.4.0, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Feb  2 04:42:39 np0005604790 podman[104658]: 2026-02-02 09:42:39.027541474 +0000 UTC m=+0.153420065 container start 207575e5b32cec4e058e0ca4f48bad94c6e447fc9aa765a0aa8117f4604125b2 (image=quay.io/ceph/grafana:10.4.0, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Feb  2 04:42:39 np0005604790 bash[104658]: 207575e5b32cec4e058e0ca4f48bad94c6e447fc9aa765a0aa8117f4604125b2
Feb  2 04:42:39 np0005604790 systemd[1]: Started Ceph grafana.compute-0 for d241d473-9fcb-5f74-b163-f1ca4454e7f1.
Feb  2 04:42:39 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:42:39.037Z caller=cluster.go:706 level=info component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.000628497s
Feb  2 04:42:39 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 04:42:39 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:39 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 04:42:39 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:39 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Reconfiguring crash.compute-1 (monmap changed)...
Feb  2 04:42:39 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Reconfiguring crash.compute-1 (monmap changed)...
Feb  2 04:42:39 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Feb  2 04:42:39 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Feb  2 04:42:39 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 04:42:39 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 04:42:39 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.compute-1 on compute-1
Feb  2 04:42:39 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.compute-1 on compute-1
Feb  2 04:42:39 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[104673]: logger=settings t=2026-02-02T09:42:39.27124303Z level=info msg="Starting Grafana" version=10.4.0 commit=03f502a94d17f7dc4e6c34acdf8428aedd986e4c branch=HEAD compiled=2026-02-02T09:42:39Z
Feb  2 04:42:39 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[104673]: logger=settings t=2026-02-02T09:42:39.271567948Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
Feb  2 04:42:39 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[104673]: logger=settings t=2026-02-02T09:42:39.271575019Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
Feb  2 04:42:39 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[104673]: logger=settings t=2026-02-02T09:42:39.271578769Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
Feb  2 04:42:39 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[104673]: logger=settings t=2026-02-02T09:42:39.271582129Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
Feb  2 04:42:39 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[104673]: logger=settings t=2026-02-02T09:42:39.271585079Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
Feb  2 04:42:39 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[104673]: logger=settings t=2026-02-02T09:42:39.271588219Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
Feb  2 04:42:39 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[104673]: logger=settings t=2026-02-02T09:42:39.271591739Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
Feb  2 04:42:39 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[104673]: logger=settings t=2026-02-02T09:42:39.271596439Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
Feb  2 04:42:39 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[104673]: logger=settings t=2026-02-02T09:42:39.271600009Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
Feb  2 04:42:39 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[104673]: logger=settings t=2026-02-02T09:42:39.271603389Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
Feb  2 04:42:39 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[104673]: logger=settings t=2026-02-02T09:42:39.27160647Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
Feb  2 04:42:39 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[104673]: logger=settings t=2026-02-02T09:42:39.27161099Z level=info msg=Target target=[all]
Feb  2 04:42:39 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[104673]: logger=settings t=2026-02-02T09:42:39.27162028Z level=info msg="Path Home" path=/usr/share/grafana
Feb  2 04:42:39 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[104673]: logger=settings t=2026-02-02T09:42:39.27163423Z level=info msg="Path Data" path=/var/lib/grafana
Feb  2 04:42:39 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[104673]: logger=settings t=2026-02-02T09:42:39.27163805Z level=info msg="Path Logs" path=/var/log/grafana
Feb  2 04:42:39 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[104673]: logger=settings t=2026-02-02T09:42:39.27164218Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
Feb  2 04:42:39 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[104673]: logger=settings t=2026-02-02T09:42:39.271647291Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
Feb  2 04:42:39 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[104673]: logger=settings t=2026-02-02T09:42:39.271651241Z level=info msg="App mode production"
Feb  2 04:42:39 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[104673]: logger=sqlstore t=2026-02-02T09:42:39.271959879Z level=info msg="Connecting to DB" dbtype=sqlite3
Feb  2 04:42:39 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[104673]: logger=sqlstore t=2026-02-02T09:42:39.27197799Z level=warn msg="SQLite database file has broader permissions than it should" path=/var/lib/grafana/grafana.db mode=-rw-r--r-- expected=-rw-r-----
Feb  2 04:42:39 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[104673]: logger=migrator t=2026-02-02T09:42:39.272856954Z level=info msg="Starting DB migrations"
Feb  2 04:42:39 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[104673]: logger=migrator t=2026-02-02T09:42:39.289368035Z level=info msg="migrations completed" performed=0 skipped=547 duration=624.898µs
Feb  2 04:42:39 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[104673]: logger=sqlstore t=2026-02-02T09:42:39.290633359Z level=info msg="Created default organization"
Feb  2 04:42:39 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[104673]: logger=secrets t=2026-02-02T09:42:39.291274807Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1
Feb  2 04:42:39 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[104673]: logger=plugin.store t=2026-02-02T09:42:39.310221414Z level=info msg="Loading plugins..."
Feb  2 04:42:39 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[104673]: logger=local.finder t=2026-02-02T09:42:39.387096194Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled
Feb  2 04:42:39 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[104673]: logger=plugin.store t=2026-02-02T09:42:39.387142845Z level=info msg="Plugins loaded" count=55 duration=76.922831ms
Feb  2 04:42:39 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[104673]: logger=query_data t=2026-02-02T09:42:39.390442665Z level=info msg="Query Service initialization"
Feb  2 04:42:39 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[104673]: logger=live.push_http t=2026-02-02T09:42:39.394469955Z level=info msg="Live Push Gateway initialization"
Feb  2 04:42:39 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[104673]: logger=ngalert.migration t=2026-02-02T09:42:39.398999359Z level=info msg=Starting
Feb  2 04:42:39 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[104673]: logger=ngalert.state.manager t=2026-02-02T09:42:39.420839635Z level=info msg="Running in alternative execution of Error/NoData mode"
Feb  2 04:42:39 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[104673]: logger=infra.usagestats.collector t=2026-02-02T09:42:39.424991289Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2
Feb  2 04:42:39 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[104673]: logger=provisioning.datasources t=2026-02-02T09:42:39.429718038Z level=info msg="inserting datasource from configuration" name=Dashboard1 uid=P43CA22E17D0F9596
Feb  2 04:42:39 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[104673]: logger=provisioning.alerting t=2026-02-02T09:42:39.465097064Z level=info msg="starting to provision alerting"
Feb  2 04:42:39 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[104673]: logger=provisioning.alerting t=2026-02-02T09:42:39.465129015Z level=info msg="finished to provision alerting"
Feb  2 04:42:39 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[104673]: logger=ngalert.state.manager t=2026-02-02T09:42:39.466676277Z level=info msg="Warming state cache for startup"
Feb  2 04:42:39 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[104673]: logger=ngalert.multiorg.alertmanager t=2026-02-02T09:42:39.467476519Z level=info msg="Starting MultiOrg Alertmanager"
Feb  2 04:42:39 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[104673]: logger=ngalert.state.manager t=2026-02-02T09:42:39.467662474Z level=info msg="State cache has been initialized" states=0 duration=984.817µs
Feb  2 04:42:39 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[104673]: logger=ngalert.scheduler t=2026-02-02T09:42:39.467758647Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1
Feb  2 04:42:39 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[104673]: logger=ticker t=2026-02-02T09:42:39.467918411Z level=info msg=starting first_tick=2026-02-02T09:42:40Z
Feb  2 04:42:39 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[104673]: logger=http.server t=2026-02-02T09:42:39.471629112Z level=info msg="HTTP Server TLS settings" MinTLSVersion=TLS1.2 configuredciphers=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA
Feb  2 04:42:39 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[104673]: logger=http.server t=2026-02-02T09:42:39.472127676Z level=info msg="HTTP Server Listen" address=192.168.122.100:3000 protocol=https subUrl= socket=
Feb  2 04:42:39 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[104673]: logger=grafanaStorageLogger t=2026-02-02T09:42:39.483191928Z level=info msg="Storage starting"
Feb  2 04:42:39 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[104673]: logger=grafana.update.checker t=2026-02-02T09:42:39.541549462Z level=info msg="Update check succeeded" duration=75.852602ms
Feb  2 04:42:39 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[104673]: logger=plugins.update.checker t=2026-02-02T09:42:39.54404293Z level=info msg="Update check succeeded" duration=78.121944ms
Feb  2 04:42:39 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[104673]: logger=provisioning.dashboard t=2026-02-02T09:42:39.553228571Z level=info msg="starting to provision dashboards"
Feb  2 04:42:39 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Feb  2 04:42:39 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Feb  2 04:42:39 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Feb  2 04:42:39 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[104673]: logger=provisioning.dashboard t=2026-02-02T09:42:39.580910647Z level=info msg="finished to provision dashboards"
Feb  2 04:42:39 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:39 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:39 np0005604790 ceph-mon[74489]: Reconfiguring crash.compute-1 (monmap changed)...
Feb  2 04:42:39 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Feb  2 04:42:39 np0005604790 ceph-mon[74489]: Reconfiguring daemon crash.compute-1 on compute-1
Feb  2 04:42:39 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 8.18 scrub starts
Feb  2 04:42:39 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 8.18 scrub ok
Feb  2 04:42:39 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Feb  2 04:42:39 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:39 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Feb  2 04:42:39 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:39 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Reconfiguring osd.0 (monmap changed)...
Feb  2 04:42:39 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Reconfiguring osd.0 (monmap changed)...
Feb  2 04:42:39 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Feb  2 04:42:39 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Feb  2 04:42:39 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 04:42:39 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 04:42:39 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.0 on compute-1
Feb  2 04:42:39 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.0 on compute-1
Feb  2 04:42:39 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:39 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c54002cb0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:42:40 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[104673]: logger=grafana-apiserver t=2026-02-02T09:42:40.070740045Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager"
Feb  2 04:42:40 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[104673]: logger=grafana-apiserver t=2026-02-02T09:42:40.071109175Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager"
Feb  2 04:42:40 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:40 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c74003cc0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:42:40 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:42:40 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:42:40 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:42:40.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:42:40 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:40 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c70004830 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:42:40 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:42:40 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:42:40 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:42:40.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:42:40 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Feb  2 04:42:40 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:40 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:40 np0005604790 ceph-mon[74489]: Reconfiguring osd.0 (monmap changed)...
Feb  2 04:42:40 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Feb  2 04:42:40 np0005604790 ceph-mon[74489]: Reconfiguring daemon osd.0 on compute-1
Feb  2 04:42:40 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Feb  2 04:42:40 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Feb  2 04:42:40 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Feb  2 04:42:40 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 90 pg[9.d( v 44'1041 (0'0,44'1041] local-lis/les=0/0 n=8 ec=57/38 lis/c=88/73 les/c/f=89/74/0 sis=90) [1] r=0 lpr=90 pi=[73,90)/1 luod=0'0 crt=44'1041 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:42:40 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 90 pg[9.d( v 44'1041 (0'0,44'1041] local-lis/les=0/0 n=8 ec=57/38 lis/c=88/73 les/c/f=89/74/0 sis=90) [1] r=0 lpr=90 pi=[73,90)/1 crt=44'1041 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:42:40 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 90 pg[9.1d( v 44'1041 (0'0,44'1041] local-lis/les=0/0 n=5 ec=57/38 lis/c=88/72 les/c/f=89/73/0 sis=90) [1] r=0 lpr=90 pi=[72,90)/1 luod=0'0 crt=44'1041 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:42:40 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 90 pg[9.1d( v 44'1041 (0'0,44'1041] local-lis/les=0/0 n=5 ec=57/38 lis/c=88/72 les/c/f=89/73/0 sis=90) [1] r=0 lpr=90 pi=[72,90)/1 crt=44'1041 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:42:40 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:40 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Feb  2 04:42:40 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:40 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-1 (monmap changed)...
Feb  2 04:42:40 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-1 (monmap changed)...
Feb  2 04:42:40 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Feb  2 04:42:40 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Feb  2 04:42:40 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Feb  2 04:42:40 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Feb  2 04:42:40 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 04:42:40 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 04:42:40 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-1 on compute-1
Feb  2 04:42:40 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-1 on compute-1
Feb  2 04:42:40 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 11.1c scrub starts
Feb  2 04:42:40 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 11.1c scrub ok
Feb  2 04:42:40 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v29: 353 pgs: 2 unknown, 351 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Feb  2 04:42:41 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 04:42:41 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Feb  2 04:42:41 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:41 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Feb  2 04:42:41 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:41 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-2 (monmap changed)...
Feb  2 04:42:41 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-2 (monmap changed)...
Feb  2 04:42:41 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Feb  2 04:42:41 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Feb  2 04:42:41 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Feb  2 04:42:41 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Feb  2 04:42:41 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 04:42:41 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 04:42:41 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-2 on compute-2
Feb  2 04:42:41 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-2 on compute-2
Feb  2 04:42:41 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Feb  2 04:42:41 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Feb  2 04:42:41 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Feb  2 04:42:41 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:41 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:41 np0005604790 ceph-mon[74489]: Reconfiguring mon.compute-1 (monmap changed)...
Feb  2 04:42:41 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Feb  2 04:42:41 np0005604790 ceph-mon[74489]: Reconfiguring daemon mon.compute-1 on compute-1
Feb  2 04:42:41 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:41 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:41 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Feb  2 04:42:41 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 91 pg[9.d( v 44'1041 (0'0,44'1041] local-lis/les=90/91 n=8 ec=57/38 lis/c=88/73 les/c/f=89/74/0 sis=90) [1] r=0 lpr=90 pi=[73,90)/1 crt=44'1041 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:42:41 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 91 pg[9.1d( v 44'1041 (0'0,44'1041] local-lis/les=90/91 n=5 ec=57/38 lis/c=88/72 les/c/f=89/73/0 sis=90) [1] r=0 lpr=90 pi=[72,90)/1 crt=44'1041 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:42:41 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 8.12 scrub starts
Feb  2 04:42:41 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 8.12 scrub ok
Feb  2 04:42:41 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:41 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c50001f70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:42:41 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Feb  2 04:42:41 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:42 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Feb  2 04:42:42 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:42 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-2.gzlyac (monmap changed)...
Feb  2 04:42:42 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-2.gzlyac (monmap changed)...
Feb  2 04:42:42 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-2.gzlyac", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Feb  2 04:42:42 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.gzlyac", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Feb  2 04:42:42 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Feb  2 04:42:42 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "mgr services"}]: dispatch
Feb  2 04:42:42 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 04:42:42 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 04:42:42 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-2.gzlyac on compute-2
Feb  2 04:42:42 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-2.gzlyac on compute-2
Feb  2 04:42:42 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:42 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c54002cb0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:42:42 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:42:42 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:42:42 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:42:42.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:42:42 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:42 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c74003cc0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:42:42 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:42:42 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:42:42 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:42:42.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:42:42 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Feb  2 04:42:42 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:42 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Feb  2 04:42:42 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:42 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Reconfiguring haproxy.rgw.default.compute-2.txhwfs (unknown last config time)...
Feb  2 04:42:42 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Reconfiguring haproxy.rgw.default.compute-2.txhwfs (unknown last config time)...
Feb  2 04:42:42 np0005604790 ceph-mgr[74785]: [cephadm INFO cephadm.serve] Reconfiguring daemon haproxy.rgw.default.compute-2.txhwfs on compute-2
Feb  2 04:42:42 np0005604790 ceph-mgr[74785]: log_channel(cephadm) log [INF] : Reconfiguring daemon haproxy.rgw.default.compute-2.txhwfs on compute-2
Feb  2 04:42:42 np0005604790 ceph-mon[74489]: Reconfiguring mon.compute-2 (monmap changed)...
Feb  2 04:42:42 np0005604790 ceph-mon[74489]: Reconfiguring daemon mon.compute-2 on compute-2
Feb  2 04:42:42 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:42 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:42 np0005604790 ceph-mon[74489]: Reconfiguring mgr.compute-2.gzlyac (monmap changed)...
Feb  2 04:42:42 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.gzlyac", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Feb  2 04:42:42 np0005604790 ceph-mon[74489]: Reconfiguring daemon mgr.compute-2.gzlyac on compute-2
Feb  2 04:42:42 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:42 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:42 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Feb  2 04:42:42 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Feb  2 04:42:42 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v31: 353 pgs: 2 unknown, 351 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Feb  2 04:42:43 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-nfs-cephfs-compute-0-ooxkuo[97796]: [WARNING] 032/094243 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Feb  2 04:42:43 np0005604790 ceph-mon[74489]: Reconfiguring haproxy.rgw.default.compute-2.txhwfs (unknown last config time)...
Feb  2 04:42:43 np0005604790 ceph-mon[74489]: Reconfiguring daemon haproxy.rgw.default.compute-2.txhwfs on compute-2
Feb  2 04:42:43 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Feb  2 04:42:43 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Feb  2 04:42:43 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:43 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c70004830 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:42:43 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:42:43 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:42:43 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:42:43.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:42:43 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Feb  2 04:42:43 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:43 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Feb  2 04:42:43 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:43 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard get-alertmanager-api-host"} v 0)
Feb  2 04:42:43 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Feb  2 04:42:43 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Feb  2 04:42:43 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard get-grafana-api-url"} v 0)
Feb  2 04:42:43 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Feb  2 04:42:43 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Feb  2 04:42:43 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"} v 0)
Feb  2 04:42:43 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Feb  2 04:42:43 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Feb  2 04:42:43 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_URL}] v 0)
Feb  2 04:42:44 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:44 np0005604790 ceph-mgr[74785]: [prometheus INFO root] Restarting engine...
Feb  2 04:42:44 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: [02/Feb/2026:09:42:44] ENGINE Bus STOPPING
Feb  2 04:42:44 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.error] [02/Feb/2026:09:42:44] ENGINE Bus STOPPING
Feb  2 04:42:44 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:44 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c50001f70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:42:44 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:42:44 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:42:44 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:42:44.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:42:44 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: [02/Feb/2026:09:42:44] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down
Feb  2 04:42:44 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: [02/Feb/2026:09:42:44] ENGINE Bus STOPPED
Feb  2 04:42:44 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: [02/Feb/2026:09:42:44] ENGINE Bus STARTING
Feb  2 04:42:44 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.error] [02/Feb/2026:09:42:44] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down
Feb  2 04:42:44 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.error] [02/Feb/2026:09:42:44] ENGINE Bus STOPPED
Feb  2 04:42:44 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.error] [02/Feb/2026:09:42:44] ENGINE Bus STARTING
Feb  2 04:42:44 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:44 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c54002cb0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:42:44 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: [02/Feb/2026:09:42:44] ENGINE Serving on http://:::9283
Feb  2 04:42:44 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: [02/Feb/2026:09:42:44] ENGINE Bus STARTED
Feb  2 04:42:44 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.error] [02/Feb/2026:09:42:44] ENGINE Serving on http://:::9283
Feb  2 04:42:44 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.error] [02/Feb/2026:09:42:44] ENGINE Bus STARTED
Feb  2 04:42:44 np0005604790 ceph-mgr[74785]: [prometheus INFO root] Engine started.
Feb  2 04:42:44 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:44 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:44 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Feb  2 04:42:44 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:44 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 9.16 scrub starts
Feb  2 04:42:44 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 9.16 scrub ok
Feb  2 04:42:44 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:09:42:44] "GET /metrics HTTP/1.1" 200 48366 "" "Prometheus/2.51.0"
Feb  2 04:42:44 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:09:42:44] "GET /metrics HTTP/1.1" 200 48366 "" "Prometheus/2.51.0"
Feb  2 04:42:44 np0005604790 podman[104836]: 2026-02-02 09:42:44.888932385 +0000 UTC m=+0.116393790 container exec 79ef7165b184aa21ab9e464efe33891b6304e1ba848414549f299d1b301d6783 (image=quay.io/ceph/ceph:v19, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mon-compute-0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 04:42:44 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v32: 353 pgs: 353 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 52 B/s, 1 objects/s recovering
Feb  2 04:42:44 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0)
Feb  2 04:42:44 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Feb  2 04:42:45 np0005604790 podman[104836]: 2026-02-02 09:42:45.010045932 +0000 UTC m=+0.237507317 container exec_died 79ef7165b184aa21ab9e464efe33891b6304e1ba848414549f299d1b301d6783 (image=quay.io/ceph/ceph:v19, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mon-compute-0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:42:45 np0005604790 podman[104974]: 2026-02-02 09:42:45.650604147 +0000 UTC m=+0.072420019 container exec 19feecaa7fcd517b2bfb973fc8fcf623ad4b0956f16c53292d24fe12f4780190 (image=quay.io/ceph/haproxy:2.3, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-rgw-default-compute-0-avekxu)
Feb  2 04:42:45 np0005604790 podman[104974]: 2026-02-02 09:42:45.669083522 +0000 UTC m=+0.090899394 container exec_died 19feecaa7fcd517b2bfb973fc8fcf623ad4b0956f16c53292d24fe12f4780190 (image=quay.io/ceph/haproxy:2.3, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-rgw-default-compute-0-avekxu)
Feb  2 04:42:45 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 9.e scrub starts
Feb  2 04:42:45 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Feb  2 04:42:45 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Feb  2 04:42:45 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Feb  2 04:42:45 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Feb  2 04:42:45 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 9.e scrub ok
Feb  2 04:42:45 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Feb  2 04:42:45 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:45 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c74003cc0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:42:45 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:42:45 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:42:45 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:42:45.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:42:45 np0005604790 podman[105042]: 2026-02-02 09:42:45.875667743 +0000 UTC m=+0.054570031 container exec 690ade5beb0aa03a99cf3b4b1da5291c57988034f4a9506f786da5a6fb824998 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 04:42:45 np0005604790 podman[105042]: 2026-02-02 09:42:45.884253317 +0000 UTC m=+0.063155585 container exec_died 690ade5beb0aa03a99cf3b4b1da5291c57988034f4a9506f786da5a6fb824998 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 04:42:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:46 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c70004830 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:42:46 np0005604790 podman[105116]: 2026-02-02 09:42:46.159983958 +0000 UTC m=+0.075791361 container exec 4dbbc2880b363c29701e75389ef46bb1b6a317f598aba4db2d598b0c88013bb0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Feb  2 04:42:46 np0005604790 podman[105116]: 2026-02-02 09:42:46.170909906 +0000 UTC m=+0.086717309 container exec_died 4dbbc2880b363c29701e75389ef46bb1b6a317f598aba4db2d598b0c88013bb0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:42:46 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 04:42:46 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Feb  2 04:42:46 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:46 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Feb  2 04:42:46 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:46 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:42:46 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:42:46 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:42:46.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:42:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:46 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c70004830 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:42:46 np0005604790 podman[105202]: 2026-02-02 09:42:46.528660727 +0000 UTC m=+0.072295666 container exec 5f2bbf7994e4a92479e9d1b19dfd01c3876abee624a9d2019b778a31c38bb173 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-keepalived-nfs-cephfs-compute-0-pqolko, io.openshift.tags=Ceph keepalived, version=2.2.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, com.redhat.component=keepalived-container, build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, name=keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1793)
Feb  2 04:42:46 np0005604790 podman[105202]: 2026-02-02 09:42:46.543027309 +0000 UTC m=+0.086662258 container exec_died 5f2bbf7994e4a92479e9d1b19dfd01c3876abee624a9d2019b778a31c38bb173 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-keepalived-nfs-cephfs-compute-0-pqolko, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., release=1793, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=keepalived-container, io.openshift.tags=Ceph keepalived, build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Feb  2 04:42:46 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 9.a scrub starts
Feb  2 04:42:46 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 9.a scrub ok
Feb  2 04:42:46 np0005604790 podman[105267]: 2026-02-02 09:42:46.815980994 +0000 UTC m=+0.069911371 container exec 63a3970896ab11bd3cbece1b971a05452b0bd8a5b643f0eac52d5b3639ab19c4 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 04:42:46 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Feb  2 04:42:46 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:46 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:46 np0005604790 podman[105267]: 2026-02-02 09:42:46.855921364 +0000 UTC m=+0.109851691 container exec_died 63a3970896ab11bd3cbece1b971a05452b0bd8a5b643f0eac52d5b3639ab19c4 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 04:42:46 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v34: 353 pgs: 353 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 50 B/s, 1 objects/s recovering
Feb  2 04:42:46 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0)
Feb  2 04:42:46 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Feb  2 04:42:47 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:42:47.039Z caller=cluster.go:698 level=info component=cluster msg="gossip settled; proceeding" elapsed=10.003154924s
Feb  2 04:42:47 np0005604790 podman[105344]: 2026-02-02 09:42:47.107380122 +0000 UTC m=+0.063380942 container exec 207575e5b32cec4e058e0ca4f48bad94c6e447fc9aa765a0aa8117f4604125b2 (image=quay.io/ceph/grafana:10.4.0, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Feb  2 04:42:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 04:42:47 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 04:42:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:42:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:42:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:42:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:42:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:42:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:42:47 np0005604790 podman[105344]: 2026-02-02 09:42:47.324279246 +0000 UTC m=+0.280280026 container exec_died 207575e5b32cec4e058e0ca4f48bad94c6e447fc9aa765a0aa8117f4604125b2 (image=quay.io/ceph/grafana:10.4.0, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Feb  2 04:42:47 np0005604790 podman[105440]: 2026-02-02 09:42:47.679718623 +0000 UTC m=+0.055491676 container exec 214561532da1ee185d9ab0bf03f5b2d46320266c11cddad43265e8842ed9d667 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 04:42:47 np0005604790 podman[105440]: 2026-02-02 09:42:47.736915095 +0000 UTC m=+0.112688148 container exec_died 214561532da1ee185d9ab0bf03f5b2d46320266c11cddad43265e8842ed9d667 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 04:42:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 04:42:47 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Feb  2 04:42:47 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Feb  2 04:42:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 04:42:47 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Feb  2 04:42:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Feb  2 04:42:47 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:47 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c70004830 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:42:47 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Feb  2 04:42:47 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:42:47 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:42:47 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:42:47.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:42:47 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Feb  2 04:42:47 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 93 pg[9.1f( empty local-lis/les=0/0 n=0 ec=57/38 lis/c=67/67 les/c/f=68/68/0 sis=93) [1] r=0 lpr=93 pi=[67,93)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:42:47 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 93 pg[9.f( empty local-lis/les=0/0 n=0 ec=57/38 lis/c=68/68 les/c/f=69/69/0 sis=93) [1] r=0 lpr=93 pi=[68,93)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:42:47 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Feb  2 04:42:47 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 04:42:47 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 04:42:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 04:42:47 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb  2 04:42:47 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v36: 353 pgs: 353 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 50 B/s, 1 objects/s recovering
Feb  2 04:42:47 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v37: 353 pgs: 353 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 63 B/s, 2 objects/s recovering
Feb  2 04:42:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0)
Feb  2 04:42:47 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Feb  2 04:42:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0)
Feb  2 04:42:47 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Feb  2 04:42:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 04:42:47 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Feb  2 04:42:47 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 04:42:47 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb  2 04:42:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 04:42:47 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb  2 04:42:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 04:42:47 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 04:42:48 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:48 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c54003db0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:42:48 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:42:48 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:42:48 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:42:48.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:42:48 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:48 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c54003db0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:42:48 np0005604790 podman[105576]: 2026-02-02 09:42:48.495822402 +0000 UTC m=+0.054732776 container create a3c6861d217b9f30969ef276bfbe5f10a100de5162668d9bc7585adf6ef6045d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_archimedes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:42:48 np0005604790 systemd[93258]: Starting Mark boot as successful...
Feb  2 04:42:48 np0005604790 systemd[1]: Started libpod-conmon-a3c6861d217b9f30969ef276bfbe5f10a100de5162668d9bc7585adf6ef6045d.scope.
Feb  2 04:42:48 np0005604790 systemd[93258]: Finished Mark boot as successful.
Feb  2 04:42:48 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:42:48 np0005604790 podman[105576]: 2026-02-02 09:42:48.564709224 +0000 UTC m=+0.123619638 container init a3c6861d217b9f30969ef276bfbe5f10a100de5162668d9bc7585adf6ef6045d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_archimedes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid)
Feb  2 04:42:48 np0005604790 podman[105576]: 2026-02-02 09:42:48.472831274 +0000 UTC m=+0.031741648 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:42:48 np0005604790 podman[105576]: 2026-02-02 09:42:48.575160119 +0000 UTC m=+0.134070493 container start a3c6861d217b9f30969ef276bfbe5f10a100de5162668d9bc7585adf6ef6045d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_archimedes, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 04:42:48 np0005604790 podman[105576]: 2026-02-02 09:42:48.579007274 +0000 UTC m=+0.137917688 container attach a3c6861d217b9f30969ef276bfbe5f10a100de5162668d9bc7585adf6ef6045d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_archimedes, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 04:42:48 np0005604790 condescending_archimedes[105593]: 167 167
Feb  2 04:42:48 np0005604790 systemd[1]: libpod-a3c6861d217b9f30969ef276bfbe5f10a100de5162668d9bc7585adf6ef6045d.scope: Deactivated successfully.
Feb  2 04:42:48 np0005604790 podman[105576]: 2026-02-02 09:42:48.581690277 +0000 UTC m=+0.140600611 container died a3c6861d217b9f30969ef276bfbe5f10a100de5162668d9bc7585adf6ef6045d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 04:42:48 np0005604790 systemd[1]: var-lib-containers-storage-overlay-314f5d019ec4026d21f6931fceec055b3fa7fce321252b68cdea9428fdac5ef1-merged.mount: Deactivated successfully.
Feb  2 04:42:48 np0005604790 podman[105576]: 2026-02-02 09:42:48.626401958 +0000 UTC m=+0.185312302 container remove a3c6861d217b9f30969ef276bfbe5f10a100de5162668d9bc7585adf6ef6045d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Feb  2 04:42:48 np0005604790 systemd[1]: libpod-conmon-a3c6861d217b9f30969ef276bfbe5f10a100de5162668d9bc7585adf6ef6045d.scope: Deactivated successfully.
Feb  2 04:42:48 np0005604790 podman[105617]: 2026-02-02 09:42:48.813781726 +0000 UTC m=+0.070826875 container create 44bafcd666714dceb72796bbe75d3524a3e2cc6ca72a389cfc1055d838f0e7e5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_feynman, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 04:42:48 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 9.1a scrub starts
Feb  2 04:42:48 np0005604790 ceph-mon[74489]: log_channel(cluster) log [WRN] : Health check update: 2 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Feb  2 04:42:48 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Feb  2 04:42:48 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:48 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Feb  2 04:42:48 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:48 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb  2 04:42:48 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Feb  2 04:42:48 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Feb  2 04:42:48 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:48 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:48 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb  2 04:42:48 np0005604790 systemd[1]: Started libpod-conmon-44bafcd666714dceb72796bbe75d3524a3e2cc6ca72a389cfc1055d838f0e7e5.scope.
Feb  2 04:42:48 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 9.1a scrub ok
Feb  2 04:42:48 np0005604790 podman[105617]: 2026-02-02 09:42:48.781397782 +0000 UTC m=+0.038442951 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:42:48 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Feb  2 04:42:48 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Feb  2 04:42:48 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Feb  2 04:42:48 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Feb  2 04:42:48 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:42:48 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 94 pg[9.10( empty local-lis/les=0/0 n=0 ec=57/38 lis/c=57/57 les/c/f=58/58/0 sis=94) [1] r=0 lpr=94 pi=[57,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:42:48 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 94 pg[9.f( empty local-lis/les=0/0 n=0 ec=57/38 lis/c=68/68 les/c/f=69/69/0 sis=94) [1]/[2] r=-1 lpr=94 pi=[68,94)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:42:48 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 94 pg[9.f( empty local-lis/les=0/0 n=0 ec=57/38 lis/c=68/68 les/c/f=69/69/0 sis=94) [1]/[2] r=-1 lpr=94 pi=[68,94)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  2 04:42:48 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 94 pg[9.1f( empty local-lis/les=0/0 n=0 ec=57/38 lis/c=67/67 les/c/f=68/68/0 sis=94) [1]/[2] r=-1 lpr=94 pi=[67,94)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:42:48 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 94 pg[9.1f( empty local-lis/les=0/0 n=0 ec=57/38 lis/c=67/67 les/c/f=68/68/0 sis=94) [1]/[2] r=-1 lpr=94 pi=[67,94)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  2 04:42:48 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0929aedecf0c329da669b1ce427bf12fb1f4449811e2c4b2451b08760af916a8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 04:42:48 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0929aedecf0c329da669b1ce427bf12fb1f4449811e2c4b2451b08760af916a8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:42:48 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0929aedecf0c329da669b1ce427bf12fb1f4449811e2c4b2451b08760af916a8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:42:48 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0929aedecf0c329da669b1ce427bf12fb1f4449811e2c4b2451b08760af916a8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 04:42:48 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0929aedecf0c329da669b1ce427bf12fb1f4449811e2c4b2451b08760af916a8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 04:42:48 np0005604790 systemd-logind[793]: New session 39 of user zuul.
Feb  2 04:42:48 np0005604790 podman[105617]: 2026-02-02 09:42:48.924616183 +0000 UTC m=+0.181661352 container init 44bafcd666714dceb72796bbe75d3524a3e2cc6ca72a389cfc1055d838f0e7e5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_feynman, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Feb  2 04:42:48 np0005604790 systemd[1]: Started Session 39 of User zuul.
Feb  2 04:42:48 np0005604790 podman[105617]: 2026-02-02 09:42:48.934791691 +0000 UTC m=+0.191836850 container start 44bafcd666714dceb72796bbe75d3524a3e2cc6ca72a389cfc1055d838f0e7e5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_feynman, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 04:42:48 np0005604790 podman[105617]: 2026-02-02 09:42:48.938876943 +0000 UTC m=+0.195922102 container attach 44bafcd666714dceb72796bbe75d3524a3e2cc6ca72a389cfc1055d838f0e7e5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_feynman, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 04:42:49 np0005604790 musing_feynman[105636]: --> passed data devices: 0 physical, 1 LVM
Feb  2 04:42:49 np0005604790 musing_feynman[105636]: --> All data devices are unavailable
Feb  2 04:42:49 np0005604790 systemd[1]: libpod-44bafcd666714dceb72796bbe75d3524a3e2cc6ca72a389cfc1055d838f0e7e5.scope: Deactivated successfully.
Feb  2 04:42:49 np0005604790 podman[105617]: 2026-02-02 09:42:49.348756937 +0000 UTC m=+0.605802076 container died 44bafcd666714dceb72796bbe75d3524a3e2cc6ca72a389cfc1055d838f0e7e5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_feynman, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb  2 04:42:49 np0005604790 systemd[1]: var-lib-containers-storage-overlay-0929aedecf0c329da669b1ce427bf12fb1f4449811e2c4b2451b08760af916a8-merged.mount: Deactivated successfully.
Feb  2 04:42:49 np0005604790 podman[105617]: 2026-02-02 09:42:49.42210698 +0000 UTC m=+0.679152109 container remove 44bafcd666714dceb72796bbe75d3524a3e2cc6ca72a389cfc1055d838f0e7e5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_feynman, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Feb  2 04:42:49 np0005604790 systemd[1]: libpod-conmon-44bafcd666714dceb72796bbe75d3524a3e2cc6ca72a389cfc1055d838f0e7e5.scope: Deactivated successfully.
Feb  2 04:42:49 np0005604790 python3.9[105838]: ansible-ansible.legacy.ping Invoked with data=pong
Feb  2 04:42:49 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:49 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c74003cc0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:42:49 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:42:49 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:42:49 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:42:49.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:42:49 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 9.1e scrub starts
Feb  2 04:42:49 np0005604790 ceph-mon[74489]: Health check update: 2 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Feb  2 04:42:49 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Feb  2 04:42:49 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Feb  2 04:42:49 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v39: 353 pgs: 2 unknown, 351 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Feb  2 04:42:49 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Feb  2 04:42:49 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 9.1e scrub ok
Feb  2 04:42:49 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Feb  2 04:42:49 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Feb  2 04:42:49 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 95 pg[9.10( empty local-lis/les=0/0 n=0 ec=57/38 lis/c=57/57 les/c/f=58/58/0 sis=95) [1]/[0] r=-1 lpr=95 pi=[57,95)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:42:49 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 95 pg[9.10( empty local-lis/les=0/0 n=0 ec=57/38 lis/c=57/57 les/c/f=58/58/0 sis=95) [1]/[0] r=-1 lpr=95 pi=[57,95)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  2 04:42:50 np0005604790 podman[105953]: 2026-02-02 09:42:50.052191117 +0000 UTC m=+0.054213781 container create 3471321d81e26c8f3cbe06c13ad4afb201b30d4b31e5c6414a1d62a297a39135 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_agnesi, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Feb  2 04:42:50 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:50 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c74003cc0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:42:50 np0005604790 systemd[1]: Started libpod-conmon-3471321d81e26c8f3cbe06c13ad4afb201b30d4b31e5c6414a1d62a297a39135.scope.
Feb  2 04:42:50 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:42:50 np0005604790 podman[105953]: 2026-02-02 09:42:50.03398633 +0000 UTC m=+0.036008994 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:42:50 np0005604790 podman[105953]: 2026-02-02 09:42:50.143975394 +0000 UTC m=+0.145998088 container init 3471321d81e26c8f3cbe06c13ad4afb201b30d4b31e5c6414a1d62a297a39135 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_agnesi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:42:50 np0005604790 podman[105953]: 2026-02-02 09:42:50.150707788 +0000 UTC m=+0.152730482 container start 3471321d81e26c8f3cbe06c13ad4afb201b30d4b31e5c6414a1d62a297a39135 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_agnesi, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Feb  2 04:42:50 np0005604790 bold_agnesi[105997]: 167 167
Feb  2 04:42:50 np0005604790 podman[105953]: 2026-02-02 09:42:50.157445952 +0000 UTC m=+0.159468646 container attach 3471321d81e26c8f3cbe06c13ad4afb201b30d4b31e5c6414a1d62a297a39135 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_agnesi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb  2 04:42:50 np0005604790 systemd[1]: libpod-3471321d81e26c8f3cbe06c13ad4afb201b30d4b31e5c6414a1d62a297a39135.scope: Deactivated successfully.
Feb  2 04:42:50 np0005604790 podman[105953]: 2026-02-02 09:42:50.159171849 +0000 UTC m=+0.161194543 container died 3471321d81e26c8f3cbe06c13ad4afb201b30d4b31e5c6414a1d62a297a39135 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_agnesi, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Feb  2 04:42:50 np0005604790 systemd[1]: var-lib-containers-storage-overlay-6130b89981188fdf3214fedc9c82b35aeaaa47045206bc1db883134a688a7cbd-merged.mount: Deactivated successfully.
Feb  2 04:42:50 np0005604790 podman[105953]: 2026-02-02 09:42:50.210062029 +0000 UTC m=+0.212084723 container remove 3471321d81e26c8f3cbe06c13ad4afb201b30d4b31e5c6414a1d62a297a39135 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_agnesi, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:42:50 np0005604790 systemd[1]: libpod-conmon-3471321d81e26c8f3cbe06c13ad4afb201b30d4b31e5c6414a1d62a297a39135.scope: Deactivated successfully.
Feb  2 04:42:50 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:42:50 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:42:50 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:42:50.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:42:50 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:50 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c54003db0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:42:50 np0005604790 podman[106024]: 2026-02-02 09:42:50.417624888 +0000 UTC m=+0.091460049 container create e07215b75a7f5c3e5a9649ec0aebcee3ebc37d4fb7739fa4eca9894d0ffa50cd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_swartz, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:42:50 np0005604790 podman[106024]: 2026-02-02 09:42:50.362802331 +0000 UTC m=+0.036637572 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:42:50 np0005604790 systemd[1]: Started libpod-conmon-e07215b75a7f5c3e5a9649ec0aebcee3ebc37d4fb7739fa4eca9894d0ffa50cd.scope.
Feb  2 04:42:50 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:42:50 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f595e1efd43d7491659d827131d432d3e68a6cffd55fbafc09afa954ecbfceaa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 04:42:50 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f595e1efd43d7491659d827131d432d3e68a6cffd55fbafc09afa954ecbfceaa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:42:50 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f595e1efd43d7491659d827131d432d3e68a6cffd55fbafc09afa954ecbfceaa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:42:50 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f595e1efd43d7491659d827131d432d3e68a6cffd55fbafc09afa954ecbfceaa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 04:42:50 np0005604790 podman[106024]: 2026-02-02 09:42:50.535097616 +0000 UTC m=+0.208932797 container init e07215b75a7f5c3e5a9649ec0aebcee3ebc37d4fb7739fa4eca9894d0ffa50cd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_swartz, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 04:42:50 np0005604790 podman[106024]: 2026-02-02 09:42:50.542967011 +0000 UTC m=+0.216802172 container start e07215b75a7f5c3e5a9649ec0aebcee3ebc37d4fb7739fa4eca9894d0ffa50cd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_swartz, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 04:42:50 np0005604790 podman[106024]: 2026-02-02 09:42:50.546680473 +0000 UTC m=+0.220515634 container attach e07215b75a7f5c3e5a9649ec0aebcee3ebc37d4fb7739fa4eca9894d0ffa50cd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_swartz, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True)
Feb  2 04:42:50 np0005604790 crazy_swartz[106053]: {
Feb  2 04:42:50 np0005604790 crazy_swartz[106053]:    "1": [
Feb  2 04:42:50 np0005604790 crazy_swartz[106053]:        {
Feb  2 04:42:50 np0005604790 crazy_swartz[106053]:            "devices": [
Feb  2 04:42:50 np0005604790 crazy_swartz[106053]:                "/dev/loop3"
Feb  2 04:42:50 np0005604790 crazy_swartz[106053]:            ],
Feb  2 04:42:50 np0005604790 crazy_swartz[106053]:            "lv_name": "ceph_lv0",
Feb  2 04:42:50 np0005604790 crazy_swartz[106053]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 04:42:50 np0005604790 crazy_swartz[106053]:            "lv_size": "21470642176",
Feb  2 04:42:50 np0005604790 crazy_swartz[106053]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d241d473-9fcb-5f74-b163-f1ca4454e7f1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fabfc705-a3af-416c-81a4-3fd4d777fb5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 04:42:50 np0005604790 crazy_swartz[106053]:            "lv_uuid": "lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a",
Feb  2 04:42:50 np0005604790 crazy_swartz[106053]:            "name": "ceph_lv0",
Feb  2 04:42:50 np0005604790 crazy_swartz[106053]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 04:42:50 np0005604790 crazy_swartz[106053]:            "tags": {
Feb  2 04:42:50 np0005604790 crazy_swartz[106053]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 04:42:50 np0005604790 crazy_swartz[106053]:                "ceph.block_uuid": "lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a",
Feb  2 04:42:50 np0005604790 crazy_swartz[106053]:                "ceph.cephx_lockbox_secret": "",
Feb  2 04:42:50 np0005604790 crazy_swartz[106053]:                "ceph.cluster_fsid": "d241d473-9fcb-5f74-b163-f1ca4454e7f1",
Feb  2 04:42:50 np0005604790 crazy_swartz[106053]:                "ceph.cluster_name": "ceph",
Feb  2 04:42:50 np0005604790 crazy_swartz[106053]:                "ceph.crush_device_class": "",
Feb  2 04:42:50 np0005604790 crazy_swartz[106053]:                "ceph.encrypted": "0",
Feb  2 04:42:50 np0005604790 crazy_swartz[106053]:                "ceph.osd_fsid": "fabfc705-a3af-416c-81a4-3fd4d777fb5f",
Feb  2 04:42:50 np0005604790 crazy_swartz[106053]:                "ceph.osd_id": "1",
Feb  2 04:42:50 np0005604790 crazy_swartz[106053]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 04:42:50 np0005604790 crazy_swartz[106053]:                "ceph.type": "block",
Feb  2 04:42:50 np0005604790 crazy_swartz[106053]:                "ceph.vdo": "0",
Feb  2 04:42:50 np0005604790 crazy_swartz[106053]:                "ceph.with_tpm": "0"
Feb  2 04:42:50 np0005604790 crazy_swartz[106053]:            },
Feb  2 04:42:50 np0005604790 crazy_swartz[106053]:            "type": "block",
Feb  2 04:42:50 np0005604790 crazy_swartz[106053]:            "vg_name": "ceph_vg0"
Feb  2 04:42:50 np0005604790 crazy_swartz[106053]:        }
Feb  2 04:42:50 np0005604790 crazy_swartz[106053]:    ]
Feb  2 04:42:50 np0005604790 crazy_swartz[106053]: }
Feb  2 04:42:50 np0005604790 systemd[1]: libpod-e07215b75a7f5c3e5a9649ec0aebcee3ebc37d4fb7739fa4eca9894d0ffa50cd.scope: Deactivated successfully.
Feb  2 04:42:50 np0005604790 podman[106024]: 2026-02-02 09:42:50.836347834 +0000 UTC m=+0.510182995 container died e07215b75a7f5c3e5a9649ec0aebcee3ebc37d4fb7739fa4eca9894d0ffa50cd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:42:50 np0005604790 systemd[1]: var-lib-containers-storage-overlay-f595e1efd43d7491659d827131d432d3e68a6cffd55fbafc09afa954ecbfceaa-merged.mount: Deactivated successfully.
Feb  2 04:42:50 np0005604790 podman[106024]: 2026-02-02 09:42:50.884572221 +0000 UTC m=+0.558407422 container remove e07215b75a7f5c3e5a9649ec0aebcee3ebc37d4fb7739fa4eca9894d0ffa50cd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_swartz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:42:50 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Feb  2 04:42:50 np0005604790 systemd[1]: libpod-conmon-e07215b75a7f5c3e5a9649ec0aebcee3ebc37d4fb7739fa4eca9894d0ffa50cd.scope: Deactivated successfully.
Feb  2 04:42:50 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Feb  2 04:42:50 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Feb  2 04:42:50 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 96 pg[9.f( v 44'1041 (0'0,44'1041] local-lis/les=0/0 n=6 ec=57/38 lis/c=94/68 les/c/f=95/69/0 sis=96) [1] r=0 lpr=96 pi=[68,96)/1 luod=0'0 crt=44'1041 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:42:50 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 96 pg[9.f( v 44'1041 (0'0,44'1041] local-lis/les=0/0 n=6 ec=57/38 lis/c=94/68 les/c/f=95/69/0 sis=96) [1] r=0 lpr=96 pi=[68,96)/1 crt=44'1041 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:42:50 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 96 pg[9.1f( v 44'1041 (0'0,44'1041] local-lis/les=0/0 n=5 ec=57/38 lis/c=94/67 les/c/f=95/68/0 sis=96) [1] r=0 lpr=96 pi=[67,96)/1 luod=0'0 crt=44'1041 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:42:50 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 96 pg[9.1f( v 44'1041 (0'0,44'1041] local-lis/les=0/0 n=5 ec=57/38 lis/c=94/67 les/c/f=95/68/0 sis=96) [1] r=0 lpr=96 pi=[67,96)/1 crt=44'1041 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:42:51 np0005604790 python3.9[106144]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 04:42:51 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e96 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 04:42:51 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Feb  2 04:42:51 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Feb  2 04:42:51 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 97 pg[9.10( v 44'1041 (0'0,44'1041] local-lis/les=0/0 n=2 ec=57/38 lis/c=95/57 les/c/f=96/58/0 sis=97) [1] r=0 lpr=97 pi=[57,97)/1 luod=0'0 crt=44'1041 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:42:51 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 97 pg[9.10( v 44'1041 (0'0,44'1041] local-lis/les=0/0 n=2 ec=57/38 lis/c=95/57 les/c/f=96/58/0 sis=97) [1] r=0 lpr=97 pi=[57,97)/1 crt=44'1041 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:42:51 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Feb  2 04:42:51 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 97 pg[9.f( v 44'1041 (0'0,44'1041] local-lis/les=96/97 n=6 ec=57/38 lis/c=94/68 les/c/f=95/69/0 sis=96) [1] r=0 lpr=96 pi=[68,96)/1 crt=44'1041 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:42:51 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 97 pg[9.1f( v 44'1041 (0'0,44'1041] local-lis/les=96/97 n=5 ec=57/38 lis/c=94/67 les/c/f=95/68/0 sis=96) [1] r=0 lpr=96 pi=[67,96)/1 crt=44'1041 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:42:51 np0005604790 podman[106302]: 2026-02-02 09:42:51.479197941 +0000 UTC m=+0.048612739 container create 23ffeeeb325432d622bd299cdd8fe8d9a26ed0440db7b814af5aadfd176baaf8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_sinoussi, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 04:42:51 np0005604790 systemd[1]: Started libpod-conmon-23ffeeeb325432d622bd299cdd8fe8d9a26ed0440db7b814af5aadfd176baaf8.scope.
Feb  2 04:42:51 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:42:51 np0005604790 podman[106302]: 2026-02-02 09:42:51.463619635 +0000 UTC m=+0.033034453 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:42:51 np0005604790 podman[106302]: 2026-02-02 09:42:51.566880055 +0000 UTC m=+0.136294943 container init 23ffeeeb325432d622bd299cdd8fe8d9a26ed0440db7b814af5aadfd176baaf8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_sinoussi, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb  2 04:42:51 np0005604790 podman[106302]: 2026-02-02 09:42:51.577531936 +0000 UTC m=+0.146946734 container start 23ffeeeb325432d622bd299cdd8fe8d9a26ed0440db7b814af5aadfd176baaf8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_sinoussi, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Feb  2 04:42:51 np0005604790 relaxed_sinoussi[106319]: 167 167
Feb  2 04:42:51 np0005604790 systemd[1]: libpod-23ffeeeb325432d622bd299cdd8fe8d9a26ed0440db7b814af5aadfd176baaf8.scope: Deactivated successfully.
Feb  2 04:42:51 np0005604790 conmon[106319]: conmon 23ffeeeb325432d622bd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-23ffeeeb325432d622bd299cdd8fe8d9a26ed0440db7b814af5aadfd176baaf8.scope/container/memory.events
Feb  2 04:42:51 np0005604790 podman[106302]: 2026-02-02 09:42:51.585878544 +0000 UTC m=+0.155293422 container attach 23ffeeeb325432d622bd299cdd8fe8d9a26ed0440db7b814af5aadfd176baaf8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_sinoussi, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:42:51 np0005604790 podman[106302]: 2026-02-02 09:42:51.586630675 +0000 UTC m=+0.156045503 container died 23ffeeeb325432d622bd299cdd8fe8d9a26ed0440db7b814af5aadfd176baaf8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_sinoussi, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:42:51 np0005604790 systemd[1]: var-lib-containers-storage-overlay-ed4c3512df708b86ac036254f4a44429ef57219d35d7b4db6b97fa3c1e7d4c39-merged.mount: Deactivated successfully.
Feb  2 04:42:51 np0005604790 podman[106302]: 2026-02-02 09:42:51.634973675 +0000 UTC m=+0.204388473 container remove 23ffeeeb325432d622bd299cdd8fe8d9a26ed0440db7b814af5aadfd176baaf8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_sinoussi, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:42:51 np0005604790 systemd[1]: libpod-conmon-23ffeeeb325432d622bd299cdd8fe8d9a26ed0440db7b814af5aadfd176baaf8.scope: Deactivated successfully.
Feb  2 04:42:51 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:51 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c50001f70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:42:51 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:42:51 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:42:51 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:42:51.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:42:51 np0005604790 podman[106394]: 2026-02-02 09:42:51.863088555 +0000 UTC m=+0.103135737 container create 97b9249290cb936f24af26ab79a0bc8c206178b149c0e941b3584643e06f70c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_ardinghelli, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb  2 04:42:51 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v43: 353 pgs: 2 unknown, 351 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Feb  2 04:42:51 np0005604790 podman[106394]: 2026-02-02 09:42:51.792346363 +0000 UTC m=+0.032393565 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:42:51 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:51 : epoch 6980713d : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 04:42:51 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 9.f scrub starts
Feb  2 04:42:51 np0005604790 systemd[1]: Started libpod-conmon-97b9249290cb936f24af26ab79a0bc8c206178b149c0e941b3584643e06f70c9.scope.
Feb  2 04:42:51 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 9.f scrub ok
Feb  2 04:42:51 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:42:51 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/445468c6b15ea299293546500dfafde5b6aaeb98b02cb0ac9889eff175500306/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 04:42:51 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/445468c6b15ea299293546500dfafde5b6aaeb98b02cb0ac9889eff175500306/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:42:51 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/445468c6b15ea299293546500dfafde5b6aaeb98b02cb0ac9889eff175500306/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:42:51 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/445468c6b15ea299293546500dfafde5b6aaeb98b02cb0ac9889eff175500306/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 04:42:52 np0005604790 podman[106394]: 2026-02-02 09:42:52.006724128 +0000 UTC m=+0.246771290 container init 97b9249290cb936f24af26ab79a0bc8c206178b149c0e941b3584643e06f70c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_ardinghelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Feb  2 04:42:52 np0005604790 podman[106394]: 2026-02-02 09:42:52.012093055 +0000 UTC m=+0.252140197 container start 97b9249290cb936f24af26ab79a0bc8c206178b149c0e941b3584643e06f70c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_ardinghelli, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb  2 04:42:52 np0005604790 podman[106394]: 2026-02-02 09:42:52.038001682 +0000 UTC m=+0.278048844 container attach 97b9249290cb936f24af26ab79a0bc8c206178b149c0e941b3584643e06f70c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_ardinghelli, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb  2 04:42:52 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:52 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c70004830 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:42:52 np0005604790 python3.9[106491]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:42:52 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Feb  2 04:42:52 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Feb  2 04:42:52 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:42:52 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:42:52 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:42:52.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:42:52 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Feb  2 04:42:52 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 98 pg[9.10( v 44'1041 (0'0,44'1041] local-lis/les=97/98 n=2 ec=57/38 lis/c=95/57 les/c/f=96/58/0 sis=97) [1] r=0 lpr=97 pi=[57,97)/1 crt=44'1041 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:42:52 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:52 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c74003cc0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:42:52 np0005604790 lvm[106586]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 04:42:52 np0005604790 lvm[106586]: VG ceph_vg0 finished
Feb  2 04:42:52 np0005604790 practical_ardinghelli[106434]: {}
Feb  2 04:42:52 np0005604790 podman[106394]: 2026-02-02 09:42:52.796898899 +0000 UTC m=+1.036946041 container died 97b9249290cb936f24af26ab79a0bc8c206178b149c0e941b3584643e06f70c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_ardinghelli, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb  2 04:42:52 np0005604790 systemd[1]: libpod-97b9249290cb936f24af26ab79a0bc8c206178b149c0e941b3584643e06f70c9.scope: Deactivated successfully.
Feb  2 04:42:52 np0005604790 systemd[1]: libpod-97b9249290cb936f24af26ab79a0bc8c206178b149c0e941b3584643e06f70c9.scope: Consumed 1.153s CPU time.
Feb  2 04:42:52 np0005604790 systemd[1]: var-lib-containers-storage-overlay-445468c6b15ea299293546500dfafde5b6aaeb98b02cb0ac9889eff175500306-merged.mount: Deactivated successfully.
Feb  2 04:42:52 np0005604790 podman[106394]: 2026-02-02 09:42:52.858842601 +0000 UTC m=+1.098889743 container remove 97b9249290cb936f24af26ab79a0bc8c206178b149c0e941b3584643e06f70c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_ardinghelli, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 04:42:52 np0005604790 systemd[1]: libpod-conmon-97b9249290cb936f24af26ab79a0bc8c206178b149c0e941b3584643e06f70c9.scope: Deactivated successfully.
Feb  2 04:42:52 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 04:42:52 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 9.1f deep-scrub starts
Feb  2 04:42:52 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:52 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 04:42:52 np0005604790 ceph-osd[82705]: log_channel(cluster) log [DBG] : 9.1f deep-scrub ok
Feb  2 04:42:52 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:53 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:53 np0005604790 python3.9[106754]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 04:42:53 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:53 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c54003db0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:42:53 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:42:53 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:42:53 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:42:53.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:42:53 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v45: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Feb  2 04:42:53 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0)
Feb  2 04:42:53 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Feb  2 04:42:54 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Feb  2 04:42:54 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Feb  2 04:42:54 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Feb  2 04:42:54 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Feb  2 04:42:54 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:54 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c50001f70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:42:54 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:42:54 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Feb  2 04:42:54 np0005604790 python3.9[106910]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 04:42:54 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:42:54 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:42:54 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:42:54.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:42:54 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:54 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c70004830 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:42:54 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 99 pg[9.11( empty local-lis/les=0/0 n=0 ec=57/38 lis/c=57/57 les/c/f=58/58/0 sis=99) [1] r=0 lpr=99 pi=[57,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:42:54 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:09:42:54] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Feb  2 04:42:54 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:09:42:54] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Feb  2 04:42:54 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:54 : epoch 6980713d : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 04:42:54 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:54 : epoch 6980713d : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 04:42:55 np0005604790 python3.9[107062]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 04:42:55 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Feb  2 04:42:55 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Feb  2 04:42:55 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Feb  2 04:42:55 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Feb  2 04:42:55 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 100 pg[9.11( empty local-lis/les=0/0 n=0 ec=57/38 lis/c=57/57 les/c/f=58/58/0 sis=100) [1]/[0] r=-1 lpr=100 pi=[57,100)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:42:55 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 100 pg[9.11( empty local-lis/les=0/0 n=0 ec=57/38 lis/c=57/57 les/c/f=58/58/0 sis=100) [1]/[0] r=-1 lpr=100 pi=[57,100)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  2 04:42:55 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:55 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c74003cc0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:42:55 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:42:55 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:42:55 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:42:55.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:42:55 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v48: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Feb  2 04:42:55 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0)
Feb  2 04:42:55 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Feb  2 04:42:55 np0005604790 python3.9[107213]: ansible-ansible.builtin.service_facts Invoked
Feb  2 04:42:56 np0005604790 network[107232]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb  2 04:42:56 np0005604790 network[107233]: 'network-scripts' will be removed from distribution in near future.
Feb  2 04:42:56 np0005604790 network[107234]: It is advised to switch to 'NetworkManager' instead for network management.
Feb  2 04:42:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:56 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c54003db0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:42:56 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Feb  2 04:42:56 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Feb  2 04:42:56 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Feb  2 04:42:56 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Feb  2 04:42:56 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 101 pg[9.12( empty local-lis/les=0/0 n=0 ec=57/38 lis/c=57/57 les/c/f=58/58/0 sis=101) [1] r=0 lpr=101 pi=[57,101)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:42:56 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Feb  2 04:42:56 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e101 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 04:42:56 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Feb  2 04:42:56 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Feb  2 04:42:56 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Feb  2 04:42:56 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 102 pg[9.11( v 44'1041 (0'0,44'1041] local-lis/les=0/0 n=5 ec=57/38 lis/c=100/57 les/c/f=101/58/0 sis=102) [1] r=0 lpr=102 pi=[57,102)/1 luod=0'0 crt=44'1041 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:42:56 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 102 pg[9.11( v 44'1041 (0'0,44'1041] local-lis/les=0/0 n=5 ec=57/38 lis/c=100/57 les/c/f=101/58/0 sis=102) [1] r=0 lpr=102 pi=[57,102)/1 crt=44'1041 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:42:56 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 102 pg[9.12( empty local-lis/les=0/0 n=0 ec=57/38 lis/c=57/57 les/c/f=58/58/0 sis=102) [1]/[0] r=-1 lpr=102 pi=[57,102)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:42:56 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 102 pg[9.12( empty local-lis/les=0/0 n=0 ec=57/38 lis/c=57/57 les/c/f=58/58/0 sis=102) [1]/[0] r=-1 lpr=102 pi=[57,102)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  2 04:42:56 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:42:56 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.003000082s ======
Feb  2 04:42:56 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:42:56.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000082s
Feb  2 04:42:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:56 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c640014d0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:42:57 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Feb  2 04:42:57 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Feb  2 04:42:57 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Feb  2 04:42:57 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Feb  2 04:42:57 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 103 pg[9.11( v 44'1041 (0'0,44'1041] local-lis/les=102/103 n=5 ec=57/38 lis/c=100/57 les/c/f=101/58/0 sis=102) [1] r=0 lpr=102 pi=[57,102)/1 crt=44'1041 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:42:57 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:57 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c70004830 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:42:57 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:42:57 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:42:57 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:42:57.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:42:57 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v52: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Feb  2 04:42:57 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0)
Feb  2 04:42:57 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Feb  2 04:42:57 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:57 : epoch 6980713d : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Feb  2 04:42:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:58 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c74003cc0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:42:58 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Feb  2 04:42:58 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Feb  2 04:42:58 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Feb  2 04:42:58 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Feb  2 04:42:58 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Feb  2 04:42:58 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 104 pg[9.12( v 44'1041 (0'0,44'1041] local-lis/les=0/0 n=4 ec=57/38 lis/c=102/57 les/c/f=103/58/0 sis=104) [1] r=0 lpr=104 pi=[57,104)/1 luod=0'0 crt=44'1041 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:42:58 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 104 pg[9.12( v 44'1041 (0'0,44'1041] local-lis/les=0/0 n=4 ec=57/38 lis/c=102/57 les/c/f=103/58/0 sis=104) [1] r=0 lpr=104 pi=[57,104)/1 crt=44'1041 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:42:58 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:42:58 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:42:58 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:42:58.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:42:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:58 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c58001090 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:42:59 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Feb  2 04:42:59 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Feb  2 04:42:59 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Feb  2 04:42:59 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Feb  2 04:42:59 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 105 pg[9.12( v 44'1041 (0'0,44'1041] local-lis/les=104/105 n=4 ec=57/38 lis/c=102/57 les/c/f=103/58/0 sis=104) [1] r=0 lpr=104 pi=[57,104)/1 crt=44'1041 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:42:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:42:59 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c640014d0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:42:59 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:42:59 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:42:59 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:42:59.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:42:59 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v55: 353 pgs: 1 active+remapped, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Feb  2 04:42:59 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0)
Feb  2 04:42:59 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Feb  2 04:43:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:43:00 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c70004830 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:43:00 np0005604790 python3.9[107499]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:43:00 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Feb  2 04:43:00 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:43:00 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:43:00 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:43:00.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:43:00 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Feb  2 04:43:00 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Feb  2 04:43:00 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Feb  2 04:43:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:43:00 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c74003cc0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:43:00 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Feb  2 04:43:00 np0005604790 python3.9[107649]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 04:43:01 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 04:43:01 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Feb  2 04:43:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:43:01 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c58001090 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:43:01 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:43:01 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:43:01 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:43:01.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:43:01 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v57: 353 pgs: 1 active+remapped, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Feb  2 04:43:01 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0)
Feb  2 04:43:01 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Feb  2 04:43:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:43:02 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c640014d0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:43:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 04:43:02 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 04:43:02 np0005604790 python3.9[107805]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 04:43:02 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:43:02 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:43:02 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:43:02.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:43:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:43:02 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c70004830 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:43:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Feb  2 04:43:02 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Feb  2 04:43:02 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Feb  2 04:43:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Feb  2 04:43:02 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Feb  2 04:43:02 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 107 pg[9.15( empty local-lis/les=0/0 n=0 ec=57/38 lis/c=73/73 les/c/f=74/74/0 sis=107) [1] r=0 lpr=107 pi=[73,107)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:43:03 np0005604790 python3.9[107963]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb  2 04:43:03 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Feb  2 04:43:03 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Feb  2 04:43:03 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Feb  2 04:43:03 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 108 pg[9.15( empty local-lis/les=0/0 n=0 ec=57/38 lis/c=73/73 les/c/f=74/74/0 sis=108) [1]/[2] r=-1 lpr=108 pi=[73,108)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:43:03 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 108 pg[9.15( empty local-lis/les=0/0 n=0 ec=57/38 lis/c=73/73 les/c/f=74/74/0 sis=108) [1]/[2] r=-1 lpr=108 pi=[73,108)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb  2 04:43:03 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Feb  2 04:43:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-nfs-cephfs-compute-0-ooxkuo[97796]: [WARNING] 032/094303 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Feb  2 04:43:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:43:03 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c74003cc0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:43:03 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:43:03 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:43:03 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:43:03.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:43:03 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v60: 353 pgs: 1 unknown, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 226 B/s rd, 0 B/s wr, 0 op/s
Feb  2 04:43:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:43:04 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c58001090 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:43:04 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:43:04 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000028s ======
Feb  2 04:43:04 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:43:04.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb  2 04:43:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:43:04 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c640014d0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:43:04 np0005604790 python3.9[108049]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  2 04:43:04 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Feb  2 04:43:04 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Feb  2 04:43:04 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Feb  2 04:43:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:09:43:04] "GET /metrics HTTP/1.1" 200 48329 "" "Prometheus/2.51.0"
Feb  2 04:43:04 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:09:43:04] "GET /metrics HTTP/1.1" 200 48329 "" "Prometheus/2.51.0"
Feb  2 04:43:05 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Feb  2 04:43:05 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Feb  2 04:43:05 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Feb  2 04:43:05 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 110 pg[9.15( v 44'1041 (0'0,44'1041] local-lis/les=0/0 n=4 ec=57/38 lis/c=108/73 les/c/f=109/74/0 sis=110) [1] r=0 lpr=110 pi=[73,110)/1 luod=0'0 crt=44'1041 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:43:05 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 110 pg[9.15( v 44'1041 (0'0,44'1041] local-lis/les=0/0 n=4 ec=57/38 lis/c=108/73 les/c/f=109/74/0 sis=110) [1] r=0 lpr=110 pi=[73,110)/1 crt=44'1041 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb  2 04:43:05 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:43:05 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c70004830 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:43:05 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:43:05 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:43:05 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:43:05.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:43:05 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v63: 353 pgs: 1 unknown, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Feb  2 04:43:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:43:06 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c74003cc0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:43:06 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 04:43:06 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:43:06 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:43:06 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:43:06.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:43:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:43:06 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c640014d0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:43:06 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Feb  2 04:43:06 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Feb  2 04:43:06 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Feb  2 04:43:06 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 111 pg[9.15( v 44'1041 (0'0,44'1041] local-lis/les=110/111 n=4 ec=57/38 lis/c=108/73 les/c/f=109/74/0 sis=110) [1] r=0 lpr=110 pi=[73,110)/1 crt=44'1041 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:43:07 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:43:07 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c58001090 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:43:07 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:43:07 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:43:07 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:43:07.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:43:07 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v65: 353 pgs: 1 unknown, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Feb  2 04:43:08 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:43:08 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c70004830 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:43:08 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:43:08 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:43:08 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:43:08.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:43:08 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:43:08 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c74003cc0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:43:09 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:43:09 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c640014d0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:43:09 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:43:09 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:43:09 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:43:09.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:43:09 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v66: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s; 36 B/s, 1 objects/s recovering
Feb  2 04:43:09 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0)
Feb  2 04:43:09 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Feb  2 04:43:09 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Feb  2 04:43:09 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Feb  2 04:43:09 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Feb  2 04:43:09 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Feb  2 04:43:09 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Feb  2 04:43:10 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:43:10 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c58003380 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:43:10 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:43:10 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:43:10 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:43:10.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:43:10 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:43:10 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c70004830 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:43:10 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 112 pg[9.16( v 44'1041 (0'0,44'1041] local-lis/les=74/75 n=4 ec=57/38 lis/c=74/74 les/c/f=75/75/0 sis=112 pruub=15.463147163s) [2] r=-1 lpr=112 pi=[74,112)/1 crt=44'1041 mlcod 0'0 active pruub 255.635910034s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:43:10 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 112 pg[9.16( v 44'1041 (0'0,44'1041] local-lis/les=74/75 n=4 ec=57/38 lis/c=74/74 les/c/f=75/75/0 sis=112 pruub=15.462944031s) [2] r=-1 lpr=112 pi=[74,112)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY pruub 255.635910034s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 04:43:10 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Feb  2 04:43:10 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Feb  2 04:43:10 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Feb  2 04:43:11 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Feb  2 04:43:11 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 113 pg[9.16( v 44'1041 (0'0,44'1041] local-lis/les=74/75 n=4 ec=57/38 lis/c=74/74 les/c/f=75/75/0 sis=113) [2]/[1] r=0 lpr=113 pi=[74,113)/1 crt=44'1041 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:43:11 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 113 pg[9.16( v 44'1041 (0'0,44'1041] local-lis/les=74/75 n=4 ec=57/38 lis/c=74/74 les/c/f=75/75/0 sis=113) [2]/[1] r=0 lpr=113 pi=[74,113)/1 crt=44'1041 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  2 04:43:11 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 04:43:11 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:43:11 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c70004830 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:43:11 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:43:11 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:43:11 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:43:11.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:43:11 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v69: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s; 36 B/s, 1 objects/s recovering
Feb  2 04:43:11 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0)
Feb  2 04:43:11 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Feb  2 04:43:11 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Feb  2 04:43:12 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Feb  2 04:43:12 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Feb  2 04:43:12 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Feb  2 04:43:12 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Feb  2 04:43:12 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 114 pg[9.16( v 44'1041 (0'0,44'1041] local-lis/les=113/114 n=4 ec=57/38 lis/c=74/74 les/c/f=75/75/0 sis=113) [2]/[1] async=[2] r=0 lpr=113 pi=[74,113)/1 crt=44'1041 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
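
The osd.1 lines trace each pgp_num step at the PG level: every new osdmap interval fires PeeringState::start_peering_interval, where "up" is the CRUSH-computed OSD set and "acting" is the set actually serving I/O. A bracket pair such as [2]/[1] means up=[2] but acting=[1] (remapped), role 0 marks the primary and -1 a replica or stray, and the cycle completes with "AllReplicasActivated Activating complete", as pg 9.16 just did. A small extractor for these transitions, assuming the exact line format shown here:

    import re

    # Matches "... pg[9.16( ... start_peering_interval up [1] -> [2], acting [1] -> [2] ..."
    TRANSITION_RE = re.compile(
        r'pg\[(?P<pgid>[\d.a-f]+)\(.*?start_peering_interval '
        r'up \[(?P<up_old>[\d,]*)\] -> \[(?P<up_new>[\d,]*)\], '
        r'acting \[(?P<act_old>[\d,]*)\] -> \[(?P<act_new>[\d,]*)\]'
    )

    def peering_transition(line):
        m = TRANSITION_RE.search(line)
        if not m:
            return None
        d = m.groupdict()
        # up != acting after the change means the PG serves from a remapped
        # acting set until backfill lets CRUSH's choice take over.
        d['remapped'] = d['up_new'] != d['act_new']
        return d
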
Feb  2 04:43:12 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:43:12 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c640014d0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:43:12 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:43:12 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:43:12 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:43:12.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:43:12 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:43:12 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c70004830 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:43:13 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Feb  2 04:43:13 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Feb  2 04:43:13 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Feb  2 04:43:13 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Feb  2 04:43:13 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 115 pg[9.16( v 44'1041 (0'0,44'1041] local-lis/les=113/114 n=4 ec=57/38 lis/c=113/74 les/c/f=114/75/0 sis=115 pruub=14.340683937s) [2] async=[2] r=-1 lpr=115 pi=[74,115)/1 crt=44'1041 mlcod 44'1041 active pruub 257.355560303s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:43:13 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 115 pg[9.16( v 44'1041 (0'0,44'1041] local-lis/les=113/114 n=4 ec=57/38 lis/c=113/74 les/c/f=114/75/0 sis=115 pruub=14.340615273s) [2] r=-1 lpr=115 pi=[74,115)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY pruub 257.355560303s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 04:43:13 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:43:13 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c74003cc0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:43:13 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:43:13 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:43:13 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:43:13.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:43:13 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v72: 353 pgs: 1 active+remapped, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s; 27 B/s, 0 objects/s recovering
Feb  2 04:43:13 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0)
Feb  2 04:43:13 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Feb  2 04:43:14 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:43:14 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c58003380 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:43:14 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:43:14 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:43:14 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:43:14.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:43:14 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:43:14 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c64001670 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:43:14 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Feb  2 04:43:14 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Feb  2 04:43:14 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Feb  2 04:43:14 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Feb  2 04:43:14 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Feb  2 04:43:14 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:09:43:14] "GET /metrics HTTP/1.1" 200 48329 "" "Prometheus/2.51.0"
Feb  2 04:43:14 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:09:43:14] "GET /metrics HTTP/1.1" 200 48329 "" "Prometheus/2.51.0"
Feb  2 04:43:15 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Feb  2 04:43:15 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:43:15 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c70004830 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:43:15 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v74: 353 pgs: 1 active+remapped, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 209 B/s rd, 0 op/s; 22 B/s, 0 objects/s recovering
Feb  2 04:43:15 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0)
Feb  2 04:43:15 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Feb  2 04:43:15 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:43:15 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:43:15 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:43:15.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:43:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:43:16 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c74003cc0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:43:16 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 04:43:16 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:43:16 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000028s ======
Feb  2 04:43:16 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:43:16.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb  2 04:43:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:43:16 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c58003380 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:43:16 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Feb  2 04:43:16 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Feb  2 04:43:16 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Feb  2 04:43:16 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Feb  2 04:43:16 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Feb  2 04:43:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] Optimize plan auto_2026-02-02_09:43:17
Feb  2 04:43:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 04:43:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] do_upmap
Feb  2 04:43:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.meta', 'volumes', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.control', 'default.rgw.log', '.nfs', '.rgw.root', '.mgr', 'images', 'vms']
Feb  2 04:43:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 04:43:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 04:43:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:43:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 04:43:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:43:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 04:43:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:43:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 04:43:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:43:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 04:43:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:43:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 04:43:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:43:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Feb  2 04:43:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:43:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 04:43:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:43:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 04:43:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:43:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.225674773718825e-06 of space, bias 1.0, pg target 0.0006677024321156476 quantized to 32 (current 32)
Feb  2 04:43:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:43:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 04:43:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:43:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 04:43:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:43:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
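
The pg_autoscaler figures above are internally consistent: each reported "pg target" equals capacity_ratio * bias * 300, which is what you would expect on this 3-OSD cluster with the default mon_target_pg_per_osd = 100 (3 * 100 = 300; the factor is an assumption consistent with the data, not something the log states). The tiny fractional targets are then quantized, which is why nearly every pool is left at its current 32 PGs. A quick arithmetic check against the logged values:

    # Values copied verbatim from the pg_autoscaler lines above.
    NUM_OSDS, TARGET_PG_PER_OSD = 3, 100  # assumed: 3 OSDs, default target

    pools = {  # name: (capacity_ratio, bias, logged pg target)
        '.mgr':               (7.185749983720779e-06, 1.0, 0.0021557249951162337),
        'cephfs.cephfs.meta': (5.087256625643029e-07, 4.0, 0.0006104707950771635),
        'default.rgw.log':    (2.225674773718825e-06, 1.0, 0.0006677024321156476),
    }
    for name, (ratio, bias, logged) in pools.items():
        computed = ratio * bias * NUM_OSDS * TARGET_PG_PER_OSD
        assert abs(computed - logged) < 1e-15, name
        print(f'{name}: {computed:.16g} matches {logged:.16g}')
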
Feb  2 04:43:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 04:43:17 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 04:43:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:43:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:43:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:43:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:43:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 04:43:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 04:43:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 04:43:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 04:43:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 04:43:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:43:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:43:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 04:43:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 04:43:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 04:43:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 04:43:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 04:43:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Feb  2 04:43:17 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Feb  2 04:43:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Feb  2 04:43:17 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Feb  2 04:43:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:43:17 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c640046f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:43:17 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v77: 353 pgs: 1 active+remapped, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Feb  2 04:43:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0)
Feb  2 04:43:17 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Feb  2 04:43:17 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:43:17 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:43:17 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:43:17.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:43:18 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:43:18 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c70004850 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:43:18 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:43:18 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:43:18 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:43:18.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:43:18 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:43:18 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c74003cc0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:43:18 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Feb  2 04:43:19 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Feb  2 04:43:19 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Feb  2 04:43:19 np0005604790 ceph-mgr[74785]: [dashboard INFO request] [192.168.122.100:42460] [POST] [200] [0.153s] [4.0B] [3bd48467-3cda-4bed-821e-fd51372f8c41] /api/prometheus_receiver
Feb  2 04:43:19 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Feb  2 04:43:19 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Feb  2 04:43:19 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 119 pg[9.1a( v 44'1041 (0'0,44'1041] local-lis/les=84/85 n=4 ec=57/38 lis/c=84/84 les/c/f=85/85/0 sis=119 pruub=15.102936745s) [0] r=-1 lpr=119 pi=[84,119)/1 crt=44'1041 mlcod 0'0 active pruub 263.633850098s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:43:19 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 119 pg[9.1a( v 44'1041 (0'0,44'1041] local-lis/les=84/85 n=4 ec=57/38 lis/c=84/84 les/c/f=85/85/0 sis=119 pruub=15.102888107s) [0] r=-1 lpr=119 pi=[84,119)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY pruub 263.633850098s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 04:43:19 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:43:19 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c58003380 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:43:19 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v79: 353 pgs: 1 remapped+peering, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 397 B/s rd, 0 op/s
Feb  2 04:43:19 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:43:19 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000028s ======
Feb  2 04:43:19 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:43:19.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb  2 04:43:20 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:43:20 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c640046f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:43:20 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Feb  2 04:43:20 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Feb  2 04:43:20 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Feb  2 04:43:20 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Feb  2 04:43:20 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 120 pg[9.1a( v 44'1041 (0'0,44'1041] local-lis/les=84/85 n=4 ec=57/38 lis/c=84/84 les/c/f=85/85/0 sis=120) [0]/[1] r=0 lpr=120 pi=[84,120)/1 crt=44'1041 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:43:20 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 120 pg[9.1a( v 44'1041 (0'0,44'1041] local-lis/les=84/85 n=4 ec=57/38 lis/c=84/84 les/c/f=85/85/0 sis=120) [0]/[1] r=0 lpr=120 pi=[84,120)/1 crt=44'1041 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  2 04:43:20 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:43:20 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:43:20 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:43:20.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:43:20 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:43:20 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c70004870 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:43:21 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Feb  2 04:43:21 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Feb  2 04:43:21 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Feb  2 04:43:21 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 121 pg[9.1a( v 44'1041 (0'0,44'1041] local-lis/les=120/121 n=4 ec=57/38 lis/c=84/84 les/c/f=85/85/0 sis=120) [0]/[1] async=[0] r=0 lpr=120 pi=[84,120)/1 crt=44'1041 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:43:21 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb  2 04:43:21 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Feb  2 04:43:21 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Feb  2 04:43:21 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 122 pg[9.1a( v 44'1041 (0'0,44'1041] local-lis/les=120/121 n=4 ec=57/38 lis/c=120/84 les/c/f=121/85/0 sis=122 pruub=15.928081512s) [0] async=[0] r=-1 lpr=122 pi=[84,122)/1 crt=44'1041 mlcod 44'1041 active pruub 266.560791016s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:43:21 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 122 pg[9.1a( v 44'1041 (0'0,44'1041] local-lis/les=120/121 n=4 ec=57/38 lis/c=120/84 les/c/f=121/85/0 sis=122 pruub=15.927985191s) [0] r=-1 lpr=122 pi=[84,122)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY pruub 266.560791016s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 04:43:21 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Feb  2 04:43:21 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:43:21 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c74003cc0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:43:21 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v83: 353 pgs: 1 remapped+peering, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 04:43:21 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:43:21 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:43:21 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:43:21.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:43:22 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:43:22 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c58004590 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:43:22 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Feb  2 04:43:22 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:43:22 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000028s ======
Feb  2 04:43:22 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:43:22.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb  2 04:43:22 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:43:22 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c58004590 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:43:22 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e123 e123: 3 total, 3 up, 3 in
Feb  2 04:43:22 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e123: 3 total, 3 up, 3 in
Feb  2 04:43:23 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:43:23 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c70004890 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:43:23 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v85: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s; 54 B/s, 2 objects/s recovering
Feb  2 04:43:23 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0)
Feb  2 04:43:23 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Feb  2 04:43:23 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:43:23 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000028s ======
Feb  2 04:43:23 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:43:23.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb  2 04:43:24 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:43:24 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c74003cc0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:43:24 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Feb  2 04:43:24 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Feb  2 04:43:24 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
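The audit entries show the mgr walking pgp_num_actual up one step at a time (28 here, then 29, 30, 31 and 32 below), so the default.rgw.log pool's placement groups are re-split gradually rather than all at once; each step drives the small remap/peer cycles logged by osd.1. A sketch for watching pgp_num converge on pg_num, assuming the `ceph` CLI is available on this host:

    import json, subprocess, time

    POOL = "default.rgw.log"   # pool named in the audit entries above
    while True:
        out = subprocess.run(["ceph", "osd", "pool", "get", POOL, "all",
                              "--format", "json"],
                             check=True, capture_output=True, text=True).stdout
        info = json.loads(out)
        print("pg_num", info["pg_num"], "pgp_num", info["pgp_num"])
        if info["pgp_num"] >= info["pg_num"]:
            break
        time.sleep(5)   # arbitrary poll interval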
Feb  2 04:43:24 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e124 e124: 3 total, 3 up, 3 in
Feb  2 04:43:24 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e124: 3 total, 3 up, 3 in
Feb  2 04:43:24 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:43:24 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:43:24 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:43:24.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:43:24 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:43:24 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c58004590 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:43:24 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:09:43:24] "GET /metrics HTTP/1.1" 200 48323 "" "Prometheus/2.51.0"
Feb  2 04:43:24 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:09:43:24] "GET /metrics HTTP/1.1" 200 48323 "" "Prometheus/2.51.0"
Feb  2 04:43:25 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Feb  2 04:43:25 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Feb  2 04:43:25 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e125 e125: 3 total, 3 up, 3 in
Feb  2 04:43:25 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e125: 3 total, 3 up, 3 in
Feb  2 04:43:25 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:43:25 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c58004590 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:43:25 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v88: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 224 B/s rd, 0 op/s; 48 B/s, 2 objects/s recovering
Feb  2 04:43:25 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0)
Feb  2 04:43:25 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Feb  2 04:43:25 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:43:25 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:43:25 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:43:25.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:43:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:43:26 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c700048b0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:43:26 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:43:26 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:43:26 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:43:26 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:43:26.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:43:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:43:26 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c74003ce0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:43:26 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e125 do_prune osdmap full prune enabled
Feb  2 04:43:26 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Feb  2 04:43:26 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Feb  2 04:43:26 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e126 e126: 3 total, 3 up, 3 in
Feb  2 04:43:26 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e126: 3 total, 3 up, 3 in
Feb  2 04:43:27 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e126 do_prune osdmap full prune enabled
Feb  2 04:43:27 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e127 e127: 3 total, 3 up, 3 in
Feb  2 04:43:27 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e127: 3 total, 3 up, 3 in
Feb  2 04:43:27 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Feb  2 04:43:27 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:43:27 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c74003ce0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:43:27 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v91: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Feb  2 04:43:27 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0)
Feb  2 04:43:27 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Feb  2 04:43:27 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:43:27 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:43:27 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:43:27.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:43:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:43:28 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c74003ce0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:43:28 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:43:28 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:43:28 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:43:28.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:43:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:43:28 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c74003ce0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:43:28 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e127 do_prune osdmap full prune enabled
Feb  2 04:43:28 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Feb  2 04:43:28 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Feb  2 04:43:28 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e128 e128: 3 total, 3 up, 3 in
Feb  2 04:43:28 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e128: 3 total, 3 up, 3 in
Feb  2 04:43:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:43:28.847Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb  2 04:43:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:43:28.848Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb  2 04:43:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:43:28.848Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
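Alertmanager cannot deliver dashboard webhooks to the other mgr hosts: the POSTs to compute-1 and compute-2 on port 8443 fail with dial timeouts or context-deadline errors, so these alert notifications are dropped after retries. A minimal reachability probe for the two receivers, reusing the hostnames and port from the error text (the 5 s timeout is arbitrary):

    import socket

    for host in ("compute-1.ctlplane.example.com",
                 "compute-2.ctlplane.example.com"):
        try:
            with socket.create_connection((host, 8443), timeout=5):
                print(host, "port 8443 reachable")
        except OSError as exc:
            print(host, "port 8443 unreachable:", exc)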
Feb  2 04:43:28 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 128 pg[9.1d( v 44'1041 (0'0,44'1041] local-lis/les=90/91 n=5 ec=57/38 lis/c=90/90 les/c/f=91/91/0 sis=128 pruub=8.703561783s) [2] r=-1 lpr=128 pi=[90,128)/1 crt=44'1041 mlcod 0'0 active pruub 266.976745605s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:43:28 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 128 pg[9.1d( v 44'1041 (0'0,44'1041] local-lis/les=90/91 n=5 ec=57/38 lis/c=90/90 les/c/f=91/91/0 sis=128 pruub=8.703369141s) [2] r=-1 lpr=128 pi=[90,128)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY pruub 266.976745605s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 04:43:29 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e128 do_prune osdmap full prune enabled
Feb  2 04:43:29 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Feb  2 04:43:29 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e129 e129: 3 total, 3 up, 3 in
Feb  2 04:43:29 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e129: 3 total, 3 up, 3 in
Feb  2 04:43:29 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 129 pg[9.1d( v 44'1041 (0'0,44'1041] local-lis/les=90/91 n=5 ec=57/38 lis/c=90/90 les/c/f=91/91/0 sis=129) [2]/[1] r=0 lpr=129 pi=[90,129)/1 crt=44'1041 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:43:29 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 129 pg[9.1d( v 44'1041 (0'0,44'1041] local-lis/les=90/91 n=5 ec=57/38 lis/c=90/90 les/c/f=91/91/0 sis=129) [2]/[1] r=0 lpr=129 pi=[90,129)/1 crt=44'1041 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  2 04:43:29 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:43:29 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c54002830 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:43:29 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v94: 353 pgs: 1 peering, 352 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s; 27 B/s, 0 objects/s recovering
Feb  2 04:43:29 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:43:29 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:43:29 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:43:29.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:43:30 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:43:30 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c640046f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:43:30 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:43:30 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:43:30 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:43:30.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:43:30 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:43:30 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c50002690 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:43:30 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e129 do_prune osdmap full prune enabled
Feb  2 04:43:30 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e130 e130: 3 total, 3 up, 3 in
Feb  2 04:43:30 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e130: 3 total, 3 up, 3 in
Feb  2 04:43:30 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 130 pg[9.1d( v 44'1041 (0'0,44'1041] local-lis/les=129/130 n=5 ec=57/38 lis/c=90/90 les/c/f=91/91/0 sis=129) [2]/[1] async=[2] r=0 lpr=129 pi=[90,129)/1 crt=44'1041 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:43:31 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:43:31 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Feb  2 04:43:31 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Feb  2 04:43:31 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Feb  2 04:43:31 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 131 pg[9.1d( v 44'1041 (0'0,44'1041] local-lis/les=129/130 n=5 ec=57/38 lis/c=129/90 les/c/f=130/91/0 sis=131 pruub=15.420552254s) [2] async=[2] r=-1 lpr=131 pi=[90,131)/1 crt=44'1041 mlcod 44'1041 active pruub 276.060241699s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:43:31 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 131 pg[9.1d( v 44'1041 (0'0,44'1041] local-lis/les=129/130 n=5 ec=57/38 lis/c=129/90 les/c/f=130/91/0 sis=131 pruub=15.420440674s) [2] r=-1 lpr=131 pi=[90,131)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY pruub 276.060241699s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 04:43:31 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:43:31 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c700048d0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:43:31 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v97: 353 pgs: 1 peering, 352 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s; 27 B/s, 0 objects/s recovering
Feb  2 04:43:31 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:43:31 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:43:31 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:43:31.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:43:32 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:43:32 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c54002830 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:43:32 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 04:43:32 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
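The mgr audits the OSD blocklist on a timer (the same query repeats at 04:43:47 below). The equivalent query run by hand, assuming the `ceph` CLI and a keyring on this host; the JSON field names are an assumption based on typical blocklist output, not shown in this log:

    import json, subprocess

    out = subprocess.run(["ceph", "osd", "blocklist", "ls", "--format", "json"],
                         check=True, capture_output=True, text=True).stdout
    entries = json.loads(out)
    print(len(entries), "blocklist entries")
    for e in entries:
        print(e.get("addr"), "expires", e.get("until"))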
Feb  2 04:43:32 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Feb  2 04:43:32 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Feb  2 04:43:32 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Feb  2 04:43:32 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:43:32 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.002000055s ======
Feb  2 04:43:32 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:43:32.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000055s
Feb  2 04:43:32 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:43:32 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c640046f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:43:33 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-nfs-cephfs-compute-0-ooxkuo[97796]: [WARNING] 032/094333 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
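The NFS ingress haproxy drops backend nfs.cephfs.1 on a failed Layer4 (plain TCP) check, leaving two of three ganesha backends; it returns at 04:43:53 below once the check passes again. A small extractor for these WARNING lines, fitted to the two samples in this log:

    import re

    HAP = re.compile(r"Server (?P<backend>\S+) is (?P<state>UP|DOWN), "
                     r"reason: (?P<reason>[^,]+),.*?(?P<active>\d+) active")
    line = ('[WARNING] 032/094333 (4) : Server backend/nfs.cephfs.1 is DOWN, '
            'reason: Layer4 connection problem, info: "Connection refused", '
            'check duration: 0ms. 2 active and 0 backup servers left.')
    m = HAP.search(line)
    print(m.group("backend"), m.group("state"), "active:", m.group("active"))
    # -> backend/nfs.cephfs.1 DOWN active: 2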
Feb  2 04:43:33 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:43:33 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c50002690 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:43:33 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v99: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 52 B/s, 1 objects/s recovering
Feb  2 04:43:33 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0)
Feb  2 04:43:33 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Feb  2 04:43:33 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:43:33 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:43:33 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:43:33.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:43:33 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Feb  2 04:43:33 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Feb  2 04:43:33 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Feb  2 04:43:33 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Feb  2 04:43:33 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 133 pg[9.1e( v 44'1041 (0'0,44'1041] local-lis/les=74/75 n=5 ec=57/38 lis/c=74/74 les/c/f=75/75/0 sis=133 pruub=8.360290527s) [0] r=-1 lpr=133 pi=[74,133)/1 crt=44'1041 mlcod 0'0 active pruub 271.645385742s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:43:33 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 133 pg[9.1e( v 44'1041 (0'0,44'1041] local-lis/les=74/75 n=5 ec=57/38 lis/c=74/74 les/c/f=75/75/0 sis=133 pruub=8.360084534s) [0] r=-1 lpr=133 pi=[74,133)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY pruub 271.645385742s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 04:43:33 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Feb  2 04:43:34 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:43:34 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c700048d0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:43:34 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:43:34 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:43:34 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:43:34.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:43:34 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:43:34 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c54002830 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:43:34 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:09:43:34] "GET /metrics HTTP/1.1" 200 48335 "" "Prometheus/2.51.0"
Feb  2 04:43:34 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:09:43:34] "GET /metrics HTTP/1.1" 200 48335 "" "Prometheus/2.51.0"
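Prometheus scrapes the active mgr's prometheus module every 10 s (04:43:24, :34, :44), pulling ~48 KB of metrics each time. A sketch that fetches the same endpoint; the module listens on port 9283 by default, which is an assumption here since the access log does not record the port:

    import urllib.request

    URL = "http://192.168.122.100:9283/metrics"   # default mgr prometheus port (assumed)
    with urllib.request.urlopen(URL, timeout=5) as resp:
        body = resp.read().decode()
    for line in body.splitlines():
        if line.startswith("ceph_health_status"):
            print(line)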
Feb  2 04:43:34 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Feb  2 04:43:34 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Feb  2 04:43:35 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Feb  2 04:43:35 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Feb  2 04:43:35 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 134 pg[9.1e( v 44'1041 (0'0,44'1041] local-lis/les=74/75 n=5 ec=57/38 lis/c=74/74 les/c/f=75/75/0 sis=134) [0]/[1] r=0 lpr=134 pi=[74,134)/1 crt=44'1041 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:43:35 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 134 pg[9.1e( v 44'1041 (0'0,44'1041] local-lis/les=74/75 n=5 ec=57/38 lis/c=74/74 les/c/f=75/75/0 sis=134) [0]/[1] r=0 lpr=134 pi=[74,134)/1 crt=44'1041 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  2 04:43:35 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:43:35 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c640046f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:43:35 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v102: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 48 B/s, 1 objects/s recovering
Feb  2 04:43:35 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0)
Feb  2 04:43:35 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Feb  2 04:43:35 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:43:35 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:43:35 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:43:35.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:43:36 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Feb  2 04:43:36 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Feb  2 04:43:36 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Feb  2 04:43:36 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Feb  2 04:43:36 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 135 pg[9.1f( v 44'1041 (0'0,44'1041] local-lis/les=96/97 n=5 ec=57/38 lis/c=96/96 les/c/f=97/97/0 sis=135 pruub=11.299188614s) [0] r=-1 lpr=135 pi=[96,135)/1 crt=44'1041 mlcod 0'0 active pruub 276.622467041s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:43:36 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 135 pg[9.1f( v 44'1041 (0'0,44'1041] local-lis/les=96/97 n=5 ec=57/38 lis/c=96/96 les/c/f=97/97/0 sis=135 pruub=11.299092293s) [0] r=-1 lpr=135 pi=[96,135)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY pruub 276.622467041s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 04:43:36 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Feb  2 04:43:36 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 135 pg[9.1e( v 44'1041 (0'0,44'1041] local-lis/les=134/135 n=5 ec=57/38 lis/c=74/74 les/c/f=75/75/0 sis=134) [0]/[1] async=[0] r=0 lpr=134 pi=[74,134)/1 crt=44'1041 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:43:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:43:36 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c50002690 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:43:36 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:43:36 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Feb  2 04:43:36 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Feb  2 04:43:36 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Feb  2 04:43:36 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 136 pg[9.1f( v 44'1041 (0'0,44'1041] local-lis/les=96/97 n=5 ec=57/38 lis/c=96/96 les/c/f=97/97/0 sis=136) [0]/[1] r=0 lpr=136 pi=[96,136)/1 crt=44'1041 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:43:36 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 136 pg[9.1f( v 44'1041 (0'0,44'1041] local-lis/les=96/97 n=5 ec=57/38 lis/c=96/96 les/c/f=97/97/0 sis=136) [0]/[1] r=0 lpr=136 pi=[96,136)/1 crt=44'1041 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  2 04:43:36 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 136 pg[9.1e( v 44'1041 (0'0,44'1041] local-lis/les=134/135 n=5 ec=57/38 lis/c=134/74 les/c/f=135/75/0 sis=136 pruub=15.670845032s) [0] async=[0] r=-1 lpr=136 pi=[74,136)/1 crt=44'1041 mlcod 44'1041 active pruub 281.347961426s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:43:36 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 136 pg[9.1e( v 44'1041 (0'0,44'1041] local-lis/les=134/135 n=5 ec=57/38 lis/c=134/74 les/c/f=135/75/0 sis=136 pruub=15.670753479s) [0] r=-1 lpr=136 pi=[74,136)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY pruub 281.347961426s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 04:43:36 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:43:36 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:43:36 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:43:36.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:43:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:43:36 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c700048d0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:43:37 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Feb  2 04:43:37 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Feb  2 04:43:37 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Feb  2 04:43:37 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Feb  2 04:43:37 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 137 pg[9.1f( v 44'1041 (0'0,44'1041] local-lis/les=136/137 n=5 ec=57/38 lis/c=96/96 les/c/f=97/97/0 sis=136) [0]/[1] async=[0] r=0 lpr=136 pi=[96,136)/1 crt=44'1041 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 04:43:37 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:43:37 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c54002830 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:43:37 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v106: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Feb  2 04:43:37 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:43:37 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:43:37 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:43:37.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:43:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:43:38 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c640046f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:43:38 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Feb  2 04:43:38 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Feb  2 04:43:38 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Feb  2 04:43:38 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 138 pg[9.1f( v 44'1041 (0'0,44'1041] local-lis/les=136/137 n=5 ec=57/38 lis/c=136/96 les/c/f=137/97/0 sis=138 pruub=14.984783173s) [0] async=[0] r=-1 lpr=138 pi=[96,138)/1 crt=44'1041 mlcod 44'1041 active pruub 282.670013428s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 04:43:38 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 138 pg[9.1f( v 44'1041 (0'0,44'1041] local-lis/les=136/137 n=5 ec=57/38 lis/c=136/96 les/c/f=137/97/0 sis=138 pruub=14.984680176s) [0] r=-1 lpr=138 pi=[96,138)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY pruub 282.670013428s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 04:43:38 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:43:38 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:43:38 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:43:38.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:43:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:43:38 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c50002690 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:43:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:43:38.849Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:43:39 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Feb  2 04:43:39 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Feb  2 04:43:39 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Feb  2 04:43:39 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:43:39 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c700048d0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:43:39 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v109: 353 pgs: 1 active+remapped, 352 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 526 B/s rd, 0 op/s; 28 B/s, 2 objects/s recovering
Feb  2 04:43:39 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:43:39 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:43:39 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:43:39.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:43:40 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:43:40 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c54002830 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:43:40 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:43:40 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:43:40 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:43:40.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:43:40 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:43:40 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c640046f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:43:41 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:43:41 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:43:41 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c50002690 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:43:41 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v110: 353 pgs: 1 active+remapped, 352 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 369 B/s rd, 0 op/s; 19 B/s, 1 objects/s recovering
Feb  2 04:43:41 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:43:41 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:43:41 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:43:41.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:43:42 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:43:42 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c700048d0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:43:42 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:43:42 : epoch 6980713d : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 04:43:42 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:43:42 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:43:42 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:43:42.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:43:42 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:43:42 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c54002830 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:43:43 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:43:43 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c640046f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:43:43 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v111: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1.1 KiB/s wr, 3 op/s; 16 B/s, 1 objects/s recovering
Feb  2 04:43:43 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:43:43 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000028s ======
Feb  2 04:43:43 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:43:43.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb  2 04:43:44 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:43:44 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c50002690 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:43:44 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:43:44 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:43:44 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:43:44.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:43:44 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:43:44 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c700048f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:43:44 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:09:43:44] "GET /metrics HTTP/1.1" 200 48335 "" "Prometheus/2.51.0"
Feb  2 04:43:44 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:09:43:44] "GET /metrics HTTP/1.1" 200 48335 "" "Prometheus/2.51.0"
Feb  2 04:43:45 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:43:45 : epoch 6980713d : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 04:43:45 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:43:45 : epoch 6980713d : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 04:43:45 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:43:45 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c54003a20 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:43:45 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v112: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 895 B/s wr, 2 op/s; 13 B/s, 1 objects/s recovering
Feb  2 04:43:45 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:43:45 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:43:45 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:43:45.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:43:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:43:46 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c640046f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:43:46 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:43:46 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:43:46 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:43:46 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:43:46.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:43:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:43:46 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c50003e10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:43:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 04:43:47 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 04:43:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:43:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:43:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:43:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f388ab74760>)]
Feb  2 04:43:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Feb  2 04:43:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:43:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f388ab749a0>)]
Feb  2 04:43:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Feb  2 04:43:47 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:43:47 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c70004910 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:43:47 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v113: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 752 B/s wr, 2 op/s
Feb  2 04:43:47 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:43:47 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:43:47 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:43:47.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:43:48 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:43:48 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c54003a20 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:43:48 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:43:48 : epoch 6980713d : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
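The grace sequence completes here: ganesha entered a 90 s grace window at 09:43:42, reloaded client reclaim state from the backend at 09:43:45, found nothing to reclaim (clid count(0)), and lifted grace early at 09:43:48 instead of waiting out the full window. The nominal end of the window, as a worked check:

    from datetime import datetime, timedelta

    start = datetime.strptime("2026-02-02 09:43:42", "%Y-%m-%d %H:%M:%S")
    print(start + timedelta(seconds=90))   # 2026-02-02 09:45:12
    # Grace lifted ~84 s early because no clients held reclaimable state.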
Feb  2 04:43:48 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:43:48 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:43:48 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:43:48.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:43:48 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:43:48 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c640046f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:43:48 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:43:48.850Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:43:49 np0005604790 ceph-mon[74489]: log_channel(cluster) log [DBG] : mgrmap e30: compute-0.djvyfo(active, since 92s), standbys: compute-2.gzlyac, compute-1.teascl
Feb  2 04:43:49 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:43:49 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c50003e10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:43:49 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v114: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1.3 KiB/s wr, 3 op/s
Feb  2 04:43:49 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:43:49 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:43:49 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:43:49.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:43:50 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:43:50 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c70004930 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:43:50 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:43:50 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:43:50 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:43:50.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:43:50 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:43:50 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c54003a20 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:43:51 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:43:51 np0005604790 python3.9[108455]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:43:51 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:43:51 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c640046f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:43:51 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v115: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 1.2 KiB/s wr, 3 op/s
Feb  2 04:43:51 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:43:51 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:43:51 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:43:51.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:43:52 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:43:52 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c50003e10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:43:52 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:43:52 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:43:52 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:43:52.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:43:52 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:43:52 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c70004950 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:43:53 np0005604790 python3.9[108761]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Feb  2 04:43:53 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-nfs-cephfs-compute-0-ooxkuo[97796]: [WARNING] 032/094353 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Feb  2 04:43:53 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:43:53 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c54003a20 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:43:53 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v116: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1.2 KiB/s wr, 3 op/s
Feb  2 04:43:53 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:43:53 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:43:53 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:43:53.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:43:54 np0005604790 podman[108910]: 2026-02-02 09:43:54.109381045 +0000 UTC m=+0.089865360 container exec 79ef7165b184aa21ab9e464efe33891b6304e1ba848414549f299d1b301d6783 (image=quay.io/ceph/ceph:v19, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mon-compute-0, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:43:54 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:43:54 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c640046f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:43:54 np0005604790 podman[108910]: 2026-02-02 09:43:54.254045094 +0000 UTC m=+0.234529319 container exec_died 79ef7165b184aa21ab9e464efe33891b6304e1ba848414549f299d1b301d6783 (image=quay.io/ceph/ceph:v19, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 04:43:54 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:43:54 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:43:54 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:43:54.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:43:54 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:43:54 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c50003e10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:43:54 np0005604790 python3.9[109092]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Feb  2 04:43:54 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:09:43:54] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Feb  2 04:43:54 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:09:43:54] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Feb  2 04:43:54 np0005604790 podman[109200]: 2026-02-02 09:43:54.90440452 +0000 UTC m=+0.073089240 container exec 19feecaa7fcd517b2bfb973fc8fcf623ad4b0956f16c53292d24fe12f4780190 (image=quay.io/ceph/haproxy:2.3, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-rgw-default-compute-0-avekxu)
Feb  2 04:43:54 np0005604790 podman[109200]: 2026-02-02 09:43:54.914971014 +0000 UTC m=+0.083655704 container exec_died 19feecaa7fcd517b2bfb973fc8fcf623ad4b0956f16c53292d24fe12f4780190 (image=quay.io/ceph/haproxy:2.3, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-rgw-default-compute-0-avekxu)
Feb  2 04:43:55 np0005604790 podman[109361]: 2026-02-02 09:43:55.094276621 +0000 UTC m=+0.048983924 container exec 690ade5beb0aa03a99cf3b4b1da5291c57988034f4a9506f786da5a6fb824998 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 04:43:55 np0005604790 podman[109361]: 2026-02-02 09:43:55.102916602 +0000 UTC m=+0.057623905 container exec_died 690ade5beb0aa03a99cf3b4b1da5291c57988034f4a9506f786da5a6fb824998 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 04:43:55 np0005604790 python3.9[109424]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:43:55 np0005604790 podman[109466]: 2026-02-02 09:43:55.390633966 +0000 UTC m=+0.051740708 container exec 4dbbc2880b363c29701e75389ef46bb1b6a317f598aba4db2d598b0c88013bb0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Feb  2 04:43:55 np0005604790 podman[109466]: 2026-02-02 09:43:55.402968967 +0000 UTC m=+0.064075689 container exec_died 4dbbc2880b363c29701e75389ef46bb1b6a317f598aba4db2d598b0c88013bb0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 04:43:55 np0005604790 podman[109625]: 2026-02-02 09:43:55.715543278 +0000 UTC m=+0.070282426 container exec 5f2bbf7994e4a92479e9d1b19dfd01c3876abee624a9d2019b778a31c38bb173 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-keepalived-nfs-cephfs-compute-0-pqolko, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.component=keepalived-container, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, distribution-scope=public, name=keepalived, io.openshift.tags=Ceph keepalived, architecture=x86_64, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, description=keepalived for Ceph, vcs-type=git)
Feb  2 04:43:55 np0005604790 podman[109625]: 2026-02-02 09:43:55.779337008 +0000 UTC m=+0.134076126 container exec_died 5f2bbf7994e4a92479e9d1b19dfd01c3876abee624a9d2019b778a31c38bb173 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-keepalived-nfs-cephfs-compute-0-pqolko, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, vcs-type=git, summary=Provides keepalived on RHEL 9 for Ceph., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, version=2.2.4, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9)
Feb  2 04:43:55 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:43:55 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c70004970 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:43:55 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v117: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 682 B/s wr, 2 op/s
Feb  2 04:43:55 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:43:55 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:43:55 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:43:55.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:43:56 np0005604790 podman[109765]: 2026-02-02 09:43:56.038088985 +0000 UTC m=+0.063543464 container exec 63a3970896ab11bd3cbece1b971a05452b0bd8a5b643f0eac52d5b3639ab19c4 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 04:43:56 np0005604790 podman[109765]: 2026-02-02 09:43:56.076027342 +0000 UTC m=+0.101481831 container exec_died 63a3970896ab11bd3cbece1b971a05452b0bd8a5b643f0eac52d5b3639ab19c4 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 04:43:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:43:56 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c54003a20 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:43:56 np0005604790 python3.9[109760]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Feb  2 04:43:56 np0005604790 podman[109861]: 2026-02-02 09:43:56.310610862 +0000 UTC m=+0.061634124 container exec 207575e5b32cec4e058e0ca4f48bad94c6e447fc9aa765a0aa8117f4604125b2 (image=quay.io/ceph/grafana:10.4.0, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Feb  2 04:43:56 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:43:56 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:43:56 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:43:56 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:43:56.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:43:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:43:56 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c640046f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:43:56 np0005604790 podman[109861]: 2026-02-02 09:43:56.50598638 +0000 UTC m=+0.257009592 container exec_died 207575e5b32cec4e058e0ca4f48bad94c6e447fc9aa765a0aa8117f4604125b2 (image=quay.io/ceph/grafana:10.4.0, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Feb  2 04:43:56 np0005604790 podman[109956]: 2026-02-02 09:43:56.784046285 +0000 UTC m=+0.052785606 container exec 214561532da1ee185d9ab0bf03f5b2d46320266c11cddad43265e8842ed9d667 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 04:43:56 np0005604790 podman[109956]: 2026-02-02 09:43:56.842941144 +0000 UTC m=+0.111680455 container exec_died 214561532da1ee185d9ab0bf03f5b2d46320266c11cddad43265e8842ed9d667 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 04:43:56 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 04:43:56 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:43:56 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 04:43:56 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:43:57 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:43:57 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:43:57 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 04:43:57 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 04:43:57 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 04:43:57 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb  2 04:43:57 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v118: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 697 B/s wr, 2 op/s
Feb  2 04:43:57 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 04:43:57 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:43:57 np0005604790 python3.9[110191]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 04:43:57 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Feb  2 04:43:57 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:43:57 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 04:43:57 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb  2 04:43:57 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 04:43:57 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb  2 04:43:57 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 04:43:57 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 04:43:57 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:43:57 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c50003e10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:43:57 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:43:57 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:43:57 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:43:57.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:43:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:43:58 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c70004990 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:43:58 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb  2 04:43:58 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:43:58 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:43:58 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb  2 04:43:58 np0005604790 podman[110453]: 2026-02-02 09:43:58.306378989 +0000 UTC m=+0.079446591 container create a2bb52f2e995fb7e2da37d250b0d04aad01b553ed81d4540cfe1a7a89b2f95c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_cerf, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:43:58 np0005604790 systemd[1]: Started libpod-conmon-a2bb52f2e995fb7e2da37d250b0d04aad01b553ed81d4540cfe1a7a89b2f95c9.scope.
Feb  2 04:43:58 np0005604790 podman[110453]: 2026-02-02 09:43:58.265955915 +0000 UTC m=+0.039023567 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:43:58 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:43:58 np0005604790 podman[110453]: 2026-02-02 09:43:58.402073875 +0000 UTC m=+0.175141467 container init a2bb52f2e995fb7e2da37d250b0d04aad01b553ed81d4540cfe1a7a89b2f95c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_cerf, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 04:43:58 np0005604790 podman[110453]: 2026-02-02 09:43:58.4093615 +0000 UTC m=+0.182429102 container start a2bb52f2e995fb7e2da37d250b0d04aad01b553ed81d4540cfe1a7a89b2f95c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_cerf, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:43:58 np0005604790 python3.9[110452]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:43:58 np0005604790 elegant_cerf[110469]: 167 167
Feb  2 04:43:58 np0005604790 systemd[1]: libpod-a2bb52f2e995fb7e2da37d250b0d04aad01b553ed81d4540cfe1a7a89b2f95c9.scope: Deactivated successfully.
Feb  2 04:43:58 np0005604790 podman[110453]: 2026-02-02 09:43:58.416905512 +0000 UTC m=+0.189973104 container attach a2bb52f2e995fb7e2da37d250b0d04aad01b553ed81d4540cfe1a7a89b2f95c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_cerf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:43:58 np0005604790 podman[110453]: 2026-02-02 09:43:58.417350234 +0000 UTC m=+0.190417806 container died a2bb52f2e995fb7e2da37d250b0d04aad01b553ed81d4540cfe1a7a89b2f95c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 04:43:58 np0005604790 systemd[1]: var-lib-containers-storage-overlay-73ff46477aee23eed888aa06aa9b9f248a946d1cdd18021704e93c671a88769e-merged.mount: Deactivated successfully.
Feb  2 04:43:58 np0005604790 podman[110453]: 2026-02-02 09:43:58.456788241 +0000 UTC m=+0.229855853 container remove a2bb52f2e995fb7e2da37d250b0d04aad01b553ed81d4540cfe1a7a89b2f95c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_cerf, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb  2 04:43:58 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:43:58 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:43:58 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:43:58.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:43:58 np0005604790 systemd[1]: libpod-conmon-a2bb52f2e995fb7e2da37d250b0d04aad01b553ed81d4540cfe1a7a89b2f95c9.scope: Deactivated successfully.
Feb  2 04:43:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:43:58 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c54003a20 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:43:58 np0005604790 podman[110518]: 2026-02-02 09:43:58.610105952 +0000 UTC m=+0.064239493 container create b6c0dda253c395706d93a3aa4854d73618fc4c5924c72d210c685ec7d40bc598 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_saha, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb  2 04:43:58 np0005604790 systemd[1]: Started libpod-conmon-b6c0dda253c395706d93a3aa4854d73618fc4c5924c72d210c685ec7d40bc598.scope.
Feb  2 04:43:58 np0005604790 podman[110518]: 2026-02-02 09:43:58.580708504 +0000 UTC m=+0.034842125 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:43:58 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:43:58 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad4f083082af11529046149c88ce3ac6b5f7592d9b70a4a988269b1cf05edfb1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 04:43:58 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad4f083082af11529046149c88ce3ac6b5f7592d9b70a4a988269b1cf05edfb1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:43:58 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad4f083082af11529046149c88ce3ac6b5f7592d9b70a4a988269b1cf05edfb1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:43:58 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad4f083082af11529046149c88ce3ac6b5f7592d9b70a4a988269b1cf05edfb1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 04:43:58 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad4f083082af11529046149c88ce3ac6b5f7592d9b70a4a988269b1cf05edfb1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 04:43:58 np0005604790 podman[110518]: 2026-02-02 09:43:58.722189887 +0000 UTC m=+0.176323538 container init b6c0dda253c395706d93a3aa4854d73618fc4c5924c72d210c685ec7d40bc598 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_saha, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 04:43:58 np0005604790 podman[110518]: 2026-02-02 09:43:58.736007438 +0000 UTC m=+0.190141019 container start b6c0dda253c395706d93a3aa4854d73618fc4c5924c72d210c685ec7d40bc598 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_saha, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Feb  2 04:43:58 np0005604790 podman[110518]: 2026-02-02 09:43:58.74058748 +0000 UTC m=+0.194721041 container attach b6c0dda253c395706d93a3aa4854d73618fc4c5924c72d210c685ec7d40bc598 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_saha, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 04:43:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:43:58.851Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb  2 04:43:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:43:58.851Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb  2 04:43:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:43:58.852Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb  2 04:43:58 np0005604790 python3.9[110589]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:43:59 np0005604790 amazing_saha[110579]: --> passed data devices: 0 physical, 1 LVM
Feb  2 04:43:59 np0005604790 amazing_saha[110579]: --> All data devices are unavailable
Feb  2 04:43:59 np0005604790 systemd[1]: libpod-b6c0dda253c395706d93a3aa4854d73618fc4c5924c72d210c685ec7d40bc598.scope: Deactivated successfully.
Feb  2 04:43:59 np0005604790 conmon[110579]: conmon b6c0dda253c395706d93 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b6c0dda253c395706d93a3aa4854d73618fc4c5924c72d210c685ec7d40bc598.scope/container/memory.events
Feb  2 04:43:59 np0005604790 podman[110518]: 2026-02-02 09:43:59.058015721 +0000 UTC m=+0.512149252 container died b6c0dda253c395706d93a3aa4854d73618fc4c5924c72d210c685ec7d40bc598 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_saha, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb  2 04:43:59 np0005604790 systemd[1]: var-lib-containers-storage-overlay-ad4f083082af11529046149c88ce3ac6b5f7592d9b70a4a988269b1cf05edfb1-merged.mount: Deactivated successfully.
Feb  2 04:43:59 np0005604790 podman[110518]: 2026-02-02 09:43:59.102233416 +0000 UTC m=+0.556366947 container remove b6c0dda253c395706d93a3aa4854d73618fc4c5924c72d210c685ec7d40bc598 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_saha, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb  2 04:43:59 np0005604790 systemd[1]: libpod-conmon-b6c0dda253c395706d93a3aa4854d73618fc4c5924c72d210c685ec7d40bc598.scope: Deactivated successfully.
Feb  2 04:43:59 np0005604790 podman[110731]: 2026-02-02 09:43:59.653301741 +0000 UTC m=+0.040625530 container create 1ee4853d10aef7ea5826f341f763e377930e21dd02bad40081869982be633db6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_shannon, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 04:43:59 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v119: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 697 B/s wr, 2 op/s
Feb  2 04:43:59 np0005604790 systemd[1]: Started libpod-conmon-1ee4853d10aef7ea5826f341f763e377930e21dd02bad40081869982be633db6.scope.
Feb  2 04:43:59 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:43:59 np0005604790 podman[110731]: 2026-02-02 09:43:59.727455099 +0000 UTC m=+0.114778908 container init 1ee4853d10aef7ea5826f341f763e377930e21dd02bad40081869982be633db6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_shannon, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 04:43:59 np0005604790 podman[110731]: 2026-02-02 09:43:59.636019278 +0000 UTC m=+0.023343037 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:43:59 np0005604790 podman[110731]: 2026-02-02 09:43:59.734899049 +0000 UTC m=+0.122222828 container start 1ee4853d10aef7ea5826f341f763e377930e21dd02bad40081869982be633db6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Feb  2 04:43:59 np0005604790 great_shannon[110748]: 167 167
Feb  2 04:43:59 np0005604790 podman[110731]: 2026-02-02 09:43:59.740851728 +0000 UTC m=+0.128175537 container attach 1ee4853d10aef7ea5826f341f763e377930e21dd02bad40081869982be633db6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_shannon, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Feb  2 04:43:59 np0005604790 systemd[1]: libpod-1ee4853d10aef7ea5826f341f763e377930e21dd02bad40081869982be633db6.scope: Deactivated successfully.
Feb  2 04:43:59 np0005604790 podman[110731]: 2026-02-02 09:43:59.741691431 +0000 UTC m=+0.129015200 container died 1ee4853d10aef7ea5826f341f763e377930e21dd02bad40081869982be633db6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb  2 04:43:59 np0005604790 systemd[1]: var-lib-containers-storage-overlay-1b73f80916691e81a9dc6224cef6897f8a8d88afefd9a23d59f12b96f02ebbfe-merged.mount: Deactivated successfully.
Feb  2 04:43:59 np0005604790 podman[110731]: 2026-02-02 09:43:59.779411032 +0000 UTC m=+0.166734801 container remove 1ee4853d10aef7ea5826f341f763e377930e21dd02bad40081869982be633db6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_shannon, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1)
Feb  2 04:43:59 np0005604790 systemd[1]: libpod-conmon-1ee4853d10aef7ea5826f341f763e377930e21dd02bad40081869982be633db6.scope: Deactivated successfully.
Feb  2 04:43:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:43:59 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c640046f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:43:59 np0005604790 podman[110796]: 2026-02-02 09:43:59.939015461 +0000 UTC m=+0.055089418 container create 0318b3d27270ffc6627837ccd2c8b8cf9667e068d629ef6ce4bdbfe22f3400ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_hertz, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb  2 04:43:59 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:43:59 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:43:59 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:43:59.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 04:43:59 np0005604790 systemd[1]: Started libpod-conmon-0318b3d27270ffc6627837ccd2c8b8cf9667e068d629ef6ce4bdbfe22f3400ad.scope.
Feb  2 04:44:00 np0005604790 podman[110796]: 2026-02-02 09:43:59.908304058 +0000 UTC m=+0.024378025 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:44:00 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:44:00 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a61b492a7633db093999408b772b8c05473e91570fd760b5726cb095314413d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 04:44:00 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a61b492a7633db093999408b772b8c05473e91570fd760b5726cb095314413d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:44:00 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a61b492a7633db093999408b772b8c05473e91570fd760b5726cb095314413d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:44:00 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a61b492a7633db093999408b772b8c05473e91570fd760b5726cb095314413d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 04:44:00 np0005604790 podman[110796]: 2026-02-02 09:44:00.035676333 +0000 UTC m=+0.151750290 container init 0318b3d27270ffc6627837ccd2c8b8cf9667e068d629ef6ce4bdbfe22f3400ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_hertz, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:44:00 np0005604790 podman[110796]: 2026-02-02 09:44:00.043606595 +0000 UTC m=+0.159680512 container start 0318b3d27270ffc6627837ccd2c8b8cf9667e068d629ef6ce4bdbfe22f3400ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_hertz, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:44:00 np0005604790 podman[110796]: 2026-02-02 09:44:00.046945155 +0000 UTC m=+0.163019112 container attach 0318b3d27270ffc6627837ccd2c8b8cf9667e068d629ef6ce4bdbfe22f3400ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_hertz, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 04:44:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:44:00 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c50003e10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:44:00 np0005604790 strange_hertz[110842]: {
Feb  2 04:44:00 np0005604790 strange_hertz[110842]:    "1": [
Feb  2 04:44:00 np0005604790 strange_hertz[110842]:        {
Feb  2 04:44:00 np0005604790 strange_hertz[110842]:            "devices": [
Feb  2 04:44:00 np0005604790 strange_hertz[110842]:                "/dev/loop3"
Feb  2 04:44:00 np0005604790 strange_hertz[110842]:            ],
Feb  2 04:44:00 np0005604790 strange_hertz[110842]:            "lv_name": "ceph_lv0",
Feb  2 04:44:00 np0005604790 strange_hertz[110842]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 04:44:00 np0005604790 strange_hertz[110842]:            "lv_size": "21470642176",
Feb  2 04:44:00 np0005604790 strange_hertz[110842]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d241d473-9fcb-5f74-b163-f1ca4454e7f1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fabfc705-a3af-416c-81a4-3fd4d777fb5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 04:44:00 np0005604790 strange_hertz[110842]:            "lv_uuid": "lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a",
Feb  2 04:44:00 np0005604790 strange_hertz[110842]:            "name": "ceph_lv0",
Feb  2 04:44:00 np0005604790 strange_hertz[110842]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 04:44:00 np0005604790 strange_hertz[110842]:            "tags": {
Feb  2 04:44:00 np0005604790 strange_hertz[110842]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 04:44:00 np0005604790 strange_hertz[110842]:                "ceph.block_uuid": "lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a",
Feb  2 04:44:00 np0005604790 strange_hertz[110842]:                "ceph.cephx_lockbox_secret": "",
Feb  2 04:44:00 np0005604790 strange_hertz[110842]:                "ceph.cluster_fsid": "d241d473-9fcb-5f74-b163-f1ca4454e7f1",
Feb  2 04:44:00 np0005604790 strange_hertz[110842]:                "ceph.cluster_name": "ceph",
Feb  2 04:44:00 np0005604790 strange_hertz[110842]:                "ceph.crush_device_class": "",
Feb  2 04:44:00 np0005604790 strange_hertz[110842]:                "ceph.encrypted": "0",
Feb  2 04:44:00 np0005604790 strange_hertz[110842]:                "ceph.osd_fsid": "fabfc705-a3af-416c-81a4-3fd4d777fb5f",
Feb  2 04:44:00 np0005604790 strange_hertz[110842]:                "ceph.osd_id": "1",
Feb  2 04:44:00 np0005604790 strange_hertz[110842]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 04:44:00 np0005604790 strange_hertz[110842]:                "ceph.type": "block",
Feb  2 04:44:00 np0005604790 strange_hertz[110842]:                "ceph.vdo": "0",
Feb  2 04:44:00 np0005604790 strange_hertz[110842]:                "ceph.with_tpm": "0"
Feb  2 04:44:00 np0005604790 strange_hertz[110842]:            },
Feb  2 04:44:00 np0005604790 strange_hertz[110842]:            "type": "block",
Feb  2 04:44:00 np0005604790 strange_hertz[110842]:            "vg_name": "ceph_vg0"
Feb  2 04:44:00 np0005604790 strange_hertz[110842]:        }
Feb  2 04:44:00 np0005604790 strange_hertz[110842]:    ]
Feb  2 04:44:00 np0005604790 strange_hertz[110842]: }
Feb  2 04:44:00 np0005604790 systemd[1]: libpod-0318b3d27270ffc6627837ccd2c8b8cf9667e068d629ef6ce4bdbfe22f3400ad.scope: Deactivated successfully.
Feb  2 04:44:00 np0005604790 podman[110796]: 2026-02-02 09:44:00.397944365 +0000 UTC m=+0.514018292 container died 0318b3d27270ffc6627837ccd2c8b8cf9667e068d629ef6ce4bdbfe22f3400ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_hertz, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb  2 04:44:00 np0005604790 python3.9[110922]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 04:44:00 np0005604790 systemd[1]: var-lib-containers-storage-overlay-4a61b492a7633db093999408b772b8c05473e91570fd760b5726cb095314413d-merged.mount: Deactivated successfully.
Feb  2 04:44:00 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:44:00 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:44:00 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:44:00.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
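Each radosgw triple above is one anonymous "HEAD / HTTP/1.0" probe, alternating between 192.168.122.100 and 192.168.122.102 every couple of seconds -- the cadence of load-balancer health checks rather than user traffic. The beast access line is regular enough to parse mechanically; a sketch against the entry above:

    import re

    BEAST = re.compile(
        r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<req>[^"]+)" (?P<status>\d+) '
        r'(?P<bytes>\d+) .* latency=(?P<latency>[\d.]+)s')

    line = ('beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous '
            '[02/Feb/2026:09:44:00.467 +0000] "HEAD / HTTP/1.0" 200 0 '
            '- - - latency=0.001000026s')
    m = BEAST.search(line)
    print(m.group("client"), m.group("req"), m.group("status"),
          float(m.group("latency")))  # 192.168.122.100 HEAD / HTTP/1.0 200 0.001000026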
Feb  2 04:44:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:44:00 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c700049b0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:44:00 np0005604790 podman[110796]: 2026-02-02 09:44:00.488240656 +0000 UTC m=+0.604314613 container remove 0318b3d27270ffc6627837ccd2c8b8cf9667e068d629ef6ce4bdbfe22f3400ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_hertz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:44:00 np0005604790 systemd[1]: libpod-conmon-0318b3d27270ffc6627837ccd2c8b8cf9667e068d629ef6ce4bdbfe22f3400ad.scope: Deactivated successfully.
Feb  2 04:44:01 np0005604790 podman[111079]: 2026-02-02 09:44:01.085235501 +0000 UTC m=+0.047098303 container create 5903ea27a338ce382ac4f9bb775d1b072cd19c0707186e04e9a5986f1a742c39 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_chebyshev, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb  2 04:44:01 np0005604790 systemd[1]: Started libpod-conmon-5903ea27a338ce382ac4f9bb775d1b072cd19c0707186e04e9a5986f1a742c39.scope.
Feb  2 04:44:01 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:44:01 np0005604790 podman[111079]: 2026-02-02 09:44:01.06541023 +0000 UTC m=+0.027273072 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:44:01 np0005604790 podman[111079]: 2026-02-02 09:44:01.16463744 +0000 UTC m=+0.126500332 container init 5903ea27a338ce382ac4f9bb775d1b072cd19c0707186e04e9a5986f1a742c39 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Feb  2 04:44:01 np0005604790 podman[111079]: 2026-02-02 09:44:01.173391145 +0000 UTC m=+0.135253987 container start 5903ea27a338ce382ac4f9bb775d1b072cd19c0707186e04e9a5986f1a742c39 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_chebyshev, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 04:44:01 np0005604790 xenodochial_chebyshev[111122]: 167 167
Feb  2 04:44:01 np0005604790 systemd[1]: libpod-5903ea27a338ce382ac4f9bb775d1b072cd19c0707186e04e9a5986f1a742c39.scope: Deactivated successfully.
Feb  2 04:44:01 np0005604790 podman[111079]: 2026-02-02 09:44:01.178156063 +0000 UTC m=+0.140018945 container attach 5903ea27a338ce382ac4f9bb775d1b072cd19c0707186e04e9a5986f1a742c39 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_chebyshev, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Feb  2 04:44:01 np0005604790 podman[111079]: 2026-02-02 09:44:01.179948221 +0000 UTC m=+0.141811053 container died 5903ea27a338ce382ac4f9bb775d1b072cd19c0707186e04e9a5986f1a742c39 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_chebyshev, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:44:01 np0005604790 systemd[1]: var-lib-containers-storage-overlay-4703f52c9ff54f7a400385680c78a61277fffde5761c8b1d5ad5b4880069721f-merged.mount: Deactivated successfully.
Feb  2 04:44:01 np0005604790 podman[111079]: 2026-02-02 09:44:01.225113052 +0000 UTC m=+0.186975894 container remove 5903ea27a338ce382ac4f9bb775d1b072cd19c0707186e04e9a5986f1a742c39 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb  2 04:44:01 np0005604790 systemd[1]: libpod-conmon-5903ea27a338ce382ac4f9bb775d1b072cd19c0707186e04e9a5986f1a742c39.scope: Deactivated successfully.
Feb  2 04:44:01 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:44:01 np0005604790 podman[111161]: 2026-02-02 09:44:01.412035363 +0000 UTC m=+0.068881878 container create d0dbfbe6b86df9faa0d1718b33131de761d45f0be384d40dbe3094b48477854a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_bhabha, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 04:44:01 np0005604790 systemd[1]: Started libpod-conmon-d0dbfbe6b86df9faa0d1718b33131de761d45f0be384d40dbe3094b48477854a.scope.
Feb  2 04:44:01 np0005604790 podman[111161]: 2026-02-02 09:44:01.383652252 +0000 UTC m=+0.040498827 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:44:01 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:44:01 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45095fedad61150d9b2494a0505b53977a8cfbc585b2b49b2f1712e79335c792/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 04:44:01 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45095fedad61150d9b2494a0505b53977a8cfbc585b2b49b2f1712e79335c792/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:44:01 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45095fedad61150d9b2494a0505b53977a8cfbc585b2b49b2f1712e79335c792/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:44:01 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45095fedad61150d9b2494a0505b53977a8cfbc585b2b49b2f1712e79335c792/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 04:44:01 np0005604790 podman[111161]: 2026-02-02 09:44:01.526659936 +0000 UTC m=+0.183506541 container init d0dbfbe6b86df9faa0d1718b33131de761d45f0be384d40dbe3094b48477854a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_bhabha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:44:01 np0005604790 podman[111161]: 2026-02-02 09:44:01.54246762 +0000 UTC m=+0.199314145 container start d0dbfbe6b86df9faa0d1718b33131de761d45f0be384d40dbe3094b48477854a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_bhabha, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Feb  2 04:44:01 np0005604790 podman[111161]: 2026-02-02 09:44:01.54655906 +0000 UTC m=+0.203405605 container attach d0dbfbe6b86df9faa0d1718b33131de761d45f0be384d40dbe3094b48477854a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_bhabha, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:44:01 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v120: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 261 B/s rd, 87 B/s wr, 0 op/s
Feb  2 04:44:01 np0005604790 python3.9[111246]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Feb  2 04:44:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:44:01 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c74001080 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:44:01 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:44:01 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:44:01 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:44:01.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:44:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 04:44:02 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 04:44:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:44:02 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c640046f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:44:02 np0005604790 lvm[111418]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 04:44:02 np0005604790 lvm[111418]: VG ceph_vg0 finished
Feb  2 04:44:02 np0005604790 focused_bhabha[111212]: {}
Feb  2 04:44:02 np0005604790 systemd[1]: libpod-d0dbfbe6b86df9faa0d1718b33131de761d45f0be384d40dbe3094b48477854a.scope: Deactivated successfully.
Feb  2 04:44:02 np0005604790 podman[111161]: 2026-02-02 09:44:02.357961994 +0000 UTC m=+1.014808479 container died d0dbfbe6b86df9faa0d1718b33131de761d45f0be384d40dbe3094b48477854a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_bhabha, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2)
Feb  2 04:44:02 np0005604790 systemd[1]: libpod-d0dbfbe6b86df9faa0d1718b33131de761d45f0be384d40dbe3094b48477854a.scope: Consumed 1.128s CPU time.
Feb  2 04:44:02 np0005604790 systemd[1]: var-lib-containers-storage-overlay-45095fedad61150d9b2494a0505b53977a8cfbc585b2b49b2f1712e79335c792-merged.mount: Deactivated successfully.
Feb  2 04:44:02 np0005604790 podman[111161]: 2026-02-02 09:44:02.424168299 +0000 UTC m=+1.081014794 container remove d0dbfbe6b86df9faa0d1718b33131de761d45f0be384d40dbe3094b48477854a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_bhabha, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb  2 04:44:02 np0005604790 systemd[1]: libpod-conmon-d0dbfbe6b86df9faa0d1718b33131de761d45f0be384d40dbe3094b48477854a.scope: Deactivated successfully.
Feb  2 04:44:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 04:44:02 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:44:02 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:44:02 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:44:02.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:44:02 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:44:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:44:02 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c50003e10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:44:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 04:44:02 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:44:02 np0005604790 python3.9[111485]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Feb  2 04:44:02 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:44:02 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:44:03 np0005604790 python3.9[111663]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Feb  2 04:44:03 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v121: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 522 B/s rd, 87 B/s wr, 0 op/s
Feb  2 04:44:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:44:03 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c58002690 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:44:03 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:44:03 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:44:03 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:44:03.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:44:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:44:04 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c74001080 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:44:04 np0005604790 python3.9[111817]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Feb  2 04:44:04 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:44:04 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:44:04 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:44:04.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:44:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:44:04 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c640046f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:44:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:09:44:04] "GET /metrics HTTP/1.1" 200 48318 "" "Prometheus/2.51.0"
Feb  2 04:44:04 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:09:44:04] "GET /metrics HTTP/1.1" 200 48318 "" "Prometheus/2.51.0"
Feb  2 04:44:05 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v122: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 261 B/s rd, 0 op/s
Feb  2 04:44:05 np0005604790 python3.9[111969]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  2 04:44:05 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:44:05 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c50003e10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:44:05 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:44:05 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:44:05 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:44:05.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:44:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:44:06 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c58002690 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:44:06 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:44:06 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:44:06 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:44:06 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:44:06.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:44:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:44:06 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c74002870 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:44:06 np0005604790 ceph-mgr[74785]: [dashboard INFO request] [192.168.122.100:54840] [POST] [200] [0.002s] [4.0B] [eee5a6b2-aefa-4fd0-9f7e-1f1a87fd2875] /api/prometheus_receiver
Feb  2 04:44:07 np0005604790 python3.9[112124]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 04:44:07 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v123: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 261 B/s rd, 0 op/s
Feb  2 04:44:07 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:44:07 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c640046f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:44:07 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:44:07 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:44:07 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:44:07.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:44:08 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:44:08 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c50003e10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:44:08 np0005604790 python3.9[112278]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:44:08 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:44:08 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:44:08 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:44:08.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 04:44:08 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:44:08 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c58002690 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:44:08 np0005604790 python3.9[112356]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 04:44:08 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:44:08.852Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb  2 04:44:08 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:44:08.854Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb  2 04:44:08 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:44:08.854Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
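The alertmanager dispatcher above cannot deliver to the dashboard receivers on compute-1/compute-2 (dial timeouts, then the retry budget is exhausted), while delivery to this node's own /api/prometheus_receiver at 04:44:06 succeeded. A quick stdlib reachability probe for the two failing endpoints, hosts and port copied from the log:

    import socket

    # Receivers that alertmanager reports as unreachable above.
    for host in ("compute-1.ctlplane.example.com",
                 "compute-2.ctlplane.example.com"):
        try:
            with socket.create_connection((host, 8443), timeout=3):
                print(host, "reachable")
        except OSError as exc:
            print(host, "unreachable:", exc)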
Feb  2 04:44:09 np0005604790 python3.9[112508]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:44:09 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v124: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:44:09 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:44:09 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c74002870 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:44:09 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:44:09 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:44:09 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:44:09.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:44:10 np0005604790 python3.9[112588]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 04:44:10 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:44:10 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c640046f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:44:10 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:44:10 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:44:10 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:44:10.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:44:10 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:44:10 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c50003e10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:44:11 np0005604790 python3.9[112740]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  2 04:44:11 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:44:11 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v125: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:44:11 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:44:11 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c58002690 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:44:11 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:44:11 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:44:11 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:44:11.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:44:12 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:44:12 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c74003580 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:44:12 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:44:12 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:44:12 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:44:12.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:44:12 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:44:12 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c640046f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:44:13 np0005604790 ceph-mon[74489]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Feb  2 04:44:13 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:44:13.015208) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb  2 04:44:13 np0005604790 ceph-mon[74489]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Feb  2 04:44:13 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770025453015288, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 2754, "num_deletes": 251, "total_data_size": 5942902, "memory_usage": 6156224, "flush_reason": "Manual Compaction"}
Feb  2 04:44:13 np0005604790 ceph-mon[74489]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Feb  2 04:44:13 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770025453107592, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 5479434, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8085, "largest_seqno": 10838, "table_properties": {"data_size": 5466280, "index_size": 8496, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3589, "raw_key_size": 30984, "raw_average_key_size": 21, "raw_value_size": 5438333, "raw_average_value_size": 3854, "num_data_blocks": 372, "num_entries": 1411, "num_filter_entries": 1411, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770025339, "oldest_key_time": 1770025339, "file_creation_time": 1770025453, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "07840aea-639a-4cd3-a598-1774a042b57b", "db_session_id": "W2PO4QU95YGVZQBG6TZ2", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Feb  2 04:44:13 np0005604790 ceph-mon[74489]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 92462 microseconds, and 12278 cpu microseconds.
Feb  2 04:44:13 np0005604790 ceph-mon[74489]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 04:44:13 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:44:13.107679) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 5479434 bytes OK
Feb  2 04:44:13 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:44:13.107708) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Feb  2 04:44:13 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:44:13.135344) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Feb  2 04:44:13 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:44:13.135421) EVENT_LOG_v1 {"time_micros": 1770025453135405, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb  2 04:44:13 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:44:13.135459) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb  2 04:44:13 np0005604790 ceph-mon[74489]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 5930417, prev total WAL file size 5930417, number of live WAL files 2.
Feb  2 04:44:13 np0005604790 ceph-mon[74489]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 04:44:13 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:44:13.137586) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Feb  2 04:44:13 np0005604790 ceph-mon[74489]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb  2 04:44:13 np0005604790 ceph-mon[74489]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(5351KB)], [23(11MB)]
Feb  2 04:44:13 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770025453137646, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 17907750, "oldest_snapshot_seqno": -1}
Feb  2 04:44:13 np0005604790 ceph-mon[74489]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 4074 keys, 14269838 bytes, temperature: kUnknown
Feb  2 04:44:13 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770025453220095, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 14269838, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14237189, "index_size": 21339, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10245, "raw_key_size": 104022, "raw_average_key_size": 25, "raw_value_size": 14157154, "raw_average_value_size": 3475, "num_data_blocks": 916, "num_entries": 4074, "num_filter_entries": 4074, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770025063, "oldest_key_time": 0, "file_creation_time": 1770025453, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "07840aea-639a-4cd3-a598-1774a042b57b", "db_session_id": "W2PO4QU95YGVZQBG6TZ2", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Feb  2 04:44:13 np0005604790 ceph-mon[74489]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 04:44:13 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:44:13.220415) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 14269838 bytes
Feb  2 04:44:13 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:44:13.221972) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 217.0 rd, 172.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(5.2, 11.9 +0.0 blob) out(13.6 +0.0 blob), read-write-amplify(5.9) write-amplify(2.6) OK, records in: 4605, records dropped: 531 output_compression: NoCompression
Feb  2 04:44:13 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:44:13.222001) EVENT_LOG_v1 {"time_micros": 1770025453221986, "job": 8, "event": "compaction_finished", "compaction_time_micros": 82538, "compaction_time_cpu_micros": 39608, "output_level": 6, "num_output_files": 1, "total_output_size": 14269838, "num_input_records": 4605, "num_output_records": 4074, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb  2 04:44:13 np0005604790 ceph-mon[74489]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 04:44:13 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770025453222855, "job": 8, "event": "table_file_deletion", "file_number": 25}
Feb  2 04:44:13 np0005604790 ceph-mon[74489]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 04:44:13 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770025453224432, "job": 8, "event": "table_file_deletion", "file_number": 23}
Feb  2 04:44:13 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:44:13.136854) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 04:44:13 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:44:13.224580) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 04:44:13 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:44:13.224589) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 04:44:13 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:44:13.224592) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 04:44:13 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:44:13.224596) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 04:44:13 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:44:13.224599) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
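The JOB 8 summary's amplification and throughput figures follow directly from the byte counts in its EVENT_LOG_v1 entries: write amplification is output bytes over level-0 input bytes, read-write amplification adds all input bytes on top, and the rd/wr rates are bytes per microsecond of compaction time (so decimal MB/s, while the in/out sizes are printed in MiB). A check:

    # Byte counts copied from the JOB 8 lines above.
    in_l0 = 5_479_434       # table #25, the level-0 input
    in_total = 17_907_750   # input_data_size (L0 + L6 inputs)
    out = 14_269_838        # table #26, the level-6 output
    t_us = 82_538           # compaction_time_micros

    print(f"write-amplify      {out / in_l0:.1f}")              # 2.6
    print(f"read-write-amplify {(in_total + out) / in_l0:.1f}") # 5.9
    print(f"{in_total / t_us:.1f} rd, {out / t_us:.1f} wr MB/s") # 217.0 rd, 172.9 wr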
Feb  2 04:44:13 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v126: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 04:44:13 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:44:13 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c50003e10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:44:13 np0005604790 python3.9[112919]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 04:44:13 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:44:13 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:44:13 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:44:13.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 04:44:14 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:44:14 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c58002690 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:44:14 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:44:14 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:44:14 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:44:14.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:44:14 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:44:14 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c74003580 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:44:14 np0005604790 python3.9[113072]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Feb  2 04:44:14 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:09:44:14] "GET /metrics HTTP/1.1" 200 48318 "" "Prometheus/2.51.0"
Feb  2 04:44:14 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:09:44:14] "GET /metrics HTTP/1.1" 200 48318 "" "Prometheus/2.51.0"
Feb  2 04:44:15 np0005604790 python3.9[113222]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 04:44:15 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v127: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:44:15 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:44:15 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c640046f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:44:15 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:44:15 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:44:15 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:44:15.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:44:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:44:16 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c50003e10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:44:16 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:44:16 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:44:16 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:44:16 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:44:16.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:44:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:44:16 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c58002690 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:44:16 np0005604790 python3.9[113376]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 04:44:16 np0005604790 systemd[1]: Stopping Dynamic System Tuning Daemon...
Feb  2 04:44:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:44:16.953Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:44:16 np0005604790 systemd[1]: tuned.service: Deactivated successfully.
Feb  2 04:44:16 np0005604790 systemd[1]: Stopped Dynamic System Tuning Daemon.
Feb  2 04:44:17 np0005604790 systemd[1]: Starting Dynamic System Tuning Daemon...
Feb  2 04:44:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] Optimize plan auto_2026-02-02_09:44:17
Feb  2 04:44:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 04:44:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] do_upmap
Feb  2 04:44:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'vms', '.mgr', 'images', '.nfs', 'default.rgw.control', 'backups', 'default.rgw.meta', 'volumes', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.log']
Feb  2 04:44:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 04:44:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 04:44:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 04:44:17 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 04:44:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:44:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 04:44:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:44:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 04:44:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:44:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 04:44:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:44:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 04:44:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:44:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 04:44:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:44:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Feb  2 04:44:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:44:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 04:44:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:44:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 04:44:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:44:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Feb  2 04:44:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:44:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 04:44:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:44:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 04:44:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:44:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb  2 04:44:17 np0005604790 systemd[1]: Started Dynamic System Tuning Daemon.
Feb  2 04:44:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:44:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:44:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:44:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:44:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 04:44:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 04:44:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 04:44:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 04:44:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 04:44:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:44:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:44:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 04:44:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 04:44:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 04:44:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 04:44:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 04:44:17 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v128: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:44:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:44:17 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c640046f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:44:17 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:44:17 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:44:17 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:44:17.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:44:18 np0005604790 python3.9[113539]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Feb  2 04:44:18 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:44:18 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c640046f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:44:18 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:44:18 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:44:18 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:44:18.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:44:18 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:44:18 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c640046f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:44:18 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:44:18.855Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:44:19 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v129: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:44:19 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:44:19 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c640046f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:44:19 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:44:19 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:44:19 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:44:19.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:44:20 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:44:20 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c50003e10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:44:20 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:44:20 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:44:20 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:44:20.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:44:20 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:44:20 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c74004570 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:44:21 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:44:21 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-nfs-cephfs-compute-0-ooxkuo[97796]: [WARNING] 032/094421 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Feb  2 04:44:21 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v130: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:44:21 np0005604790 python3.9[113693]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 04:44:21 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:44:21 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c58002690 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:44:21 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:44:21 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:44:21 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:44:21.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:44:22 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:44:22 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c640046f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:44:22 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:44:22 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:44:22 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:44:22.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:44:22 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:44:22 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c50003e10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:44:22 np0005604790 python3.9[113849]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 04:44:23 np0005604790 systemd[1]: session-39.scope: Deactivated successfully.
Feb  2 04:44:23 np0005604790 systemd[1]: session-39.scope: Consumed 1min 4.158s CPU time.
Feb  2 04:44:23 np0005604790 systemd-logind[793]: Session 39 logged out. Waiting for processes to exit.
Feb  2 04:44:23 np0005604790 systemd-logind[793]: Removed session 39.
Feb  2 04:44:23 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v131: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Feb  2 04:44:23 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:44:23 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c640046f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:44:23 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:44:23 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:44:23 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:44:23.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:44:24 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:44:24 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c74004570 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:44:24 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[104673]: logger=infra.usagestats t=2026-02-02T09:44:24.491864396Z level=info msg="Usage stats are ready to report"
Feb  2 04:44:24 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:44:24 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:44:24 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:44:24.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:44:24 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:44:24 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c58002690 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:44:24 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:09:44:24] "GET /metrics HTTP/1.1" 200 48321 "" "Prometheus/2.51.0"
Feb  2 04:44:24 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:09:44:24] "GET /metrics HTTP/1.1" 200 48321 "" "Prometheus/2.51.0"
Feb  2 04:44:25 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v132: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb  2 04:44:25 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:44:25 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c50003e10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:44:26 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:44:26 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:44:26 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:44:25.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:44:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:44:26 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c640046f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:44:26 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:44:26 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:44:26 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:44:26 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:44:26.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:44:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:44:26 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c74004e90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:44:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:44:26.955Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:44:27 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v133: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb  2 04:44:27 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:44:27 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c58002690 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:44:28 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:44:28 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:44:28 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:44:28.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 04:44:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:44:28 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c50003e10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:44:28 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:44:28 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:44:28 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:44:28.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:44:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:44:28 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c640046f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:44:28 np0005604790 systemd-logind[793]: New session 40 of user zuul.
Feb  2 04:44:28 np0005604790 systemd[1]: Started Session 40 of User zuul.
Feb  2 04:44:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:44:28.855Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:44:29 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v134: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 596 B/s rd, 85 B/s wr, 0 op/s
Feb  2 04:44:29 np0005604790 python3.9[114036]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 04:44:29 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:44:29 : epoch 6980713d : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 04:44:29 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:44:29 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c74004e90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:44:30 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:44:30 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:44:30 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:44:30.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:44:30 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:44:30 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c58002690 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:44:30 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:44:30 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:44:30 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:44:30.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 04:44:30 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:44:30 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c50003e10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:44:31 np0005604790 python3.9[114194]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Feb  2 04:44:31 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:44:31 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v135: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 596 B/s rd, 85 B/s wr, 0 op/s
Feb  2 04:44:31 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:44:31 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c640046f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:44:32 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:44:32 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:44:32 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:44:32.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:44:32 np0005604790 python3.9[114373]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb  2 04:44:32 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 04:44:32 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 04:44:32 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:44:32 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c74004e90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:44:32 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:44:32 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:44:32 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:44:32.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:44:32 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:44:32 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c58002690 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:44:32 np0005604790 python3.9[114459]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Feb  2 04:44:32 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:44:32 : epoch 6980713d : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 04:44:32 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:44:32 : epoch 6980713d : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 04:44:33 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v136: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Feb  2 04:44:33 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:44:33 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c70001ff0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:44:34 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:44:34 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:44:34 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:44:34.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:44:34 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:44:34 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c640046f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:44:34 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:44:34 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:44:34 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:44:34.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 04:44:34 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:44:34 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c54001230 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:44:34 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:09:44:34] "GET /metrics HTTP/1.1" 200 48332 "" "Prometheus/2.51.0"
Feb  2 04:44:34 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:09:44:34] "GET /metrics HTTP/1.1" 200 48332 "" "Prometheus/2.51.0"
Feb  2 04:44:35 np0005604790 python3.9[114615]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  2 04:44:35 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v137: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Feb  2 04:44:35 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:44:35 : epoch 6980713d : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Feb  2 04:44:35 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:44:35 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c58002690 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:44:36 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:44:36 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:44:36 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:44:36.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:44:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:44:36 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c70001ff0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:44:36 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:44:36 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:44:36 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:44:36 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:44:36.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 04:44:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:44:36 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c640046f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:44:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:44:36.956Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:44:37 np0005604790 python3.9[114770]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb  2 04:44:37 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v138: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Feb  2 04:44:37 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:44:37 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c54001230 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:44:38 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:44:38 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:44:38 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:44:38.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:44:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:44:38 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c58002690 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:44:38 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:44:38 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:44:38 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:44:38.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:44:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:44:38 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c70003580 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:44:38 np0005604790 python3.9[114925]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 04:44:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:44:38.858Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:44:39 np0005604790 python3.9[115077]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Feb  2 04:44:39 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v139: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Feb  2 04:44:39 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:44:39 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c70003580 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:44:40 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:44:40 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:44:40 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:44:40.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:44:40 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:44:40 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c540020f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:44:40 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:44:40 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:44:40 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:44:40.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:44:40 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:44:40 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c58002690 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:44:40 np0005604790 python3.9[115229]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 04:44:41 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:44:41 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-nfs-cephfs-compute-0-ooxkuo[97796]: [WARNING] 032/094441 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Feb  2 04:44:41 np0005604790 python3.9[115387]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  2 04:44:41 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v140: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Feb  2 04:44:41 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:44:41 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c640046f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:44:42 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:44:42 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:44:42 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:44:42.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:44:42 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:44:42 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c640046f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:44:42 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:44:42 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:44:42 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:44:42.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:44:42 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:44:42 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c540020f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:44:43 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v141: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Feb  2 04:44:43 np0005604790 python3.9[115543]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:44:43 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:44:43 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c540020f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:44:44 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:44:44 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:44:44 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:44:44.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:44:44 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:44:44 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c640046f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:44:44 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:44:44 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:44:44 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:44:44.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:44:44 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:44:44 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c640046f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:44:44 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:09:44:44] "GET /metrics HTTP/1.1" 200 48332 "" "Prometheus/2.51.0"
Feb  2 04:44:44 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:09:44:44] "GET /metrics HTTP/1.1" 200 48332 "" "Prometheus/2.51.0"
Feb  2 04:44:45 np0005604790 python3.9[115831]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Feb  2 04:44:45 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v142: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Feb  2 04:44:45 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:44:45 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c540020f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:44:46 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:44:46 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:44:46 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:44:46.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:44:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:44:46 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c58002690 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:44:46 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:44:46 np0005604790 python3.9[115983]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 04:44:46 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:44:46 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:44:46 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:44:46.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:44:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:44:46 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c640046f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:44:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:44:46.957Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:44:47 np0005604790 python3.9[116137]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  2 04:44:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 04:44:47 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 04:44:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:44:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:44:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:44:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:44:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:44:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:44:47 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v143: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Feb  2 04:44:47 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:44:47 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c70003580 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:44:48 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:44:48 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:44:48 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:44:48.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:44:48 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:44:48 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c70003580 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:44:48 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:44:48 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:44:48 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:44:48.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:44:48 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:44:48 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c58002690 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:44:48 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:44:48.859Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb  2 04:44:49 np0005604790 python3.9[116292]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  2 04:44:49 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v144: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Feb  2 04:44:49 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:44:49 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c640046f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:44:50 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:44:50 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:44:50 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:44:50.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 04:44:50 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:44:50 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c70003580 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:44:50 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:44:50 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:44:50 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:44:50.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:44:50 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:44:50 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c70003580 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:44:51 np0005604790 python3.9[116447]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 04:44:51 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:44:51 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v145: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Feb  2 04:44:51 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:44:51 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c58002690 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:44:52 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:44:52 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:44:52 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:44:52.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:44:52 np0005604790 python3.9[116628]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Feb  2 04:44:52 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:44:52 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c640046f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:44:52 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:44:52 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:44:52 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:44:52.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:44:52 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:44:52 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c640046f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:44:53 np0005604790 systemd[1]: session-40.scope: Deactivated successfully.
Feb  2 04:44:53 np0005604790 systemd[1]: session-40.scope: Consumed 17.591s CPU time.
Feb  2 04:44:53 np0005604790 systemd-logind[793]: Session 40 logged out. Waiting for processes to exit.
Feb  2 04:44:53 np0005604790 systemd-logind[793]: Removed session 40.
Feb  2 04:44:53 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v146: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Feb  2 04:44:53 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:44:53 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c640046f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:44:54 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:44:54 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:44:54 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:44:54.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:44:54 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:44:54 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c640046f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:44:54 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:44:54 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:44:54 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:44:54.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:44:54 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:44:54 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c48000b60 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:44:54 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:09:44:54] "GET /metrics HTTP/1.1" 200 48330 "" "Prometheus/2.51.0"
Feb  2 04:44:54 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:09:44:54] "GET /metrics HTTP/1.1" 200 48330 "" "Prometheus/2.51.0"
Feb  2 04:44:55 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v147: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:44:55 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:44:55 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c74004e90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:44:56 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:44:56 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:44:56 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:44:56.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:44:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:44:56 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c640046f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:44:56 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:44:56 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:44:56 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:44:56 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:44:56.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 04:44:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:44:56 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c70003580 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:44:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:44:56.958Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb  2 04:44:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:44:56.959Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb  2 04:44:57 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v148: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:44:57 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:44:57 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c480016a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:44:58 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:44:58 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:44:58 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:44:58.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:44:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:44:58 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c74004e90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:44:58 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:44:58 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:44:58 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:44:58.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:44:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:44:58 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c640046f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:44:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:44:58.860Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:44:58 np0005604790 systemd-logind[793]: New session 41 of user zuul.
Feb  2 04:44:58 np0005604790 systemd[1]: Started Session 41 of User zuul.
Feb  2 04:44:59 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v149: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Feb  2 04:44:59 np0005604790 python3.9[116815]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 04:44:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:44:59 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c70003580 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:45:00 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:45:00 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:45:00 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:45:00.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:45:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:45:00 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c480016a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:45:00 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:45:00 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:45:00 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:45:00.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:45:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:45:00 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c74004e90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:45:01 np0005604790 python3.9[116970]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb  2 04:45:01 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:45:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-nfs-cephfs-compute-0-ooxkuo[97796]: [WARNING] 032/094501 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Feb  2 04:45:01 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v150: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:45:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:45:01 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c640046f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:45:02 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:45:02 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:45:02 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:45:02.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:45:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 04:45:02 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 04:45:02 np0005604790 kernel: ganesha.nfsd[114375]: segfault at 50 ip 00007f2cfeec232e sp 00007f2c5f7fd210 error 4 in libntirpc.so.5.8[7f2cfeea7000+2c000] likely on CPU 4 (core 0, socket 4)
Feb  2 04:45:02 np0005604790 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Feb  2 04:45:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[97467]: 02/02/2026 09:45:02 : epoch 6980713d : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2c70003580 fd 49 proxy ignored for local
Feb  2 04:45:02 np0005604790 systemd[1]: Created slice Slice /system/systemd-coredump.
Feb  2 04:45:02 np0005604790 systemd[1]: Started Process Core Dump (PID 117166/UID 0).
Feb  2 04:45:02 np0005604790 python3.9[117165]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:45:02 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:45:02 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:45:02 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:45:02.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:45:02 np0005604790 systemd[1]: session-41.scope: Deactivated successfully.
Feb  2 04:45:02 np0005604790 systemd[1]: session-41.scope: Consumed 2.364s CPU time.
Feb  2 04:45:02 np0005604790 systemd-logind[793]: Session 41 logged out. Waiting for processes to exit.
Feb  2 04:45:02 np0005604790 systemd-logind[793]: Removed session 41.
Feb  2 04:45:03 np0005604790 systemd-coredump[117167]: Process 97471 (ganesha.nfsd) of user 0 dumped core.
    Stack trace of thread 68:
    #0  0x00007f2cfeec232e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
    #1  0x0000000000000000 n/a (n/a + 0x0)
    #2  0x00007f2cfeecc900 n/a (/usr/lib64/libntirpc.so.5.8 + 0x2c900)
    ELF object binary architecture: AMD x86-64
Feb  2 04:45:03 np0005604790 systemd[1]: systemd-coredump@0-117166-0.service: Deactivated successfully.
Feb  2 04:45:03 np0005604790 podman[117260]: 2026-02-02 09:45:03.166483185 +0000 UTC m=+0.039815580 container died 4dbbc2880b363c29701e75389ef46bb1b6a317f598aba4db2d598b0c88013bb0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Feb  2 04:45:03 np0005604790 systemd[1]: var-lib-containers-storage-overlay-db2abf9157a619349a7106ff0239cd1f0ec34b0e136456ca92e243a98eefde34-merged.mount: Deactivated successfully.
Feb  2 04:45:03 np0005604790 podman[117260]: 2026-02-02 09:45:03.216397924 +0000 UTC m=+0.089730269 container remove 4dbbc2880b363c29701e75389ef46bb1b6a317f598aba4db2d598b0c88013bb0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:45:03 np0005604790 systemd[1]: ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1@nfs.cephfs.2.0.compute-0.fdwwab.service: Main process exited, code=exited, status=139/n/a
Feb  2 04:45:03 np0005604790 systemd[1]: ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1@nfs.cephfs.2.0.compute-0.fdwwab.service: Failed with result 'exit-code'.
Feb  2 04:45:03 np0005604790 systemd[1]: ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1@nfs.cephfs.2.0.compute-0.fdwwab.service: Consumed 1.673s CPU time.
Feb  2 04:45:03 np0005604790 podman[117360]: 2026-02-02 09:45:03.544870407 +0000 UTC m=+0.069758888 container exec 79ef7165b184aa21ab9e464efe33891b6304e1ba848414549f299d1b301d6783 (image=quay.io/ceph/ceph:v19, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 04:45:03 np0005604790 podman[117360]: 2026-02-02 09:45:03.657126285 +0000 UTC m=+0.182014756 container exec_died 79ef7165b184aa21ab9e464efe33891b6304e1ba848414549f299d1b301d6783 (image=quay.io/ceph/ceph:v19, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb  2 04:45:03 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v151: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:45:04 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:45:04 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:45:04 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:45:04.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:45:04 np0005604790 podman[117501]: 2026-02-02 09:45:04.202566953 +0000 UTC m=+0.073600170 container exec 19feecaa7fcd517b2bfb973fc8fcf623ad4b0956f16c53292d24fe12f4780190 (image=quay.io/ceph/haproxy:2.3, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-rgw-default-compute-0-avekxu)
Feb  2 04:45:04 np0005604790 podman[117501]: 2026-02-02 09:45:04.217917991 +0000 UTC m=+0.088951138 container exec_died 19feecaa7fcd517b2bfb973fc8fcf623ad4b0956f16c53292d24fe12f4780190 (image=quay.io/ceph/haproxy:2.3, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-rgw-default-compute-0-avekxu)
Feb  2 04:45:04 np0005604790 podman[117567]: 2026-02-02 09:45:04.425454915 +0000 UTC m=+0.048373068 container exec 690ade5beb0aa03a99cf3b4b1da5291c57988034f4a9506f786da5a6fb824998 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 04:45:04 np0005604790 podman[117567]: 2026-02-02 09:45:04.435962855 +0000 UTC m=+0.058880968 container exec_died 690ade5beb0aa03a99cf3b4b1da5291c57988034f4a9506f786da5a6fb824998 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 04:45:04 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:45:04 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:45:04 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:45:04.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:45:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:09:45:04] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Feb  2 04:45:04 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:09:45:04] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Feb  2 04:45:04 np0005604790 podman[117703]: 2026-02-02 09:45:04.992978151 +0000 UTC m=+0.065262848 container exec 5f2bbf7994e4a92479e9d1b19dfd01c3876abee624a9d2019b778a31c38bb173 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-keepalived-nfs-cephfs-compute-0-pqolko, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, name=keepalived, vcs-type=git, architecture=x86_64, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb  2 04:45:05 np0005604790 podman[117703]: 2026-02-02 09:45:05.0308753 +0000 UTC m=+0.103159937 container exec_died 5f2bbf7994e4a92479e9d1b19dfd01c3876abee624a9d2019b778a31c38bb173 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-keepalived-nfs-cephfs-compute-0-pqolko, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, vcs-type=git, architecture=x86_64, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph., release=1793, io.openshift.expose-services=, vendor=Red Hat, Inc., io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Feb  2 04:45:05 np0005604790 podman[117769]: 2026-02-02 09:45:05.288431885 +0000 UTC m=+0.067676502 container exec 63a3970896ab11bd3cbece1b971a05452b0bd8a5b643f0eac52d5b3639ab19c4 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 04:45:05 np0005604790 podman[117769]: 2026-02-02 09:45:05.330993698 +0000 UTC m=+0.110238315 container exec_died 63a3970896ab11bd3cbece1b971a05452b0bd8a5b643f0eac52d5b3639ab19c4 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 04:45:05 np0005604790 podman[117846]: 2026-02-02 09:45:05.563508256 +0000 UTC m=+0.070403725 container exec 207575e5b32cec4e058e0ca4f48bad94c6e447fc9aa765a0aa8117f4604125b2 (image=quay.io/ceph/grafana:10.4.0, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Feb  2 04:45:05 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v152: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb  2 04:45:05 np0005604790 podman[117846]: 2026-02-02 09:45:05.746693632 +0000 UTC m=+0.253589061 container exec_died 207575e5b32cec4e058e0ca4f48bad94c6e447fc9aa765a0aa8117f4604125b2 (image=quay.io/ceph/grafana:10.4.0, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Feb  2 04:45:06 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:45:06 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:45:06 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:45:06.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:45:06 np0005604790 podman[117942]: 2026-02-02 09:45:06.079963822 +0000 UTC m=+0.082352483 container exec 214561532da1ee185d9ab0bf03f5b2d46320266c11cddad43265e8842ed9d667 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 04:45:06 np0005604790 podman[117942]: 2026-02-02 09:45:06.114905032 +0000 UTC m=+0.117293743 container exec_died 214561532da1ee185d9ab0bf03f5b2d46320266c11cddad43265e8842ed9d667 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 04:45:06 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 04:45:06 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:45:06 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 04:45:06 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:45:06 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:45:06 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:45:06 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:45:06 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:45:06.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:45:06 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 04:45:06 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 04:45:06 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 04:45:06 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb  2 04:45:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:45:06.960Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:45:06 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v153: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 181 B/s rd, 0 op/s
Feb  2 04:45:06 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 04:45:06 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:45:06 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Feb  2 04:45:07 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:45:07 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 04:45:07 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb  2 04:45:07 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 04:45:07 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb  2 04:45:07 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 04:45:07 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 04:45:07 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:45:07 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:45:07 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb  2 04:45:07 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:45:07 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:45:07 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb  2 04:45:07 np0005604790 ceph-mon[74489]: log_channel(cluster) log [WRN] : Health check update: 3 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Feb  2 04:45:07 np0005604790 podman[118156]: 2026-02-02 09:45:07.658138238 +0000 UTC m=+0.068208116 container create a2245d9db0d9d8fbfe30bbc3312e759f702f6dc96dd1d4fb11146224d09d891e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_chatelet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:45:07 np0005604790 systemd[1]: Started libpod-conmon-a2245d9db0d9d8fbfe30bbc3312e759f702f6dc96dd1d4fb11146224d09d891e.scope.
Feb  2 04:45:07 np0005604790 podman[118156]: 2026-02-02 09:45:07.629149917 +0000 UTC m=+0.039219845 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:45:07 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:45:07 np0005604790 podman[118156]: 2026-02-02 09:45:07.749516411 +0000 UTC m=+0.159586309 container init a2245d9db0d9d8fbfe30bbc3312e759f702f6dc96dd1d4fb11146224d09d891e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_chatelet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Feb  2 04:45:07 np0005604790 podman[118156]: 2026-02-02 09:45:07.757583506 +0000 UTC m=+0.167653394 container start a2245d9db0d9d8fbfe30bbc3312e759f702f6dc96dd1d4fb11146224d09d891e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_chatelet, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True)
Feb  2 04:45:07 np0005604790 peaceful_chatelet[118173]: 167 167
Feb  2 04:45:07 np0005604790 systemd[1]: libpod-a2245d9db0d9d8fbfe30bbc3312e759f702f6dc96dd1d4fb11146224d09d891e.scope: Deactivated successfully.
Feb  2 04:45:07 np0005604790 podman[118156]: 2026-02-02 09:45:07.772846132 +0000 UTC m=+0.182916080 container attach a2245d9db0d9d8fbfe30bbc3312e759f702f6dc96dd1d4fb11146224d09d891e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_chatelet, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Feb  2 04:45:07 np0005604790 podman[118156]: 2026-02-02 09:45:07.773403327 +0000 UTC m=+0.183473215 container died a2245d9db0d9d8fbfe30bbc3312e759f702f6dc96dd1d4fb11146224d09d891e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_chatelet, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb  2 04:45:07 np0005604790 systemd[1]: var-lib-containers-storage-overlay-dd23919bf620f08efeb785803b5094d792e3769fddf19ab042f44f6e6d79b74e-merged.mount: Deactivated successfully.
Feb  2 04:45:07 np0005604790 podman[118156]: 2026-02-02 09:45:07.823662704 +0000 UTC m=+0.233732592 container remove a2245d9db0d9d8fbfe30bbc3312e759f702f6dc96dd1d4fb11146224d09d891e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_chatelet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:45:07 np0005604790 systemd[1]: libpod-conmon-a2245d9db0d9d8fbfe30bbc3312e759f702f6dc96dd1d4fb11146224d09d891e.scope: Deactivated successfully.
Feb  2 04:45:08 np0005604790 podman[118199]: 2026-02-02 09:45:08.014835672 +0000 UTC m=+0.055911909 container create fe2108f856720fc46b6dc6ad164e4cdb34600d8b099d1c791b720b3df4ab4e1a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_mclean, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb  2 04:45:08 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:45:08 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:45:08 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:45:08.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:45:08 np0005604790 systemd[1]: Started libpod-conmon-fe2108f856720fc46b6dc6ad164e4cdb34600d8b099d1c791b720b3df4ab4e1a.scope.
Feb  2 04:45:08 np0005604790 podman[118199]: 2026-02-02 09:45:07.992807216 +0000 UTC m=+0.033883533 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:45:08 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:45:08 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abdf5b547a86d15ed55e15270265e77476e38c414c46c0e49a2d29fd63ac3f4d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 04:45:08 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abdf5b547a86d15ed55e15270265e77476e38c414c46c0e49a2d29fd63ac3f4d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:45:08 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abdf5b547a86d15ed55e15270265e77476e38c414c46c0e49a2d29fd63ac3f4d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:45:08 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abdf5b547a86d15ed55e15270265e77476e38c414c46c0e49a2d29fd63ac3f4d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 04:45:08 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abdf5b547a86d15ed55e15270265e77476e38c414c46c0e49a2d29fd63ac3f4d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 04:45:08 np0005604790 podman[118199]: 2026-02-02 09:45:08.139618034 +0000 UTC m=+0.180694271 container init fe2108f856720fc46b6dc6ad164e4cdb34600d8b099d1c791b720b3df4ab4e1a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_mclean, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 04:45:08 np0005604790 podman[118199]: 2026-02-02 09:45:08.148355126 +0000 UTC m=+0.189431363 container start fe2108f856720fc46b6dc6ad164e4cdb34600d8b099d1c791b720b3df4ab4e1a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_mclean, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Feb  2 04:45:08 np0005604790 podman[118199]: 2026-02-02 09:45:08.156885483 +0000 UTC m=+0.197961750 container attach fe2108f856720fc46b6dc6ad164e4cdb34600d8b099d1c791b720b3df4ab4e1a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_mclean, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True)
Feb  2 04:45:08 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-nfs-cephfs-compute-0-ooxkuo[97796]: [WARNING] 032/094508 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Feb  2 04:45:08 np0005604790 ceph-mon[74489]: Health check update: 3 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Feb  2 04:45:08 np0005604790 systemd-logind[793]: New session 42 of user zuul.
Feb  2 04:45:08 np0005604790 systemd[1]: Started Session 42 of User zuul.
Feb  2 04:45:08 np0005604790 nifty_mclean[118215]: --> passed data devices: 0 physical, 1 LVM
Feb  2 04:45:08 np0005604790 nifty_mclean[118215]: --> All data devices are unavailable
Feb  2 04:45:08 np0005604790 systemd[1]: libpod-fe2108f856720fc46b6dc6ad164e4cdb34600d8b099d1c791b720b3df4ab4e1a.scope: Deactivated successfully.
Feb  2 04:45:08 np0005604790 podman[118199]: 2026-02-02 09:45:08.531855004 +0000 UTC m=+0.572931271 container died fe2108f856720fc46b6dc6ad164e4cdb34600d8b099d1c791b720b3df4ab4e1a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_mclean, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 04:45:08 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:45:08 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:45:08 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:45:08.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:45:08 np0005604790 systemd[1]: var-lib-containers-storage-overlay-abdf5b547a86d15ed55e15270265e77476e38c414c46c0e49a2d29fd63ac3f4d-merged.mount: Deactivated successfully.
Feb  2 04:45:08 np0005604790 podman[118199]: 2026-02-02 09:45:08.627329196 +0000 UTC m=+0.668405453 container remove fe2108f856720fc46b6dc6ad164e4cdb34600d8b099d1c791b720b3df4ab4e1a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_mclean, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:45:08 np0005604790 systemd[1]: libpod-conmon-fe2108f856720fc46b6dc6ad164e4cdb34600d8b099d1c791b720b3df4ab4e1a.scope: Deactivated successfully.
Feb  2 04:45:08 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:45:08.861Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:45:08 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v154: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 544 B/s rd, 90 B/s wr, 0 op/s
Feb  2 04:45:09 np0005604790 podman[118487]: 2026-02-02 09:45:09.21398723 +0000 UTC m=+0.059088854 container create b699ee5c6f58f91ea51986d60effe5cf0a1c3998e5d9b62e913e2e59356cc7da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb  2 04:45:09 np0005604790 systemd[1]: Started libpod-conmon-b699ee5c6f58f91ea51986d60effe5cf0a1c3998e5d9b62e913e2e59356cc7da.scope.
Feb  2 04:45:09 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:45:09 np0005604790 podman[118487]: 2026-02-02 09:45:09.18505261 +0000 UTC m=+0.030154284 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:45:09 np0005604790 podman[118487]: 2026-02-02 09:45:09.290821915 +0000 UTC m=+0.135923559 container init b699ee5c6f58f91ea51986d60effe5cf0a1c3998e5d9b62e913e2e59356cc7da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_bouman, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 04:45:09 np0005604790 podman[118487]: 2026-02-02 09:45:09.298840138 +0000 UTC m=+0.143941762 container start b699ee5c6f58f91ea51986d60effe5cf0a1c3998e5d9b62e913e2e59356cc7da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Feb  2 04:45:09 np0005604790 fervent_bouman[118504]: 167 167
Feb  2 04:45:09 np0005604790 podman[118487]: 2026-02-02 09:45:09.303107132 +0000 UTC m=+0.148208776 container attach b699ee5c6f58f91ea51986d60effe5cf0a1c3998e5d9b62e913e2e59356cc7da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_bouman, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:45:09 np0005604790 systemd[1]: libpod-b699ee5c6f58f91ea51986d60effe5cf0a1c3998e5d9b62e913e2e59356cc7da.scope: Deactivated successfully.
Feb  2 04:45:09 np0005604790 conmon[118504]: conmon b699ee5c6f58f91ea519 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b699ee5c6f58f91ea51986d60effe5cf0a1c3998e5d9b62e913e2e59356cc7da.scope/container/memory.events
Feb  2 04:45:09 np0005604790 podman[118487]: 2026-02-02 09:45:09.304288163 +0000 UTC m=+0.149389747 container died b699ee5c6f58f91ea51986d60effe5cf0a1c3998e5d9b62e913e2e59356cc7da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:45:09 np0005604790 systemd[1]: var-lib-containers-storage-overlay-a91081b7cf491917045f1ef7b671c03236ee8bac18d350e71443d35bd1953db8-merged.mount: Deactivated successfully.
Feb  2 04:45:09 np0005604790 podman[118487]: 2026-02-02 09:45:09.343415335 +0000 UTC m=+0.188516949 container remove b699ee5c6f58f91ea51986d60effe5cf0a1c3998e5d9b62e913e2e59356cc7da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_bouman, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:45:09 np0005604790 systemd[1]: libpod-conmon-b699ee5c6f58f91ea51986d60effe5cf0a1c3998e5d9b62e913e2e59356cc7da.scope: Deactivated successfully.
Feb  2 04:45:09 np0005604790 python3.9[118472]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 04:45:09 np0005604790 podman[118534]: 2026-02-02 09:45:09.520854958 +0000 UTC m=+0.063772179 container create d71a8ca0ab0aafc9dab5035629793e42eeb6c4df03b86d882147839144de3220 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_noether, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 04:45:09 np0005604790 systemd[1]: Started libpod-conmon-d71a8ca0ab0aafc9dab5035629793e42eeb6c4df03b86d882147839144de3220.scope.
Feb  2 04:45:09 np0005604790 podman[118534]: 2026-02-02 09:45:09.491873996 +0000 UTC m=+0.034791277 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:45:09 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:45:09 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/046ec89486bbfef3053e638bba84b170889b67f6b6db7fd6f37154106de311a8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 04:45:09 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/046ec89486bbfef3053e638bba84b170889b67f6b6db7fd6f37154106de311a8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:45:09 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/046ec89486bbfef3053e638bba84b170889b67f6b6db7fd6f37154106de311a8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:45:09 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/046ec89486bbfef3053e638bba84b170889b67f6b6db7fd6f37154106de311a8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 04:45:09 np0005604790 podman[118534]: 2026-02-02 09:45:09.626368186 +0000 UTC m=+0.169285397 container init d71a8ca0ab0aafc9dab5035629793e42eeb6c4df03b86d882147839144de3220 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb  2 04:45:09 np0005604790 podman[118534]: 2026-02-02 09:45:09.643218305 +0000 UTC m=+0.186135526 container start d71a8ca0ab0aafc9dab5035629793e42eeb6c4df03b86d882147839144de3220 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_noether, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:45:09 np0005604790 podman[118534]: 2026-02-02 09:45:09.648076014 +0000 UTC m=+0.190993225 container attach d71a8ca0ab0aafc9dab5035629793e42eeb6c4df03b86d882147839144de3220 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_noether, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:45:09 np0005604790 reverent_noether[118552]: {
Feb  2 04:45:09 np0005604790 reverent_noether[118552]:    "1": [
Feb  2 04:45:09 np0005604790 reverent_noether[118552]:        {
Feb  2 04:45:09 np0005604790 reverent_noether[118552]:            "devices": [
Feb  2 04:45:09 np0005604790 reverent_noether[118552]:                "/dev/loop3"
Feb  2 04:45:09 np0005604790 reverent_noether[118552]:            ],
Feb  2 04:45:09 np0005604790 reverent_noether[118552]:            "lv_name": "ceph_lv0",
Feb  2 04:45:09 np0005604790 reverent_noether[118552]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 04:45:09 np0005604790 reverent_noether[118552]:            "lv_size": "21470642176",
Feb  2 04:45:09 np0005604790 reverent_noether[118552]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d241d473-9fcb-5f74-b163-f1ca4454e7f1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fabfc705-a3af-416c-81a4-3fd4d777fb5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 04:45:09 np0005604790 reverent_noether[118552]:            "lv_uuid": "lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a",
Feb  2 04:45:09 np0005604790 reverent_noether[118552]:            "name": "ceph_lv0",
Feb  2 04:45:09 np0005604790 reverent_noether[118552]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 04:45:09 np0005604790 reverent_noether[118552]:            "tags": {
Feb  2 04:45:09 np0005604790 reverent_noether[118552]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 04:45:09 np0005604790 reverent_noether[118552]:                "ceph.block_uuid": "lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a",
Feb  2 04:45:09 np0005604790 reverent_noether[118552]:                "ceph.cephx_lockbox_secret": "",
Feb  2 04:45:09 np0005604790 reverent_noether[118552]:                "ceph.cluster_fsid": "d241d473-9fcb-5f74-b163-f1ca4454e7f1",
Feb  2 04:45:09 np0005604790 reverent_noether[118552]:                "ceph.cluster_name": "ceph",
Feb  2 04:45:09 np0005604790 reverent_noether[118552]:                "ceph.crush_device_class": "",
Feb  2 04:45:09 np0005604790 reverent_noether[118552]:                "ceph.encrypted": "0",
Feb  2 04:45:09 np0005604790 reverent_noether[118552]:                "ceph.osd_fsid": "fabfc705-a3af-416c-81a4-3fd4d777fb5f",
Feb  2 04:45:09 np0005604790 reverent_noether[118552]:                "ceph.osd_id": "1",
Feb  2 04:45:09 np0005604790 reverent_noether[118552]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 04:45:09 np0005604790 reverent_noether[118552]:                "ceph.type": "block",
Feb  2 04:45:09 np0005604790 reverent_noether[118552]:                "ceph.vdo": "0",
Feb  2 04:45:09 np0005604790 reverent_noether[118552]:                "ceph.with_tpm": "0"
Feb  2 04:45:09 np0005604790 reverent_noether[118552]:            },
Feb  2 04:45:09 np0005604790 reverent_noether[118552]:            "type": "block",
Feb  2 04:45:09 np0005604790 reverent_noether[118552]:            "vg_name": "ceph_vg0"
Feb  2 04:45:09 np0005604790 reverent_noether[118552]:        }
Feb  2 04:45:09 np0005604790 reverent_noether[118552]:    ]
Feb  2 04:45:09 np0005604790 reverent_noether[118552]: }
Feb  2 04:45:09 np0005604790 systemd[1]: libpod-d71a8ca0ab0aafc9dab5035629793e42eeb6c4df03b86d882147839144de3220.scope: Deactivated successfully.
Feb  2 04:45:09 np0005604790 podman[118534]: 2026-02-02 09:45:09.893423054 +0000 UTC m=+0.436340285 container died d71a8ca0ab0aafc9dab5035629793e42eeb6c4df03b86d882147839144de3220 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_noether, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb  2 04:45:09 np0005604790 systemd[1]: var-lib-containers-storage-overlay-046ec89486bbfef3053e638bba84b170889b67f6b6db7fd6f37154106de311a8-merged.mount: Deactivated successfully.
Feb  2 04:45:09 np0005604790 podman[118534]: 2026-02-02 09:45:09.946084766 +0000 UTC m=+0.489001997 container remove d71a8ca0ab0aafc9dab5035629793e42eeb6c4df03b86d882147839144de3220 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_noether, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Feb  2 04:45:09 np0005604790 systemd[1]: libpod-conmon-d71a8ca0ab0aafc9dab5035629793e42eeb6c4df03b86d882147839144de3220.scope: Deactivated successfully.
Feb  2 04:45:10 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:45:10 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:45:10 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:45:10.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:45:10 np0005604790 python3.9[118772]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 04:45:10 np0005604790 podman[118820]: 2026-02-02 09:45:10.529449743 +0000 UTC m=+0.092824471 container create a48cd6c807accf7c8e5dba0dd4cb5b36034343789c314c64c04ecc7c8e7e571c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_leakey, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Feb  2 04:45:10 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:45:10 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:45:10 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:45:10.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:45:10 np0005604790 systemd[1]: Started libpod-conmon-a48cd6c807accf7c8e5dba0dd4cb5b36034343789c314c64c04ecc7c8e7e571c.scope.
Feb  2 04:45:10 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:45:10 np0005604790 podman[118820]: 2026-02-02 09:45:10.500320048 +0000 UTC m=+0.063694816 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:45:10 np0005604790 podman[118820]: 2026-02-02 09:45:10.605892438 +0000 UTC m=+0.169267166 container init a48cd6c807accf7c8e5dba0dd4cb5b36034343789c314c64c04ecc7c8e7e571c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_leakey, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Feb  2 04:45:10 np0005604790 podman[118820]: 2026-02-02 09:45:10.610609454 +0000 UTC m=+0.173984182 container start a48cd6c807accf7c8e5dba0dd4cb5b36034343789c314c64c04ecc7c8e7e571c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_leakey, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:45:10 np0005604790 podman[118820]: 2026-02-02 09:45:10.614126207 +0000 UTC m=+0.177500925 container attach a48cd6c807accf7c8e5dba0dd4cb5b36034343789c314c64c04ecc7c8e7e571c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb  2 04:45:10 np0005604790 wonderful_leakey[118836]: 167 167
Feb  2 04:45:10 np0005604790 systemd[1]: libpod-a48cd6c807accf7c8e5dba0dd4cb5b36034343789c314c64c04ecc7c8e7e571c.scope: Deactivated successfully.
Feb  2 04:45:10 np0005604790 podman[118820]: 2026-02-02 09:45:10.616553382 +0000 UTC m=+0.179928070 container died a48cd6c807accf7c8e5dba0dd4cb5b36034343789c314c64c04ecc7c8e7e571c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_leakey, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb  2 04:45:10 np0005604790 systemd[1]: var-lib-containers-storage-overlay-5e535eb6cecb4c59c94ce29b07f503272c4fd377f84ffa0d0b69ad34e2d880ce-merged.mount: Deactivated successfully.
Feb  2 04:45:10 np0005604790 podman[118820]: 2026-02-02 09:45:10.665394032 +0000 UTC m=+0.228768750 container remove a48cd6c807accf7c8e5dba0dd4cb5b36034343789c314c64c04ecc7c8e7e571c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_leakey, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True)
Feb  2 04:45:10 np0005604790 systemd[1]: libpod-conmon-a48cd6c807accf7c8e5dba0dd4cb5b36034343789c314c64c04ecc7c8e7e571c.scope: Deactivated successfully.
Feb  2 04:45:10 np0005604790 podman[118884]: 2026-02-02 09:45:10.825046331 +0000 UTC m=+0.055270822 container create f97413f29b4cd3d5cee0d12cf455a530672223053364fa759bc5eef18bb71fee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:45:10 np0005604790 systemd[1]: Started libpod-conmon-f97413f29b4cd3d5cee0d12cf455a530672223053364fa759bc5eef18bb71fee.scope.
Feb  2 04:45:10 np0005604790 podman[118884]: 2026-02-02 09:45:10.801043692 +0000 UTC m=+0.031268274 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:45:10 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:45:10 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4befee0a706961aca607a2effc695f33cb7aaf336a3602415b7e622c3acf274b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 04:45:10 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4befee0a706961aca607a2effc695f33cb7aaf336a3602415b7e622c3acf274b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:45:10 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4befee0a706961aca607a2effc695f33cb7aaf336a3602415b7e622c3acf274b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:45:10 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4befee0a706961aca607a2effc695f33cb7aaf336a3602415b7e622c3acf274b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 04:45:10 np0005604790 podman[118884]: 2026-02-02 09:45:10.951779945 +0000 UTC m=+0.182004516 container init f97413f29b4cd3d5cee0d12cf455a530672223053364fa759bc5eef18bb71fee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_carver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 04:45:10 np0005604790 podman[118884]: 2026-02-02 09:45:10.960667571 +0000 UTC m=+0.190892042 container start f97413f29b4cd3d5cee0d12cf455a530672223053364fa759bc5eef18bb71fee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_carver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Feb  2 04:45:10 np0005604790 podman[118884]: 2026-02-02 09:45:10.964163574 +0000 UTC m=+0.194388145 container attach f97413f29b4cd3d5cee0d12cf455a530672223053364fa759bc5eef18bb71fee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_carver, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:45:10 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v155: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 363 B/s rd, 90 B/s wr, 0 op/s
Feb  2 04:45:11 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:45:11 np0005604790 python3.9[119033]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb  2 04:45:11 np0005604790 lvm[119112]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 04:45:11 np0005604790 lvm[119112]: VG ceph_vg0 finished
Feb  2 04:45:11 np0005604790 practical_carver[118953]: {}
Feb  2 04:45:11 np0005604790 systemd[1]: libpod-f97413f29b4cd3d5cee0d12cf455a530672223053364fa759bc5eef18bb71fee.scope: Deactivated successfully.
Feb  2 04:45:11 np0005604790 systemd[1]: libpod-f97413f29b4cd3d5cee0d12cf455a530672223053364fa759bc5eef18bb71fee.scope: Consumed 1.266s CPU time.
Feb  2 04:45:11 np0005604790 podman[118884]: 2026-02-02 09:45:11.786134282 +0000 UTC m=+1.016358803 container died f97413f29b4cd3d5cee0d12cf455a530672223053364fa759bc5eef18bb71fee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_carver, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Feb  2 04:45:11 np0005604790 systemd[1]: var-lib-containers-storage-overlay-4befee0a706961aca607a2effc695f33cb7aaf336a3602415b7e622c3acf274b-merged.mount: Deactivated successfully.
Feb  2 04:45:11 np0005604790 podman[118884]: 2026-02-02 09:45:11.844065244 +0000 UTC m=+1.074289765 container remove f97413f29b4cd3d5cee0d12cf455a530672223053364fa759bc5eef18bb71fee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_carver, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Feb  2 04:45:11 np0005604790 systemd[1]: libpod-conmon-f97413f29b4cd3d5cee0d12cf455a530672223053364fa759bc5eef18bb71fee.scope: Deactivated successfully.
Feb  2 04:45:11 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 04:45:12 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:45:12 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:45:12 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:45:12.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:45:12 np0005604790 python3.9[119227]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  2 04:45:12 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:45:12 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 04:45:12 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:45:12 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:45:12 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:45:12.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:45:12 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-nfs-cephfs-compute-0-ooxkuo[97796]: [WARNING] 032/094512 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Feb  2 04:45:12 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-nfs-cephfs-compute-0-ooxkuo[97796]: [NOTICE] 032/094512 (4) : haproxy version is 2.3.17-d1c9119
Feb  2 04:45:12 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-nfs-cephfs-compute-0-ooxkuo[97796]: [NOTICE] 032/094512 (4) : path to executable is /usr/local/sbin/haproxy
Feb  2 04:45:12 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-nfs-cephfs-compute-0-ooxkuo[97796]: [ALERT] 032/094512 (4) : backend 'backend' has no server available!
Feb  2 04:45:12 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v156: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 363 B/s rd, 90 B/s wr, 0 op/s
Feb  2 04:45:13 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:45:13 np0005604790 systemd[1]: ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1@nfs.cephfs.2.0.compute-0.fdwwab.service: Scheduled restart job, restart counter is at 1.
Feb  2 04:45:13 np0005604790 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.fdwwab for d241d473-9fcb-5f74-b163-f1ca4454e7f1.
Feb  2 04:45:13 np0005604790 systemd[1]: ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1@nfs.cephfs.2.0.compute-0.fdwwab.service: Consumed 1.673s CPU time.
Feb  2 04:45:13 np0005604790 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.fdwwab for d241d473-9fcb-5f74-b163-f1ca4454e7f1...
Feb  2 04:45:13 np0005604790 podman[119301]: 2026-02-02 09:45:13.732922899 +0000 UTC m=+0.032109316 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:45:14 np0005604790 podman[119301]: 2026-02-02 09:45:14.063713434 +0000 UTC m=+0.362899801 container create 69289cca0b92167fdd35a88d308662ef16e1dea879976039560ec621316133bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:45:14 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:45:14 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:45:14 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:45:14 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:45:14 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:45:14.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 04:45:14 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24c71bd2f24efed9992ec0b8551d31d7876a427e957e903cd05f40663750662f/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Feb  2 04:45:14 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24c71bd2f24efed9992ec0b8551d31d7876a427e957e903cd05f40663750662f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:45:14 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24c71bd2f24efed9992ec0b8551d31d7876a427e957e903cd05f40663750662f/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 04:45:14 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24c71bd2f24efed9992ec0b8551d31d7876a427e957e903cd05f40663750662f/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.fdwwab-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 04:45:14 np0005604790 podman[119301]: 2026-02-02 09:45:14.14060806 +0000 UTC m=+0.439794467 container init 69289cca0b92167fdd35a88d308662ef16e1dea879976039560ec621316133bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 04:45:14 np0005604790 podman[119301]: 2026-02-02 09:45:14.144752591 +0000 UTC m=+0.443938958 container start 69289cca0b92167fdd35a88d308662ef16e1dea879976039560ec621316133bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Feb  2 04:45:14 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:45:14 : epoch 6980722a : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Feb  2 04:45:14 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:45:14 : epoch 6980722a : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Feb  2 04:45:14 np0005604790 bash[119301]: 69289cca0b92167fdd35a88d308662ef16e1dea879976039560ec621316133bd
Feb  2 04:45:14 np0005604790 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.fdwwab for d241d473-9fcb-5f74-b163-f1ca4454e7f1.
Feb  2 04:45:14 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:45:14 : epoch 6980722a : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Feb  2 04:45:14 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:45:14 : epoch 6980722a : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Feb  2 04:45:14 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:45:14 : epoch 6980722a : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Feb  2 04:45:14 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:45:14 : epoch 6980722a : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Feb  2 04:45:14 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:45:14 : epoch 6980722a : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Feb  2 04:45:14 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:45:14 : epoch 6980722a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 04:45:14 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:45:14 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:45:14 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:45:14.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:45:14 np0005604790 python3.9[119510]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb  2 04:45:14 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:09:45:14] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Feb  2 04:45:14 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:09:45:14] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Feb  2 04:45:14 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v157: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 544 B/s rd, 272 B/s wr, 0 op/s
Feb  2 04:45:16 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:45:16 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:45:16 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:45:16.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:45:16 np0005604790 python3.9[119707]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:45:16 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:45:16 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:45:16 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:45:16 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:45:16.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:45:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:45:16.961Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:45:16 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v158: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 544 B/s rd, 272 B/s wr, 0 op/s
Feb  2 04:45:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] Optimize plan auto_2026-02-02_09:45:17
Feb  2 04:45:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 04:45:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] do_upmap
Feb  2 04:45:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] pools ['backups', 'images', '.nfs', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.log', 'vms', '.rgw.root', 'cephfs.cephfs.meta', 'volumes', '.mgr']
Feb  2 04:45:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 04:45:17 np0005604790 python3.9[119859]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:45:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 04:45:17 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 04:45:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 04:45:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:45:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 04:45:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:45:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 04:45:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:45:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 04:45:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:45:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 04:45:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:45:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 04:45:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:45:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Feb  2 04:45:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:45:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 04:45:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:45:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 04:45:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:45:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Feb  2 04:45:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:45:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 04:45:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:45:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 04:45:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:45:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb  2 04:45:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:45:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:45:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:45:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:45:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 04:45:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 04:45:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 04:45:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 04:45:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 04:45:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:45:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:45:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 04:45:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 04:45:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 04:45:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 04:45:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 04:45:18 np0005604790 python3.9[120024]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:45:18 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:45:18 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:45:18 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:45:18.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:45:18 np0005604790 python3.9[120102]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:45:18 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:45:18 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:45:18 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:45:18.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:45:18 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:45:18.862Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:45:18 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v159: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 426 B/s wr, 2 op/s
Feb  2 04:45:19 np0005604790 python3.9[120254]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:45:19 np0005604790 python3.9[120333]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 04:45:20 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:45:20 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:45:20 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:45:20.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:45:20 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:45:20 : epoch 6980722a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 04:45:20 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:45:20 : epoch 6980722a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 04:45:20 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:45:20 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:45:20 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:45:20.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:45:20 np0005604790 python3.9[120486]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb  2 04:45:20 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v160: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 341 B/s wr, 1 op/s
Feb  2 04:45:21 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:45:21 np0005604790 python3.9[120638]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb  2 04:45:22 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:45:22 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:45:22 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:45:22.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:45:22 np0005604790 python3.9[120792]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb  2 04:45:22 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:45:22 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:45:22 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:45:22.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:45:22 np0005604790 python3.9[120944]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb  2 04:45:22 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v161: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 341 B/s wr, 1 op/s
Feb  2 04:45:23 np0005604790 python3.9[121097]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  2 04:45:24 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:45:24 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:45:24 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:45:24.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:45:24 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:45:24 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:45:24 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:45:24.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:45:24 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:09:45:24] "GET /metrics HTTP/1.1" 200 48336 "" "Prometheus/2.51.0"
Feb  2 04:45:24 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:09:45:24] "GET /metrics HTTP/1.1" 200 48336 "" "Prometheus/2.51.0"
Feb  2 04:45:24 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v162: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 852 B/s wr, 3 op/s
Feb  2 04:45:25 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-nfs-cephfs-compute-0-ooxkuo[97796]: [WARNING] 032/094525 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 1 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Feb  2 04:45:26 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:45:26 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:45:26 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:45:26.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:45:26 np0005604790 python3.9[121253]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 04:45:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:45:26 : epoch 6980722a : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Feb  2 04:45:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:45:26 : epoch 6980722a : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Feb  2 04:45:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:45:26 : epoch 6980722a : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Feb  2 04:45:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:45:26 : epoch 6980722a : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Feb  2 04:45:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:45:26 : epoch 6980722a : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Feb  2 04:45:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:45:26 : epoch 6980722a : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Feb  2 04:45:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:45:26 : epoch 6980722a : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Feb  2 04:45:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:45:26 : epoch 6980722a : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Feb  2 04:45:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:45:26 : epoch 6980722a : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Feb  2 04:45:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:45:26 : epoch 6980722a : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Feb  2 04:45:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:45:26 : epoch 6980722a : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Feb  2 04:45:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:45:26 : epoch 6980722a : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Feb  2 04:45:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:45:26 : epoch 6980722a : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Feb  2 04:45:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:45:26 : epoch 6980722a : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Feb  2 04:45:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:45:26 : epoch 6980722a : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Feb  2 04:45:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:45:26 : epoch 6980722a : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Feb  2 04:45:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:45:26 : epoch 6980722a : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Feb  2 04:45:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:45:26 : epoch 6980722a : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Feb  2 04:45:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:45:26 : epoch 6980722a : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Feb  2 04:45:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:45:26 : epoch 6980722a : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Feb  2 04:45:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:45:26 : epoch 6980722a : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Feb  2 04:45:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:45:26 : epoch 6980722a : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Feb  2 04:45:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:45:26 : epoch 6980722a : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Feb  2 04:45:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:45:26 : epoch 6980722a : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Feb  2 04:45:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:45:26 : epoch 6980722a : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Feb  2 04:45:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:45:26 : epoch 6980722a : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Feb  2 04:45:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:45:26 : epoch 6980722a : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Feb  2 04:45:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:45:26 : epoch 6980722a : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 04:45:26 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:45:26 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:45:26 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:45:26 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:45:26.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:45:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:45:26 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e0c000df0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:45:26 np0005604790 python3.9[121421]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 04:45:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:45:26.962Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:45:26 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v163: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 682 B/s wr, 3 op/s
Feb  2 04:45:27 np0005604790 python3.9[121574]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 04:45:27 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:45:27 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e00001c40 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:45:28 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:45:28 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:45:28 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:45:28.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:45:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:45:28 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df8001090 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:45:28 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:45:28 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:45:28 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:45:28.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:45:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:45:28 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e0c000df0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:45:28 np0005604790 python3.9[121727]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:45:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:45:28.863Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:45:28 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v164: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.8 KiB/s rd, 1.2 KiB/s wr, 5 op/s
Feb  2 04:45:29 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:45:29 : epoch 6980722a : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 04:45:29 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:45:29 : epoch 6980722a : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 04:45:29 np0005604790 python3.9[121881]: ansible-service_facts Invoked
Feb  2 04:45:29 np0005604790 network[121899]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb  2 04:45:29 np0005604790 network[121900]: 'network-scripts' will be removed from distribution in near future.
Feb  2 04:45:29 np0005604790 network[121901]: It is advised to switch to 'NetworkManager' instead for network management.
Feb  2 04:45:30 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:45:30 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df40016e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:45:30 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:45:30 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:45:30 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:45:30.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:45:30 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-nfs-cephfs-compute-0-ooxkuo[97796]: [WARNING] 032/094530 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Feb  2 04:45:30 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:45:30 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df40016e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:45:30 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:45:30 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:45:30 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:45:30.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 04:45:30 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:45:30 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df8001bb0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:45:30 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v165: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1023 B/s wr, 3 op/s
Feb  2 04:45:31 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:45:32 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:45:32 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df8001bb0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:45:32 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:45:32 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:45:32 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:45:32.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:45:32 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 04:45:32 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 04:45:32 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:45:32 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e00002740 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:45:32 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:45:32 : epoch 6980722a : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Feb  2 04:45:32 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:45:32 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:45:32 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:45:32.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:45:32 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:45:32 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df40025d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:45:32 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v166: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1023 B/s wr, 3 op/s
Feb  2 04:45:34 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:45:34 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df8001bb0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:45:34 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:45:34 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:45:34 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:45:34.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:45:34 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:45:34 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df8001bb0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:45:34 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:45:34 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:45:34 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:45:34.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:45:34 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:45:34 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e00002740 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:45:34 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:09:45:34] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Feb  2 04:45:34 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:09:45:34] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Feb  2 04:45:34 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-nfs-cephfs-compute-0-ooxkuo[97796]: [WARNING] 032/094534 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Feb  2 04:45:34 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v167: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 1.2 KiB/s wr, 4 op/s
Feb  2 04:45:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:45:36 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df40025d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:45:36 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:45:36 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:45:36 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:45:36.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:45:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:45:36 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e0c0020f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:45:36 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:45:36 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:45:36 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:45:36 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:45:36.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:45:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:45:36 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df8001bb0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:45:36 np0005604790 python3.9[122384]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  2 04:45:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:45:36.963Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:45:36 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v168: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 682 B/s wr, 2 op/s
Feb  2 04:45:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:45:38 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e00002740 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:45:38 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:45:38 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:45:38 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:45:38.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:45:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:45:38 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df40025d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:45:38 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:45:38 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:45:38 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:45:38.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 04:45:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:45:38 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e0c0020f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:45:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:45:38.864Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb  2 04:45:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:45:38.864Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb  2 04:45:38 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v169: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 682 B/s wr, 2 op/s
Feb  2 04:45:39 np0005604790 python3.9[122539]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Feb  2 04:45:40 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:45:40 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df8003820 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:45:40 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:45:40 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:45:40 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:45:40.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 04:45:40 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:45:40 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e00002740 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:45:40 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:45:40 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:45:40 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:45:40.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:45:40 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:45:40 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df40025d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:45:40 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v170: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 170 B/s wr, 0 op/s
Feb  2 04:45:41 np0005604790 python3.9[122693]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:45:41 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:45:41 np0005604790 python3.9[122771]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:45:42 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:45:42 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df40025d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:45:42 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:45:42 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:45:42 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:45:42.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:45:42 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:45:42 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df8003820 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:45:42 np0005604790 python3.9[122925]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:45:42 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:45:42 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:45:42 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:45:42.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:45:42 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:45:42 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e00002740 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:45:42 np0005604790 python3.9[123003]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:45:42 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v171: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 170 B/s wr, 0 op/s
Feb  2 04:45:44 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:45:44 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e0c0091b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:45:44 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:45:44 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:45:44 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:45:44.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:45:44 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:45:44 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df40025d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:45:44 np0005604790 python3.9[123157]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:45:44 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:45:44 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:45:44 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:45:44.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:45:44 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:45:44 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df8003820 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:45:44 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:09:45:44] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Feb  2 04:45:44 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:09:45:44] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Feb  2 04:45:44 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v172: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 170 B/s wr, 1 op/s
Feb  2 04:45:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:45:46 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e00002740 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:45:46 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:45:46 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:45:46 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:45:46.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:45:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:45:46 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e0c009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:45:46 np0005604790 python3.9[123311]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb  2 04:45:46 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:45:46 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:45:46 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:45:46 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:45:46.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:45:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:45:46 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df40025d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:45:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:45:46.964Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb  2 04:45:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:45:46.964Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb  2 04:45:46 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v173: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:45:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 04:45:47 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 04:45:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:45:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:45:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:45:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:45:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:45:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:45:47 np0005604790 python3.9[123395]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 04:45:48 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:45:48 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df40025d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:45:48 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:45:48 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:45:48 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:45:48.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:45:48 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:45:48 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e00002740 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:45:48 np0005604790 systemd[1]: session-42.scope: Deactivated successfully.
Feb  2 04:45:48 np0005604790 systemd[1]: session-42.scope: Consumed 23.081s CPU time.
Feb  2 04:45:48 np0005604790 systemd-logind[793]: Session 42 logged out. Waiting for processes to exit.
Feb  2 04:45:48 np0005604790 systemd-logind[793]: Removed session 42.
Feb  2 04:45:48 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:45:48 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:45:48 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:45:48.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:45:48 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:45:48 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e0c009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:45:48 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:45:48.866Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb  2 04:45:48 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:45:48.866Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:45:48 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v174: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:45:50 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:45:50 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e0c009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:45:50 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:45:50 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:45:50 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:45:50.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:45:50 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:45:50 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df40025d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:45:50 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:45:50 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:45:50 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:45:50.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:45:50 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:45:50 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e00002740 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:45:50 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v175: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:45:51 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:45:52 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:45:52 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e0c009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:45:52 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:45:52 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:45:52 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:45:52.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:45:52 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:45:52 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e0c009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:45:52 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:45:52 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:45:52 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:45:52.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:45:52 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:45:52 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df40025d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:45:52 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v176: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:45:53 np0005604790 systemd-logind[793]: New session 43 of user zuul.
Feb  2 04:45:53 np0005604790 systemd[1]: Started Session 43 of User zuul.
Feb  2 04:45:54 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:45:54 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e00002740 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:45:54 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:45:54 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:45:54 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:45:54.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:45:54 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:45:54 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df8004140 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:45:54 np0005604790 python3.9[123610]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:45:54 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:45:54 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:45:54 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:45:54.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:45:54 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:45:54 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e0c009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:45:54 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:09:45:54] "GET /metrics HTTP/1.1" 200 48330 "" "Prometheus/2.51.0"
Feb  2 04:45:54 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:09:45:54] "GET /metrics HTTP/1.1" 200 48330 "" "Prometheus/2.51.0"
Feb  2 04:45:54 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v177: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 04:45:55 np0005604790 python3.9[123762]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:45:55 np0005604790 python3.9[123841]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:45:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:45:56 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e00002740 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:45:56 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:45:56 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:45:56 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:45:56.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:45:56 np0005604790 systemd[1]: session-43.scope: Deactivated successfully.
Feb  2 04:45:56 np0005604790 systemd[1]: session-43.scope: Consumed 1.614s CPU time.
Feb  2 04:45:56 np0005604790 systemd-logind[793]: Session 43 logged out. Waiting for processes to exit.
Feb  2 04:45:56 np0005604790 systemd-logind[793]: Removed session 43.
Feb  2 04:45:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:45:56 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df40025d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:45:56 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:45:56 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:45:56 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:45:56 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:45:56.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:45:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:45:56 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df8004140 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:45:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:45:56.964Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:45:56 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v178: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:45:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:45:58 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e0c009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:45:58 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:45:58 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:45:58 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:45:58.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:45:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:45:58 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e00002740 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:45:58 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:45:58 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:45:58 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:45:58.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:45:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:45:58 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8de0000b60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:45:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:45:58.867Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:45:58 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v179: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:46:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:00 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df8004140 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:46:00 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:46:00 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:46:00 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:46:00.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:46:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:00 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e0c009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:46:00 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:46:00 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:46:00 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:46:00.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:46:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:00 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e00002740 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:46:00 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v180: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:46:01 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:46:01 np0005604790 systemd-logind[793]: New session 44 of user zuul.
Feb  2 04:46:01 np0005604790 systemd[1]: Started Session 44 of User zuul.
Feb  2 04:46:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:02 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e00002740 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:46:02 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:46:02 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:46:02 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:46:02.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:46:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 04:46:02 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 04:46:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:02 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df8004140 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:46:02 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:46:02 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:46:02 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:46:02.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:46:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:02 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e0c009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:46:02 np0005604790 python3.9[124029]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 04:46:02 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v181: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:46:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:04 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e0c009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:46:04 np0005604790 python3.9[124187]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:46:04 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:46:04 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:46:04 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:46:04.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:46:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:04 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8de00016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:46:04 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:46:04 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:46:04 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:46:04.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:46:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:04 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df8004140 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:46:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:09:46:04] "GET /metrics HTTP/1.1" 200 48330 "" "Prometheus/2.51.0"
Feb  2 04:46:04 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:09:46:04] "GET /metrics HTTP/1.1" 200 48330 "" "Prometheus/2.51.0"
Feb  2 04:46:04 np0005604790 python3.9[124362]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:46:04 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v182: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Feb  2 04:46:05 np0005604790 systemd[93258]: Created slice User Background Tasks Slice.
Feb  2 04:46:05 np0005604790 systemd[93258]: Starting Cleanup of User's Temporary Files and Directories...
Feb  2 04:46:05 np0005604790 systemd[93258]: Finished Cleanup of User's Temporary Files and Directories.
Feb  2 04:46:05 np0005604790 python3.9[124440]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.ckra48nn recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:46:05 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-nfs-cephfs-compute-0-ooxkuo[97796]: [WARNING] 032/094605 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Feb  2 04:46:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:06 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df8004140 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:46:06 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:46:06 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:46:06 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:46:06.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:46:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:06 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df8004140 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:46:06 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:46:06 np0005604790 python3.9[124595]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:46:06 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:46:06 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:46:06 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:46:06.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:46:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:06 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8de0001fc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:46:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:46:06.965Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:46:06 np0005604790 python3.9[124673]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.45csbq9w recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:46:06 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v183: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb  2 04:46:08 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:08 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8de0001fc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:46:08 np0005604790 python3.9[124827]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb  2 04:46:08 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:46:08 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:46:08 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:46:08.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:46:08 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:08 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e00002740 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:46:08 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:46:08 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:46:08 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:46:08.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:46:08 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:08 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8de0001fc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:46:08 np0005604790 python3.9[124979]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:46:08 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:46:08.868Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:46:08 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v184: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb  2 04:46:09 np0005604790 python3.9[125057]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 04:46:09 np0005604790 python3.9[125210]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:46:10 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:10 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8de0001fc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:46:10 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:46:10 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:46:10 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:46:10.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:46:10 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:10 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e0c009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:46:10 np0005604790 python3.9[125289]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 04:46:10 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:46:10 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:46:10 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:46:10.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:46:10 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:10 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e00002740 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:46:10 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v185: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb  2 04:46:11 np0005604790 python3.9[125441]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:46:11 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:46:12 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:12 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e00002740 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:46:12 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:46:12 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:46:12 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:46:12.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:46:12 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:12 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e00002740 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:46:12 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:46:12 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:46:12 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:46:12.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 04:46:12 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:12 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e0c009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:46:12 np0005604790 python3.9[125595]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:46:12 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-nfs-cephfs-compute-0-ooxkuo[97796]: [WARNING] 032/094612 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Feb  2 04:46:12 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v186: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb  2 04:46:14 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:14 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e00002740 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:46:14 np0005604790 python3.9[125765]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:46:14 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:46:14 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:46:14 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:46:14.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:46:14 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 04:46:14 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 04:46:14 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 04:46:14 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb  2 04:46:14 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v187: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 547 B/s rd, 91 B/s wr, 0 op/s
Feb  2 04:46:14 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 04:46:14 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:14 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e00002740 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:46:14 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:46:14 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Feb  2 04:46:14 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:46:14 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 04:46:14 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb  2 04:46:14 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 04:46:14 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb  2 04:46:14 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 04:46:14 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 04:46:14 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:46:14 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:46:14 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:46:14.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:46:14 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:14 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8de0001fc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:46:14 np0005604790 python3.9[125984]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:46:14 np0005604790 podman[126027]: 2026-02-02 09:46:14.812927332 +0000 UTC m=+0.042647531 container create fc8032ec0345ef7e4770fb7348275e5683dc80c5c09863c1edac59861589e320 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_black, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:46:14 np0005604790 systemd[1]: Started libpod-conmon-fc8032ec0345ef7e4770fb7348275e5683dc80c5c09863c1edac59861589e320.scope.
Feb  2 04:46:14 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:09:46:14] "GET /metrics HTTP/1.1" 200 48330 "" "Prometheus/2.51.0"
Feb  2 04:46:14 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:09:46:14] "GET /metrics HTTP/1.1" 200 48330 "" "Prometheus/2.51.0"
Feb  2 04:46:14 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:46:14 np0005604790 podman[126027]: 2026-02-02 09:46:14.793169124 +0000 UTC m=+0.022889303 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:46:14 np0005604790 podman[126027]: 2026-02-02 09:46:14.887748991 +0000 UTC m=+0.117469220 container init fc8032ec0345ef7e4770fb7348275e5683dc80c5c09863c1edac59861589e320 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_black, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Feb  2 04:46:14 np0005604790 podman[126027]: 2026-02-02 09:46:14.894134362 +0000 UTC m=+0.123854561 container start fc8032ec0345ef7e4770fb7348275e5683dc80c5c09863c1edac59861589e320 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 04:46:14 np0005604790 angry_black[126046]: 167 167
Feb  2 04:46:14 np0005604790 systemd[1]: libpod-fc8032ec0345ef7e4770fb7348275e5683dc80c5c09863c1edac59861589e320.scope: Deactivated successfully.
Feb  2 04:46:14 np0005604790 podman[126027]: 2026-02-02 09:46:14.898838428 +0000 UTC m=+0.128558627 container attach fc8032ec0345ef7e4770fb7348275e5683dc80c5c09863c1edac59861589e320 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_black, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 04:46:14 np0005604790 podman[126027]: 2026-02-02 09:46:14.899402703 +0000 UTC m=+0.129122892 container died fc8032ec0345ef7e4770fb7348275e5683dc80c5c09863c1edac59861589e320 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_black, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Feb  2 04:46:14 np0005604790 systemd[1]: var-lib-containers-storage-overlay-ddbb18233b8d7b533d958667cb50e5a91f2d3611b67d30672f384ce05502a7f1-merged.mount: Deactivated successfully.
Feb  2 04:46:14 np0005604790 podman[126027]: 2026-02-02 09:46:14.94682082 +0000 UTC m=+0.176541009 container remove fc8032ec0345ef7e4770fb7348275e5683dc80c5c09863c1edac59861589e320 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_black, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Feb  2 04:46:14 np0005604790 systemd[1]: libpod-conmon-fc8032ec0345ef7e4770fb7348275e5683dc80c5c09863c1edac59861589e320.scope: Deactivated successfully.
Feb  2 04:46:15 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:15 : epoch 6980722a : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 04:46:15 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb  2 04:46:15 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:46:15 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:46:15 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb  2 04:46:15 np0005604790 podman[126147]: 2026-02-02 09:46:15.141575764 +0000 UTC m=+0.055869434 container create e10b22d7403d15e792157b1128d0b1e6c550f52f6e5c235046b591901ca7a2f7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_visvesvaraya, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Feb  2 04:46:15 np0005604790 systemd[1]: Started libpod-conmon-e10b22d7403d15e792157b1128d0b1e6c550f52f6e5c235046b591901ca7a2f7.scope.
Feb  2 04:46:15 np0005604790 podman[126147]: 2026-02-02 09:46:15.121994031 +0000 UTC m=+0.036287691 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:46:15 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:46:15 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2f355b4c3b4dac2b33a303f4eb3368c3be824923d165662bde5600affe0723b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 04:46:15 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2f355b4c3b4dac2b33a303f4eb3368c3be824923d165662bde5600affe0723b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:46:15 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2f355b4c3b4dac2b33a303f4eb3368c3be824923d165662bde5600affe0723b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:46:15 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2f355b4c3b4dac2b33a303f4eb3368c3be824923d165662bde5600affe0723b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 04:46:15 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2f355b4c3b4dac2b33a303f4eb3368c3be824923d165662bde5600affe0723b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 04:46:15 np0005604790 podman[126147]: 2026-02-02 09:46:15.252908599 +0000 UTC m=+0.167202299 container init e10b22d7403d15e792157b1128d0b1e6c550f52f6e5c235046b591901ca7a2f7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_visvesvaraya, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:46:15 np0005604790 podman[126147]: 2026-02-02 09:46:15.265121406 +0000 UTC m=+0.179415066 container start e10b22d7403d15e792157b1128d0b1e6c550f52f6e5c235046b591901ca7a2f7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_visvesvaraya, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb  2 04:46:15 np0005604790 podman[126147]: 2026-02-02 09:46:15.276770107 +0000 UTC m=+0.191063837 container attach e10b22d7403d15e792157b1128d0b1e6c550f52f6e5c235046b591901ca7a2f7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_visvesvaraya, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 04:46:15 np0005604790 python3.9[126149]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:46:15 np0005604790 vigorous_visvesvaraya[126164]: --> passed data devices: 0 physical, 1 LVM
Feb  2 04:46:15 np0005604790 vigorous_visvesvaraya[126164]: --> All data devices are unavailable
Feb  2 04:46:15 np0005604790 systemd[1]: libpod-e10b22d7403d15e792157b1128d0b1e6c550f52f6e5c235046b591901ca7a2f7.scope: Deactivated successfully.
Feb  2 04:46:15 np0005604790 podman[126147]: 2026-02-02 09:46:15.626201855 +0000 UTC m=+0.540495525 container died e10b22d7403d15e792157b1128d0b1e6c550f52f6e5c235046b591901ca7a2f7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_visvesvaraya, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Feb  2 04:46:15 np0005604790 systemd[1]: var-lib-containers-storage-overlay-d2f355b4c3b4dac2b33a303f4eb3368c3be824923d165662bde5600affe0723b-merged.mount: Deactivated successfully.
Feb  2 04:46:15 np0005604790 podman[126147]: 2026-02-02 09:46:15.676965282 +0000 UTC m=+0.591258952 container remove e10b22d7403d15e792157b1128d0b1e6c550f52f6e5c235046b591901ca7a2f7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_visvesvaraya, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True)
Feb  2 04:46:15 np0005604790 systemd[1]: libpod-conmon-e10b22d7403d15e792157b1128d0b1e6c550f52f6e5c235046b591901ca7a2f7.scope: Deactivated successfully.
Feb  2 04:46:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:16 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e0c009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:46:16 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:46:16 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:46:16 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:46:16.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:46:16 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v188: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 364 B/s rd, 91 B/s wr, 0 op/s
Feb  2 04:46:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:16 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e0c009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:46:16 np0005604790 podman[126432]: 2026-02-02 09:46:16.306558446 +0000 UTC m=+0.053887971 container create 9a3bea287a3053fa187f709c4d664e5f14bf5c19f460a54ed1d0bef83b32e0d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_curie, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 04:46:16 np0005604790 systemd[1]: Started libpod-conmon-9a3bea287a3053fa187f709c4d664e5f14bf5c19f460a54ed1d0bef83b32e0d9.scope.
Feb  2 04:46:16 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:46:16 np0005604790 podman[126432]: 2026-02-02 09:46:16.377475111 +0000 UTC m=+0.124804656 container init 9a3bea287a3053fa187f709c4d664e5f14bf5c19f460a54ed1d0bef83b32e0d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_curie, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Feb  2 04:46:16 np0005604790 podman[126432]: 2026-02-02 09:46:16.285591756 +0000 UTC m=+0.032921331 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:46:16 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:46:16 np0005604790 podman[126432]: 2026-02-02 09:46:16.384455598 +0000 UTC m=+0.131785123 container start 9a3bea287a3053fa187f709c4d664e5f14bf5c19f460a54ed1d0bef83b32e0d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_curie, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1)
Feb  2 04:46:16 np0005604790 podman[126432]: 2026-02-02 09:46:16.387751526 +0000 UTC m=+0.135081071 container attach 9a3bea287a3053fa187f709c4d664e5f14bf5c19f460a54ed1d0bef83b32e0d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_curie, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 04:46:16 np0005604790 boring_curie[126449]: 167 167
Feb  2 04:46:16 np0005604790 systemd[1]: libpod-9a3bea287a3053fa187f709c4d664e5f14bf5c19f460a54ed1d0bef83b32e0d9.scope: Deactivated successfully.
Feb  2 04:46:16 np0005604790 conmon[126449]: conmon 9a3bea287a3053fa187f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9a3bea287a3053fa187f709c4d664e5f14bf5c19f460a54ed1d0bef83b32e0d9.scope/container/memory.events
Feb  2 04:46:16 np0005604790 podman[126432]: 2026-02-02 09:46:16.390125709 +0000 UTC m=+0.137455324 container died 9a3bea287a3053fa187f709c4d664e5f14bf5c19f460a54ed1d0bef83b32e0d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:46:16 np0005604790 systemd[1]: var-lib-containers-storage-overlay-2366b93749de1876852054051d87975e31124381bc442bf1fae3480745a07a38-merged.mount: Deactivated successfully.
Feb  2 04:46:16 np0005604790 podman[126432]: 2026-02-02 09:46:16.432578034 +0000 UTC m=+0.179907599 container remove 9a3bea287a3053fa187f709c4d664e5f14bf5c19f460a54ed1d0bef83b32e0d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_curie, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb  2 04:46:16 np0005604790 systemd[1]: libpod-conmon-9a3bea287a3053fa187f709c4d664e5f14bf5c19f460a54ed1d0bef83b32e0d9.scope: Deactivated successfully.
Feb  2 04:46:16 np0005604790 python3.9[126434]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 04:46:16 np0005604790 systemd[1]: Reloading.
Feb  2 04:46:16 np0005604790 podman[126473]: 2026-02-02 09:46:16.578804801 +0000 UTC m=+0.042396954 container create 1decb463c512b9c487c751c4d3be4ed6d400246bffe57361ae336cf0eb623764 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_sinoussi, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:46:16 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:46:16 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:46:16 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:46:16.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:46:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:16 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df8004140 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:46:16 np0005604790 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 04:46:16 np0005604790 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 04:46:16 np0005604790 podman[126473]: 2026-02-02 09:46:16.559886536 +0000 UTC m=+0.023478649 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:46:16 np0005604790 systemd[1]: Started libpod-conmon-1decb463c512b9c487c751c4d3be4ed6d400246bffe57361ae336cf0eb623764.scope.
Feb  2 04:46:16 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:46:16 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de30368e4f60e998979c1897fe92043711f9a66cf73627a9cf0a789bf5b5a39d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 04:46:16 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de30368e4f60e998979c1897fe92043711f9a66cf73627a9cf0a789bf5b5a39d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:46:16 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de30368e4f60e998979c1897fe92043711f9a66cf73627a9cf0a789bf5b5a39d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:46:16 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de30368e4f60e998979c1897fe92043711f9a66cf73627a9cf0a789bf5b5a39d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 04:46:16 np0005604790 podman[126473]: 2026-02-02 09:46:16.886787182 +0000 UTC m=+0.350379325 container init 1decb463c512b9c487c751c4d3be4ed6d400246bffe57361ae336cf0eb623764 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_sinoussi, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb  2 04:46:16 np0005604790 podman[126473]: 2026-02-02 09:46:16.896116781 +0000 UTC m=+0.359708914 container start 1decb463c512b9c487c751c4d3be4ed6d400246bffe57361ae336cf0eb623764 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_sinoussi, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Feb  2 04:46:16 np0005604790 podman[126473]: 2026-02-02 09:46:16.901072593 +0000 UTC m=+0.364664736 container attach 1decb463c512b9c487c751c4d3be4ed6d400246bffe57361ae336cf0eb623764 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_sinoussi, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid)
Feb  2 04:46:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:46:16.966Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:46:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] Optimize plan auto_2026-02-02_09:46:17
Feb  2 04:46:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 04:46:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] do_upmap
Feb  2 04:46:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] pools ['cephfs.cephfs.data', 'images', 'vms', 'default.rgw.control', '.nfs', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.meta', 'backups', 'default.rgw.meta', '.mgr', 'volumes']
Feb  2 04:46:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 04:46:17 np0005604790 recursing_sinoussi[126526]: {
Feb  2 04:46:17 np0005604790 recursing_sinoussi[126526]:    "1": [
Feb  2 04:46:17 np0005604790 recursing_sinoussi[126526]:        {
Feb  2 04:46:17 np0005604790 recursing_sinoussi[126526]:            "devices": [
Feb  2 04:46:17 np0005604790 recursing_sinoussi[126526]:                "/dev/loop3"
Feb  2 04:46:17 np0005604790 recursing_sinoussi[126526]:            ],
Feb  2 04:46:17 np0005604790 recursing_sinoussi[126526]:            "lv_name": "ceph_lv0",
Feb  2 04:46:17 np0005604790 recursing_sinoussi[126526]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 04:46:17 np0005604790 recursing_sinoussi[126526]:            "lv_size": "21470642176",
Feb  2 04:46:17 np0005604790 recursing_sinoussi[126526]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d241d473-9fcb-5f74-b163-f1ca4454e7f1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fabfc705-a3af-416c-81a4-3fd4d777fb5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 04:46:17 np0005604790 recursing_sinoussi[126526]:            "lv_uuid": "lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a",
Feb  2 04:46:17 np0005604790 recursing_sinoussi[126526]:            "name": "ceph_lv0",
Feb  2 04:46:17 np0005604790 recursing_sinoussi[126526]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 04:46:17 np0005604790 recursing_sinoussi[126526]:            "tags": {
Feb  2 04:46:17 np0005604790 recursing_sinoussi[126526]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 04:46:17 np0005604790 recursing_sinoussi[126526]:                "ceph.block_uuid": "lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a",
Feb  2 04:46:17 np0005604790 recursing_sinoussi[126526]:                "ceph.cephx_lockbox_secret": "",
Feb  2 04:46:17 np0005604790 recursing_sinoussi[126526]:                "ceph.cluster_fsid": "d241d473-9fcb-5f74-b163-f1ca4454e7f1",
Feb  2 04:46:17 np0005604790 recursing_sinoussi[126526]:                "ceph.cluster_name": "ceph",
Feb  2 04:46:17 np0005604790 recursing_sinoussi[126526]:                "ceph.crush_device_class": "",
Feb  2 04:46:17 np0005604790 recursing_sinoussi[126526]:                "ceph.encrypted": "0",
Feb  2 04:46:17 np0005604790 recursing_sinoussi[126526]:                "ceph.osd_fsid": "fabfc705-a3af-416c-81a4-3fd4d777fb5f",
Feb  2 04:46:17 np0005604790 recursing_sinoussi[126526]:                "ceph.osd_id": "1",
Feb  2 04:46:17 np0005604790 recursing_sinoussi[126526]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 04:46:17 np0005604790 recursing_sinoussi[126526]:                "ceph.type": "block",
Feb  2 04:46:17 np0005604790 recursing_sinoussi[126526]:                "ceph.vdo": "0",
Feb  2 04:46:17 np0005604790 recursing_sinoussi[126526]:                "ceph.with_tpm": "0"
Feb  2 04:46:17 np0005604790 recursing_sinoussi[126526]:            },
Feb  2 04:46:17 np0005604790 recursing_sinoussi[126526]:            "type": "block",
Feb  2 04:46:17 np0005604790 recursing_sinoussi[126526]:            "vg_name": "ceph_vg0"
Feb  2 04:46:17 np0005604790 recursing_sinoussi[126526]:        }
Feb  2 04:46:17 np0005604790 recursing_sinoussi[126526]:    ]
Feb  2 04:46:17 np0005604790 recursing_sinoussi[126526]: }
Feb  2 04:46:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 04:46:17 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 04:46:17 np0005604790 podman[126473]: 2026-02-02 09:46:17.189640934 +0000 UTC m=+0.653233067 container died 1decb463c512b9c487c751c4d3be4ed6d400246bffe57361ae336cf0eb623764 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_sinoussi, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 04:46:17 np0005604790 systemd[1]: libpod-1decb463c512b9c487c751c4d3be4ed6d400246bffe57361ae336cf0eb623764.scope: Deactivated successfully.
Feb  2 04:46:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 04:46:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:46:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 04:46:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:46:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 04:46:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:46:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 04:46:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:46:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 04:46:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:46:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 04:46:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:46:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Feb  2 04:46:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:46:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 04:46:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:46:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 04:46:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:46:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Feb  2 04:46:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:46:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 04:46:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:46:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 04:46:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:46:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb  2 04:46:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:46:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:46:17 np0005604790 systemd[1]: var-lib-containers-storage-overlay-de30368e4f60e998979c1897fe92043711f9a66cf73627a9cf0a789bf5b5a39d-merged.mount: Deactivated successfully.
Feb  2 04:46:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:46:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:46:17 np0005604790 podman[126473]: 2026-02-02 09:46:17.23964166 +0000 UTC m=+0.703233763 container remove 1decb463c512b9c487c751c4d3be4ed6d400246bffe57361ae336cf0eb623764 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_sinoussi, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb  2 04:46:17 np0005604790 systemd[1]: libpod-conmon-1decb463c512b9c487c751c4d3be4ed6d400246bffe57361ae336cf0eb623764.scope: Deactivated successfully.
Feb  2 04:46:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 04:46:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 04:46:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 04:46:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 04:46:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 04:46:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:46:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:46:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 04:46:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 04:46:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 04:46:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 04:46:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 04:46:17 np0005604790 python3.9[126746]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:46:17 np0005604790 podman[126816]: 2026-02-02 09:46:17.803660742 +0000 UTC m=+0.037751940 container create 20080d069c274cfe1c0202010376a2e27c99615d4aeff904f8395ccd2d40fc37 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_shamir, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 04:46:17 np0005604790 systemd[1]: Started libpod-conmon-20080d069c274cfe1c0202010376a2e27c99615d4aeff904f8395ccd2d40fc37.scope.
Feb  2 04:46:17 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:46:17 np0005604790 podman[126816]: 2026-02-02 09:46:17.787995194 +0000 UTC m=+0.022086412 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:46:17 np0005604790 podman[126816]: 2026-02-02 09:46:17.891033727 +0000 UTC m=+0.125125025 container init 20080d069c274cfe1c0202010376a2e27c99615d4aeff904f8395ccd2d40fc37 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_shamir, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb  2 04:46:17 np0005604790 podman[126816]: 2026-02-02 09:46:17.898926578 +0000 UTC m=+0.133017826 container start 20080d069c274cfe1c0202010376a2e27c99615d4aeff904f8395ccd2d40fc37 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_shamir, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:46:17 np0005604790 podman[126816]: 2026-02-02 09:46:17.903417208 +0000 UTC m=+0.137508446 container attach 20080d069c274cfe1c0202010376a2e27c99615d4aeff904f8395ccd2d40fc37 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_shamir, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True)
Feb  2 04:46:17 np0005604790 systemd[1]: libpod-20080d069c274cfe1c0202010376a2e27c99615d4aeff904f8395ccd2d40fc37.scope: Deactivated successfully.
Feb  2 04:46:17 np0005604790 gifted_shamir[126872]: 167 167
Feb  2 04:46:17 np0005604790 conmon[126872]: conmon 20080d069c274cfe1c02 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-20080d069c274cfe1c0202010376a2e27c99615d4aeff904f8395ccd2d40fc37.scope/container/memory.events
Feb  2 04:46:17 np0005604790 podman[126816]: 2026-02-02 09:46:17.905649208 +0000 UTC m=+0.139740446 container died 20080d069c274cfe1c0202010376a2e27c99615d4aeff904f8395ccd2d40fc37 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_shamir, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:46:17 np0005604790 systemd[1]: var-lib-containers-storage-overlay-50c1678dc813ac31e57722cd7d014322d9d8813cb06700ed0fa844ca535ad4d3-merged.mount: Deactivated successfully.
Feb  2 04:46:17 np0005604790 podman[126816]: 2026-02-02 09:46:17.95362066 +0000 UTC m=+0.187711898 container remove 20080d069c274cfe1c0202010376a2e27c99615d4aeff904f8395ccd2d40fc37 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_shamir, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb  2 04:46:17 np0005604790 systemd[1]: libpod-conmon-20080d069c274cfe1c0202010376a2e27c99615d4aeff904f8395ccd2d40fc37.scope: Deactivated successfully.
Feb  2 04:46:18 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:18 : epoch 6980722a : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 04:46:18 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:18 : epoch 6980722a : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 04:46:18 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:18 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8de0003ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:46:18 np0005604790 python3.9[126888]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:46:18 np0005604790 podman[126909]: 2026-02-02 09:46:18.131720489 +0000 UTC m=+0.055874624 container create 02abeff629f29e646c5515e47e5b55e8a1fd829f9c153d1e8ec2557fd797ae62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_clarke, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 04:46:18 np0005604790 systemd[1]: Started libpod-conmon-02abeff629f29e646c5515e47e5b55e8a1fd829f9c153d1e8ec2557fd797ae62.scope.
Feb  2 04:46:18 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:46:18 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:46:18 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:46:18.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:46:18 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:46:18 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/917fd70932f0c2d72458c1024d9691602fbbf3840fad418970d605970641c3fb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 04:46:18 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/917fd70932f0c2d72458c1024d9691602fbbf3840fad418970d605970641c3fb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:46:18 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/917fd70932f0c2d72458c1024d9691602fbbf3840fad418970d605970641c3fb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:46:18 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/917fd70932f0c2d72458c1024d9691602fbbf3840fad418970d605970641c3fb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 04:46:18 np0005604790 podman[126909]: 2026-02-02 09:46:18.108430327 +0000 UTC m=+0.032584502 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:46:18 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v189: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 456 B/s wr, 1 op/s
Feb  2 04:46:18 np0005604790 podman[126909]: 2026-02-02 09:46:18.220272095 +0000 UTC m=+0.144426230 container init 02abeff629f29e646c5515e47e5b55e8a1fd829f9c153d1e8ec2557fd797ae62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_clarke, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 04:46:18 np0005604790 podman[126909]: 2026-02-02 09:46:18.232242555 +0000 UTC m=+0.156396680 container start 02abeff629f29e646c5515e47e5b55e8a1fd829f9c153d1e8ec2557fd797ae62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_clarke, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:46:18 np0005604790 podman[126909]: 2026-02-02 09:46:18.237252569 +0000 UTC m=+0.161406704 container attach 02abeff629f29e646c5515e47e5b55e8a1fd829f9c153d1e8ec2557fd797ae62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_clarke, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:46:18 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:18 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e00002740 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:46:18 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:46:18 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:46:18 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:46:18.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:46:18 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:18 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e0c009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:46:18 np0005604790 python3.9[127105]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:46:18 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:46:18.869Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:46:18 np0005604790 lvm[127157]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 04:46:18 np0005604790 lvm[127157]: VG ceph_vg0 finished
Feb  2 04:46:18 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:18 : epoch 6980722a : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 04:46:19 np0005604790 modest_clarke[126948]: {}
Feb  2 04:46:19 np0005604790 systemd[1]: libpod-02abeff629f29e646c5515e47e5b55e8a1fd829f9c153d1e8ec2557fd797ae62.scope: Deactivated successfully.
Feb  2 04:46:19 np0005604790 systemd[1]: libpod-02abeff629f29e646c5515e47e5b55e8a1fd829f9c153d1e8ec2557fd797ae62.scope: Consumed 1.059s CPU time.
Feb  2 04:46:19 np0005604790 podman[126909]: 2026-02-02 09:46:19.043908085 +0000 UTC m=+0.968062270 container died 02abeff629f29e646c5515e47e5b55e8a1fd829f9c153d1e8ec2557fd797ae62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_clarke, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb  2 04:46:19 np0005604790 systemd[1]: var-lib-containers-storage-overlay-917fd70932f0c2d72458c1024d9691602fbbf3840fad418970d605970641c3fb-merged.mount: Deactivated successfully.
Feb  2 04:46:19 np0005604790 podman[126909]: 2026-02-02 09:46:19.095244007 +0000 UTC m=+1.019398132 container remove 02abeff629f29e646c5515e47e5b55e8a1fd829f9c153d1e8ec2557fd797ae62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_clarke, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:46:19 np0005604790 systemd[1]: libpod-conmon-02abeff629f29e646c5515e47e5b55e8a1fd829f9c153d1e8ec2557fd797ae62.scope: Deactivated successfully.
Feb  2 04:46:19 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 04:46:19 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:46:19 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 04:46:19 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:46:19 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:46:19 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:46:19 np0005604790 python3.9[127243]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:46:20 np0005604790 python3.9[127421]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 04:46:20 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:20 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df8004160 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:46:20 np0005604790 systemd[1]: Reloading.
Feb  2 04:46:20 np0005604790 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 04:46:20 np0005604790 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 04:46:20 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:46:20 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:46:20 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:46:20.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:46:20 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v190: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 456 B/s wr, 1 op/s
Feb  2 04:46:20 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:20 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8de0003ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:46:20 np0005604790 systemd[1]: Starting Create netns directory...
Feb  2 04:46:20 np0005604790 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Feb  2 04:46:20 np0005604790 systemd[1]: netns-placeholder.service: Deactivated successfully.
Feb  2 04:46:20 np0005604790 systemd[1]: Finished Create netns directory.
Feb  2 04:46:20 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:46:20 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:46:20 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:46:20.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 04:46:20 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:20 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e00002740 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:46:21 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:46:21 np0005604790 python3.9[127614]: ansible-ansible.builtin.service_facts Invoked
Feb  2 04:46:21 np0005604790 network[127632]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb  2 04:46:21 np0005604790 network[127633]: 'network-scripts' will be removed from distribution in near future.
Feb  2 04:46:21 np0005604790 network[127634]: It is advised to switch to 'NetworkManager' instead for network management.
Feb  2 04:46:22 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:22 : epoch 6980722a : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Feb  2 04:46:22 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:22 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e0c009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:46:22 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:46:22 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:46:22 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:46:22.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:46:22 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v191: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 456 B/s wr, 1 op/s
Feb  2 04:46:22 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:22 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df8004180 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:46:22 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:46:22 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:46:22 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:46:22.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:46:22 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:22 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8de0003ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:46:24 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:24 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e00002740 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:46:24 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:46:24 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:46:24 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:46:24.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 04:46:24 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v192: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Feb  2 04:46:24 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:24 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e0c009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:46:24 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:46:24 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:46:24 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:46:24.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:46:24 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:24 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df80041a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:46:24 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:09:46:24] "GET /metrics HTTP/1.1" 200 48334 "" "Prometheus/2.51.0"
Feb  2 04:46:24 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:09:46:24] "GET /metrics HTTP/1.1" 200 48334 "" "Prometheus/2.51.0"
Feb  2 04:46:25 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:25 : epoch 6980722a : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 04:46:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:26 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8de0003ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:46:26 np0005604790 python3.9[127901]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:46:26 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:46:26 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:46:26 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:46:26.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:46:26 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v193: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Feb  2 04:46:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:26 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e00002740 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:46:26 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:46:26 np0005604790 python3.9[127979]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:46:26 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:46:26 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:46:26 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:46:26.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:46:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:26 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e0c009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:46:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:46:26.969Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb  2 04:46:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:46:26.969Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb  2 04:46:27 np0005604790 python3.9[128131]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:46:27 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-nfs-cephfs-compute-0-ooxkuo[97796]: [WARNING] 032/094627 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Feb  2 04:46:27 np0005604790 python3.9[128284]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:46:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:28 : epoch 6980722a : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 04:46:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:28 : epoch 6980722a : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 04:46:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:28 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df80041c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:46:28 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:46:28 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:46:28 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:46:28.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:46:28 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v194: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.9 KiB/s rd, 1.7 KiB/s wr, 5 op/s
Feb  2 04:46:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:28 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8de0003ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:46:28 np0005604790 python3.9[128363]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:46:28 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:46:28 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:46:28 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:46:28.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:46:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:28 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e00002740 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:46:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:46:28.872Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:46:29 np0005604790 python3.9[128515]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Feb  2 04:46:29 np0005604790 systemd[1]: Starting Time & Date Service...
Feb  2 04:46:29 np0005604790 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb  2 04:46:29 np0005604790 systemd[1]: Started Time & Date Service.
Feb  2 04:46:30 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:30 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e0c009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:46:30 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:46:30 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:46:30 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:46:30.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:46:30 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v195: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 1.3 KiB/s wr, 4 op/s
Feb  2 04:46:30 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:30 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df80041c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:46:30 np0005604790 python3.9[128676]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:46:30 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:46:30 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:46:30 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:46:30.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:46:30 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:30 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8de4000b60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:46:31 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:31 : epoch 6980722a : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Feb  2 04:46:31 np0005604790 python3.9[128828]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:46:31 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:46:31 np0005604790 python3.9[128907]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:46:32 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:32 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e00002740 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:46:32 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 04:46:32 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 04:46:32 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:46:32 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:46:32 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:46:32.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:46:32 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v196: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 1.3 KiB/s wr, 4 op/s
Feb  2 04:46:32 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:32 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e0c009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:46:32 np0005604790 python3.9[129060]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:46:32 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:46:32 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:46:32 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:46:32.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:46:32 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:32 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df80041e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:46:32 np0005604790 python3.9[129138]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.4oijq0p2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:46:33 np0005604790 python3.9[129317]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:46:34 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:34 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8de40016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:46:34 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:46:34 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:46:34 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:46:34.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:46:34 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v197: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 1.3 KiB/s wr, 4 op/s
Feb  2 04:46:34 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:34 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e00002740 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:46:34 np0005604790 python3.9[129395]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:46:34 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:46:34 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:46:34 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:46:34.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:46:34 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:34 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e0c009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:46:34 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:09:46:34] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Feb  2 04:46:34 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:09:46:34] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Feb  2 04:46:34 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-nfs-cephfs-compute-0-ooxkuo[97796]: [WARNING] 032/094634 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Feb  2 04:46:35 np0005604790 python3.9[129547]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:46:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:36 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df8004200 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:46:36 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:46:36 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:46:36 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:46:36.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:46:36 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v198: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 2 op/s
Feb  2 04:46:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:36 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8de40016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:46:36 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:46:36 np0005604790 python3[129702]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Feb  2 04:46:36 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:46:36 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:46:36 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:46:36.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:46:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:36 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e00002740 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:46:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:46:36.970Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:46:37 np0005604790 python3.9[129854]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:46:37 np0005604790 python3.9[129933]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:46:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:38 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e0c009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:46:38 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:46:38 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:46:38 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:46:38.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:46:38 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v199: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 2 op/s
Feb  2 04:46:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:38 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df8004220 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:46:38 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:46:38 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:46:38 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:46:38.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:46:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:38 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8de40016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:46:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:46:38.873Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:46:38 np0005604790 python3.9[130086]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:46:39 np0005604790 python3.9[130211]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770025598.1030452-894-104791828943311/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:46:40 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:40 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e00002740 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:46:40 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:46:40 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:46:40 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:46:40.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 04:46:40 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v200: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Feb  2 04:46:40 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:40 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e0c009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:46:40 np0005604790 python3.9[130365]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:46:40 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:46:40 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:46:40 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:46:40.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:46:40 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:40 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df8004240 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:46:40 np0005604790 python3.9[130443]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:46:41 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:46:41 np0005604790 python3.9[130595]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:46:42 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:42 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8de4002b10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:46:42 np0005604790 python3.9[130675]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:46:42 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:46:42 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:46:42 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:46:42.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:46:42 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v201: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Feb  2 04:46:42 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:42 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8de4002b10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:46:42 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:46:42 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:46:42 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:46:42.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:46:42 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:42 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e0c009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:46:43 np0005604790 python3.9[130827]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:46:43 np0005604790 python3.9[130905]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:46:44 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:44 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df8004260 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:46:44 np0005604790 python3.9[131059]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:46:44 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:46:44 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:46:44 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:46:44.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:46:44 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v202: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Feb  2 04:46:44 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:44 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8de4002b10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:46:44 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:46:44 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:46:44 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:46:44.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:46:44 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:44 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8de4002b10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:46:44 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:09:46:44] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Feb  2 04:46:44 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:09:46:44] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Feb  2 04:46:44 np0005604790 python3.9[131214]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:46:45 np0005604790 python3.9[131367]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:46:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:46 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e0c009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:46:46 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:46:46 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:46:46 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:46:46.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:46:46 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v203: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:46:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:46 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df8004280 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:46:46 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:46:46 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:46:46 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:46:46 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:46:46.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:46:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:46 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df8004280 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:46:46 np0005604790 python3.9[131520]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:46:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:46:46.971Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb  2 04:46:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:46:46.972Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb  2 04:46:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:46:46.972Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:46:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 04:46:47 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 04:46:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:46:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:46:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:46:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:46:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:46:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:46:47 np0005604790 python3.9[131672]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Feb  2 04:46:48 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:48 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df8004280 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:46:48 np0005604790 python3.9[131826]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Feb  2 04:46:48 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:46:48 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:46:48 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:46:48.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:46:48 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v204: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:46:48 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:48 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e0c009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:46:48 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:46:48 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:46:48 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:46:48.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:46:48 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:48 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df8004280 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:46:48 np0005604790 systemd-logind[793]: Session 44 logged out. Waiting for processes to exit.
Feb  2 04:46:48 np0005604790 systemd[1]: session-44.scope: Deactivated successfully.
Feb  2 04:46:48 np0005604790 systemd[1]: session-44.scope: Consumed 28.524s CPU time.
Feb  2 04:46:48 np0005604790 systemd-logind[793]: Removed session 44.
Feb  2 04:46:48 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:46:48.873Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:46:50 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:50 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df8004280 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:46:50 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:46:50 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:46:50 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:46:50.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:46:50 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v205: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:46:50 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:50 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df8004280 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:46:50 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:46:50 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:46:50 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:46:50.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:46:50 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:50 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e0c009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:46:51 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:46:52 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:52 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e00004410 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:46:52 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:46:52 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:46:52 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:46:52.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:46:52 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v206: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:46:52 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:52 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8de4003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:46:52 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:46:52 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:46:52 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:46:52.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:46:52 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:52 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8de4003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:46:54 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:54 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e0c009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:46:54 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:46:54 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:46:54 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:46:54.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 04:46:54 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v207: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 04:46:54 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:54 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e0c009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:46:54 np0005604790 systemd-logind[793]: New session 45 of user zuul.
Feb  2 04:46:54 np0005604790 systemd[1]: Started Session 45 of User zuul.
Feb  2 04:46:54 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:46:54 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:46:54 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:46:54.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:46:54 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:54 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df80042a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:46:54 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:09:46:54] "GET /metrics HTTP/1.1" 200 48329 "" "Prometheus/2.51.0"
Feb  2 04:46:54 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:09:46:54] "GET /metrics HTTP/1.1" 200 48329 "" "Prometheus/2.51.0"
Feb  2 04:46:55 np0005604790 python3.9[132037]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Feb  2 04:46:55 np0005604790 python3.9[132190]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 04:46:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:56 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8de4003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:46:56 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:46:56 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:46:56 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:46:56.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:46:56 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v208: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:46:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:56 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8de4003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:46:56 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:46:56 np0005604790 python3.9[132345]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Feb  2 04:46:56 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:46:56 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:46:56 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:46:56.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:46:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:56 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e0c009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:46:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:46:56.973Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:46:57 np0005604790 python3.9[132497]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.kbeh112e follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:46:57 np0005604790 python3.9[132623]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.kbeh112e mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770025616.817698-102-201613623385462/.source.kbeh112e _original_basename=.tnsfd3xc follow=False checksum=0e76d40d6d80e8dcbe1329e9f4d8b9bf39ee9960 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:46:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:58 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df80042c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:46:58 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:46:58 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:46:58 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:46:58.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:46:58 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v209: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:46:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:58 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8de4003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:46:58 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:46:58 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:46:58 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:46:58.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:46:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:46:58 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e00004410 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:46:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:46:58.874Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb  2 04:46:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:46:58.875Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb  2 04:46:59 np0005604790 python3.9[132776]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 04:46:59 np0005604790 systemd[1]: session-18.scope: Deactivated successfully.
Feb  2 04:46:59 np0005604790 systemd[1]: session-18.scope: Consumed 1min 29.808s CPU time.
Feb  2 04:46:59 np0005604790 systemd-logind[793]: Session 18 logged out. Waiting for processes to exit.
Feb  2 04:46:59 np0005604790 systemd-logind[793]: Removed session 18.
Feb  2 04:46:59 np0005604790 systemd[1]: systemd-timedated.service: Deactivated successfully.
Feb  2 04:46:59 np0005604790 python3.9[132930]: ansible-ansible.builtin.blockinfile Invoked with block=compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCpaaLVd9Gqbxcksz46sKNkp3Eu2TY3fUjtOhbkLQru93qJt/RNDTocNiUrE9VAj/UXp9dZqSHg1Hr7ScqXu7zqgZ9i+mq6N7P7QR+ZkN8jLQSybnPztI7X/QWaPhT0j1ArMrYk2F2Me+kAQiFL0GoR2d8udRElL8YKKIYQ6zjC/h2ZsU0WyVET9uiTgeMP/njtMzRSgO2Wp6no4KqJEOMSEY1lgURjVsMWkTr4hGz523SooA41GzquuNamnj1ELwKZSAH+TtVgI8oFJ2T+5TZiE/oW2MizbBwjKA3V5DlnGOEG49eG+LhZ/eWb6jQ7OnJARA/iLU/FsJ+CaGSbRK20/OWXP4JSZu7liaD0DIHM0DwrjEnQcXI6SbfAoAQ494KFtZvFamem7CPtrVhgNAKqybRbDcEQGpDxQgrWeA3m4HyGIBym+IvMUfYlNke9frCkwNpXRH93TK6E/ziPFrBHKkdRcFxVdsG2u1Y+adxOQk7KCjq/skzXBPCPDaHnzBM=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIKtQmhiX/LRkxZONUn47u07V1HNePVW1EWKmTbmuGuY#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBE0cPV3BwiB9Cc5Ne48bCCSZwMzF/hH7iFXwAiP/TK2pzWYsdZw1mOSJ+vDu1KclkDtQKmwN6Cu0N7j7domqlzE=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDXvxaVTYbHTHv+9EzKdF3T8+Yr2otW2YLuSqNTF+yJaKACfB7wDlIhKDGTHiU1FDrkO4tJ+R3OL/2ZXoIlxp5JSdCgcb42X+5PTj1wPkayVlQW7e0wQvT3kYhrcPtjLgk4T39/sionMGYUat45idwoB6hUSPLdk/L5+n0/3LEg1lByOM/B1/p8wGzHn6H9CWoIP3Ctd6lmrxtIVU1u+pxiBVQCcMjw5gtqsB54l670fL7El5XEkqjRjKHhylw9QTYN3AWMKuQKwcjClm/57/SoFMP7o52r653wGDH9cpvDgs0RYG4bA1mGY5OMkYbDJfcy0CViKEu5qWW4cTBLh/Z88D2EuNlINj3Q1YJk3RwF6vYl31MMsbBW10YhIiBJrA5XF0BLARqBOZ1e6v7JKTSwa7wGGtRzEzbY+me9zl6ZhhDru/I+h24J4MeBA07HvQIS2v8O95tPz76YZJ3DkWlywFWbALG8M4+fkpuQtvVpBZMgdvIWW0kfXO/grGnrgY8=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIG3OEs+fDFWrKRKifY4uXYtOpS/6/8E88qPQNs1apj/z#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFy9hRh0QDNcy30491f4FwmL+9BopSuPxbkVyWhY9VytT/FG5rm9/DLYyukpd9IKttcZyerq0gzfokDrht76FB4=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDTA16t8OsOL4s99BOiNF3vckRPwnc9DwrgEMUjNAF5ofBbR7O7JlFD47GnI33lZr51vVc0wnvTxhpFA0jVvhKqVWdJ3lApNf34bJmaJBr8uiy/i3Q84MsUtXBLQ0FDCbwgaPnreNbMz3ae+u9H+Z73jQSP+gnQ5oYWhONHgO4HHkF8K7a8Bow3H5qwfbHz8o7mFQmTpYHwOcwhA53BTbh1NiEJZJNSg7wi1hH7vELUAzts1cbF2slTE0nh8XjMogq9ukokrCIKfE+xX7PmAawCuMnfvGX93zF1298pGcUKqvpnIfUOMDGtJtYEZ8sWsr5aH1YXIoJfHuux/YosRx3XDD5oEcpX0nYKVW6bumHsFIS199XAM5LtWWNr2eMcrbZhVwHNdELC6zoL7QjbBQ+2j/+8nJLq9vIghewgO3EFWK3r7kIVQZg8GYLZ/yisH4cvzUTACRXAF+1o2rq+AUfX3nTSsrqyZQUwlnWpc1vsceEO0Lsuac5tvGylnsJBfmM=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIN317jbKb2FNELHPgcKtyDLq5kCgCZN/b/8qYDuirt4l#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNpgfrlTfGut7rGFnGEpIiXrs2U1SQK0Fr1bAmmw8notvdnn6jtGfPfwX96hGwcOu4AlAS/i7X7XgbLw573Ooww=#012 create=True mode=0644 path=/tmp/ansible.kbeh112e state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:47:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:47:00 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e0c009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:47:00 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:47:00 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:47:00 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:47:00.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:47:00 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v210: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:47:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:47:00 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df80042e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:47:00 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:47:00 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:47:00 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:47:00.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 04:47:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:47:00 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8de0000ec0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:47:00 np0005604790 python3.9[133085]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.kbeh112e' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:47:01 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:47:01 np0005604790 python3.9[133239]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.kbeh112e state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:47:02 np0005604790 systemd[1]: session-45.scope: Deactivated successfully.
Feb  2 04:47:02 np0005604790 systemd[1]: session-45.scope: Consumed 4.925s CPU time.
Feb  2 04:47:02 np0005604790 systemd-logind[793]: Session 45 logged out. Waiting for processes to exit.
Feb  2 04:47:02 np0005604790 systemd-logind[793]: Removed session 45.
Feb  2 04:47:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:47:02 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8de0000ec0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:47:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 04:47:02 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 04:47:02 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:47:02 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:47:02 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:47:02.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:47:02 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v211: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:47:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:47:02 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e0c009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:47:02 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:47:02 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:47:02 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:47:02.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:47:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:47:02 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df4000b60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:47:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:47:04 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df80042e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:47:04 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:47:04 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:47:04 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:47:04.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:47:04 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v212: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 04:47:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:47:04 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8de4003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:47:04 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:47:04 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:47:04 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:47:04.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:47:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:47:04 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e0c009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:47:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:09:47:04] "GET /metrics HTTP/1.1" 200 48329 "" "Prometheus/2.51.0"
Feb  2 04:47:04 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:09:47:04] "GET /metrics HTTP/1.1" 200 48329 "" "Prometheus/2.51.0"
Feb  2 04:47:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:47:06 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e0c009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:47:06 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:47:06 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:47:06 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:47:06.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:47:06 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v213: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:47:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:47:06 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8dd8000b60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:47:06 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:47:06 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:47:06 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:47:06 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:47:06.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:47:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:47:06 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df80042e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:47:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:47:06.973Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:47:07 np0005604790 systemd-logind[793]: New session 46 of user zuul.
Feb  2 04:47:07 np0005604790 systemd[1]: Started Session 46 of User zuul.
Feb  2 04:47:08 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:47:08 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df80042e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:47:08 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:47:08 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:47:08 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:47:08.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:47:08 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v214: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:47:08 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:47:08 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e0c009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:47:08 np0005604790 python3.9[133427]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 04:47:08 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:47:08 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:47:08 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:47:08.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:47:08 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:47:08 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e0c009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:47:08 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:47:08.877Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:47:09 np0005604790 python3.9[133583]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Feb  2 04:47:10 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:47:10 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e0c009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:47:10 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:47:10 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:47:10 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:47:10.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:47:10 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v215: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:47:10 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:47:10 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e0c009ad0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:47:10 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:47:10 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:47:10 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:47:10.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:47:10 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:47:10 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df80042e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:47:10 np0005604790 python3.9[133739]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb  2 04:47:11 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:47:12 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:47:12 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8dd80016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:47:12 np0005604790 python3.9[133894]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:47:12 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:47:12 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v216: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:47:12 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:47:12 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:47:12.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:47:12 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:47:12 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8dd80016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:47:12 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:47:12 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.002000054s ======
Feb  2 04:47:12 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:47:12.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Feb  2 04:47:12 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:47:12 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e0c00a7e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:47:13 np0005604790 python3.9[134047]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 04:47:14 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v217: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 04:47:14 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:47:14 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df80042e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:47:14 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:47:14 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:47:14 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8dd80016a0 fd 50 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:47:14 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.002000055s ======
Feb  2 04:47:14 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:47:14.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000055s
Feb  2 04:47:14 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:47:14 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:47:14 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:47:14.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:47:14 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:47:14 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df80042e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:47:14 np0005604790 python3.9[134228]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:47:14 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:09:47:14] "GET /metrics HTTP/1.1" 200 48329 "" "Prometheus/2.51.0"
Feb  2 04:47:14 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:09:47:14] "GET /metrics HTTP/1.1" 200 48329 "" "Prometheus/2.51.0"
Feb  2 04:47:15 np0005604790 systemd[1]: session-46.scope: Deactivated successfully.
Feb  2 04:47:15 np0005604790 systemd[1]: session-46.scope: Consumed 4.061s CPU time.
Feb  2 04:47:15 np0005604790 systemd-logind[793]: Session 46 logged out. Waiting for processes to exit.
Feb  2 04:47:15 np0005604790 systemd-logind[793]: Removed session 46.
Feb  2 04:47:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:47:16 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8dd80016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:47:16 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v218: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:47:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:47:16 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e0c00a7e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:47:16 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:47:16 np0005604790 ceph-mon[74489]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Feb  2 04:47:16 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:47:16.409007) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb  2 04:47:16 np0005604790 ceph-mon[74489]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Feb  2 04:47:16 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770025636409143, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 1730, "num_deletes": 250, "total_data_size": 3508020, "memory_usage": 3570768, "flush_reason": "Manual Compaction"}
Feb  2 04:47:16 np0005604790 ceph-mon[74489]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Feb  2 04:47:16 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770025636430884, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 1997941, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 10839, "largest_seqno": 12568, "table_properties": {"data_size": 1992244, "index_size": 2837, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14181, "raw_average_key_size": 20, "raw_value_size": 1979844, "raw_average_value_size": 2800, "num_data_blocks": 126, "num_entries": 707, "num_filter_entries": 707, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770025454, "oldest_key_time": 1770025454, "file_creation_time": 1770025636, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "07840aea-639a-4cd3-a598-1774a042b57b", "db_session_id": "W2PO4QU95YGVZQBG6TZ2", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Feb  2 04:47:16 np0005604790 ceph-mon[74489]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 21923 microseconds, and 7680 cpu microseconds.
Feb  2 04:47:16 np0005604790 ceph-mon[74489]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 04:47:16 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:47:16.430971) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 1997941 bytes OK
Feb  2 04:47:16 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:47:16.431007) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Feb  2 04:47:16 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:47:16.432884) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Feb  2 04:47:16 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:47:16.432910) EVENT_LOG_v1 {"time_micros": 1770025636432902, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb  2 04:47:16 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:47:16.432943) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb  2 04:47:16 np0005604790 ceph-mon[74489]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 3500952, prev total WAL file size 3500952, number of live WAL files 2.
Feb  2 04:47:16 np0005604790 ceph-mon[74489]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 04:47:16 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:47:16.434114) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323531' seq:0, type:0; will stop at (end)
Feb  2 04:47:16 np0005604790 ceph-mon[74489]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb  2 04:47:16 np0005604790 ceph-mon[74489]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(1951KB)], [26(13MB)]
Feb  2 04:47:16 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770025636434197, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 16267779, "oldest_snapshot_seqno": -1}
Feb  2 04:47:16 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:47:16 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:47:16 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:47:16.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:47:16 np0005604790 ceph-mon[74489]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 4354 keys, 14293562 bytes, temperature: kUnknown
Feb  2 04:47:16 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770025636567719, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 14293562, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14260403, "index_size": 21145, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10949, "raw_key_size": 110094, "raw_average_key_size": 25, "raw_value_size": 14176854, "raw_average_value_size": 3256, "num_data_blocks": 906, "num_entries": 4354, "num_filter_entries": 4354, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770025063, "oldest_key_time": 0, "file_creation_time": 1770025636, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "07840aea-639a-4cd3-a598-1774a042b57b", "db_session_id": "W2PO4QU95YGVZQBG6TZ2", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Feb  2 04:47:16 np0005604790 ceph-mon[74489]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 04:47:16 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:47:16.568039) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 14293562 bytes
Feb  2 04:47:16 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:47:16.569605) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 121.7 rd, 107.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 13.6 +0.0 blob) out(13.6 +0.0 blob), read-write-amplify(15.3) write-amplify(7.2) OK, records in: 4781, records dropped: 427 output_compression: NoCompression
Feb  2 04:47:16 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:47:16.569639) EVENT_LOG_v1 {"time_micros": 1770025636569623, "job": 10, "event": "compaction_finished", "compaction_time_micros": 133619, "compaction_time_cpu_micros": 41099, "output_level": 6, "num_output_files": 1, "total_output_size": 14293562, "num_input_records": 4781, "num_output_records": 4354, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb  2 04:47:16 np0005604790 ceph-mon[74489]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 04:47:16 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770025636570124, "job": 10, "event": "table_file_deletion", "file_number": 28}
Feb  2 04:47:16 np0005604790 ceph-mon[74489]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 04:47:16 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770025636572811, "job": 10, "event": "table_file_deletion", "file_number": 26}
Feb  2 04:47:16 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:47:16.433976) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 04:47:16 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:47:16.572857) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 04:47:16 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:47:16.572861) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 04:47:16 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:47:16.572863) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 04:47:16 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:47:16.572864) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 04:47:16 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:47:16.572865) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 04:47:16 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:47:16 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:47:16 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:47:16.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:47:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:47:16 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8dd0000d00 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:47:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:47:16.974Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:47:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] Optimize plan auto_2026-02-02_09:47:17
Feb  2 04:47:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 04:47:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] do_upmap
Feb  2 04:47:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] pools ['.nfs', '.mgr', 'vms', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.meta', 'backups', '.rgw.root', 'images', 'cephfs.cephfs.data', 'default.rgw.control', 'volumes']
Feb  2 04:47:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 04:47:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 04:47:17 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 04:47:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:47:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:47:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 04:47:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:47:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 04:47:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:47:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 04:47:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:47:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 04:47:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:47:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 04:47:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:47:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 04:47:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:47:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Feb  2 04:47:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:47:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 04:47:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:47:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 04:47:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:47:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Feb  2 04:47:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:47:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 04:47:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:47:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 04:47:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:47:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb  2 04:47:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:47:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:47:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 04:47:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 04:47:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 04:47:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 04:47:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 04:47:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:47:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:47:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 04:47:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 04:47:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 04:47:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 04:47:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 04:47:18 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:47:18 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df80042e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:47:18 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v219: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:47:18 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:47:18 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8dd80016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:47:18 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:47:18 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:47:18 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:47:18.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:47:18 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:47:18 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:47:18 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:47:18.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:47:18 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:47:18 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8dd80016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:47:18 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:47:18.878Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb  2 04:47:18 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:47:18.879Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb  2 04:47:20 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:47:20 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e0c00a7e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:47:20 np0005604790 systemd-logind[793]: New session 47 of user zuul.
Feb  2 04:47:20 np0005604790 systemd[1]: Started Session 47 of User zuul.
Feb  2 04:47:20 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v220: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:47:20 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 04:47:20 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 04:47:20 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 04:47:20 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb  2 04:47:20 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v221: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 306 B/s rd, 0 op/s
Feb  2 04:47:20 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v222: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 0 op/s
Feb  2 04:47:20 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 04:47:20 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:47:20 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Feb  2 04:47:20 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:47:20 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df80042e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:47:20 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:47:20 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 04:47:20 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb  2 04:47:20 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 04:47:20 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb  2 04:47:20 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 04:47:20 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 04:47:20 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb  2 04:47:20 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:47:20 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:47:20 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb  2 04:47:20 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:47:20 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:47:20 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:47:20.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:47:20 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:47:20 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:47:20 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:47:20.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:47:20 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:47:20 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8dd80016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:47:20 np0005604790 podman[134541]: 2026-02-02 09:47:20.893679804 +0000 UTC m=+0.046484184 container create 0cdc12b868aaf61224f93e480bfd27e19af50c05c36b4a5c60e066abe315e683 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_moore, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Feb  2 04:47:20 np0005604790 systemd[1]: Started libpod-conmon-0cdc12b868aaf61224f93e480bfd27e19af50c05c36b4a5c60e066abe315e683.scope.
Feb  2 04:47:20 np0005604790 podman[134541]: 2026-02-02 09:47:20.872895084 +0000 UTC m=+0.025699494 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:47:20 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:47:20 np0005604790 podman[134541]: 2026-02-02 09:47:20.989464247 +0000 UTC m=+0.142268697 container init 0cdc12b868aaf61224f93e480bfd27e19af50c05c36b4a5c60e066abe315e683 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_moore, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 04:47:20 np0005604790 podman[134541]: 2026-02-02 09:47:20.998273995 +0000 UTC m=+0.151078395 container start 0cdc12b868aaf61224f93e480bfd27e19af50c05c36b4a5c60e066abe315e683 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 04:47:21 np0005604790 podman[134541]: 2026-02-02 09:47:21.002623982 +0000 UTC m=+0.155428442 container attach 0cdc12b868aaf61224f93e480bfd27e19af50c05c36b4a5c60e066abe315e683 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_moore, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:47:21 np0005604790 confident_moore[134604]: 167 167
Feb  2 04:47:21 np0005604790 systemd[1]: libpod-0cdc12b868aaf61224f93e480bfd27e19af50c05c36b4a5c60e066abe315e683.scope: Deactivated successfully.
Feb  2 04:47:21 np0005604790 podman[134541]: 2026-02-02 09:47:21.007292938 +0000 UTC m=+0.160097338 container died 0cdc12b868aaf61224f93e480bfd27e19af50c05c36b4a5c60e066abe315e683 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_moore, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325)
Feb  2 04:47:21 np0005604790 systemd[1]: var-lib-containers-storage-overlay-4418865b7188869ee2f58ad91c7d1a69f9cb7ded2d57b82e908800859aba5f9b-merged.mount: Deactivated successfully.
Feb  2 04:47:21 np0005604790 podman[134541]: 2026-02-02 09:47:21.054007627 +0000 UTC m=+0.206812037 container remove 0cdc12b868aaf61224f93e480bfd27e19af50c05c36b4a5c60e066abe315e683 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_moore, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Feb  2 04:47:21 np0005604790 systemd[1]: libpod-conmon-0cdc12b868aaf61224f93e480bfd27e19af50c05c36b4a5c60e066abe315e683.scope: Deactivated successfully.
Feb  2 04:47:21 np0005604790 python3.9[134605]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 04:47:21 np0005604790 podman[134629]: 2026-02-02 09:47:21.25807893 +0000 UTC m=+0.084508100 container create 32efb9d9cd7589b3b5919ee9f072ccc6a99a9b644d6d36bdab3ed704811cf89c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_feistel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 04:47:21 np0005604790 systemd[1]: Started libpod-conmon-32efb9d9cd7589b3b5919ee9f072ccc6a99a9b644d6d36bdab3ed704811cf89c.scope.
Feb  2 04:47:21 np0005604790 podman[134629]: 2026-02-02 09:47:21.236039956 +0000 UTC m=+0.062469136 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:47:21 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:47:21 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c4efd7ba100ec36550e017a9619f2e699ea6190a6a962f608d8c593d8a7ed9d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 04:47:21 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c4efd7ba100ec36550e017a9619f2e699ea6190a6a962f608d8c593d8a7ed9d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:47:21 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c4efd7ba100ec36550e017a9619f2e699ea6190a6a962f608d8c593d8a7ed9d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:47:21 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c4efd7ba100ec36550e017a9619f2e699ea6190a6a962f608d8c593d8a7ed9d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 04:47:21 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c4efd7ba100ec36550e017a9619f2e699ea6190a6a962f608d8c593d8a7ed9d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 04:47:21 np0005604790 podman[134629]: 2026-02-02 09:47:21.372072784 +0000 UTC m=+0.198502014 container init 32efb9d9cd7589b3b5919ee9f072ccc6a99a9b644d6d36bdab3ed704811cf89c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_feistel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb  2 04:47:21 np0005604790 podman[134629]: 2026-02-02 09:47:21.387037417 +0000 UTC m=+0.213466587 container start 32efb9d9cd7589b3b5919ee9f072ccc6a99a9b644d6d36bdab3ed704811cf89c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_feistel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Feb  2 04:47:21 np0005604790 podman[134629]: 2026-02-02 09:47:21.39234094 +0000 UTC m=+0.218770150 container attach 32efb9d9cd7589b3b5919ee9f072ccc6a99a9b644d6d36bdab3ed704811cf89c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_feistel, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb  2 04:47:21 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:47:21 np0005604790 amazing_feistel[134649]: --> passed data devices: 0 physical, 1 LVM
Feb  2 04:47:21 np0005604790 amazing_feistel[134649]: --> All data devices are unavailable
Feb  2 04:47:21 np0005604790 podman[134629]: 2026-02-02 09:47:21.758153973 +0000 UTC m=+0.584583143 container died 32efb9d9cd7589b3b5919ee9f072ccc6a99a9b644d6d36bdab3ed704811cf89c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_feistel, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Feb  2 04:47:21 np0005604790 systemd[1]: libpod-32efb9d9cd7589b3b5919ee9f072ccc6a99a9b644d6d36bdab3ed704811cf89c.scope: Deactivated successfully.
Feb  2 04:47:21 np0005604790 systemd[1]: var-lib-containers-storage-overlay-5c4efd7ba100ec36550e017a9619f2e699ea6190a6a962f608d8c593d8a7ed9d-merged.mount: Deactivated successfully.
Feb  2 04:47:21 np0005604790 podman[134629]: 2026-02-02 09:47:21.813462164 +0000 UTC m=+0.639891294 container remove 32efb9d9cd7589b3b5919ee9f072ccc6a99a9b644d6d36bdab3ed704811cf89c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_feistel, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 04:47:21 np0005604790 systemd[1]: libpod-conmon-32efb9d9cd7589b3b5919ee9f072ccc6a99a9b644d6d36bdab3ed704811cf89c.scope: Deactivated successfully.
Feb  2 04:47:22 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:47:22 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df80042e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:47:22 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v223: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Feb  2 04:47:22 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:47:22 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8dd80016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:47:22 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:47:22 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:47:22 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:47:22.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:47:22 np0005604790 podman[134922]: 2026-02-02 09:47:22.489185534 +0000 UTC m=+0.049972949 container create 006defde8771e8006118764ddc897e3cb9a06ab27438c3cc12098c1b1306c1d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_wing, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:47:22 np0005604790 python3.9[134886]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb  2 04:47:22 np0005604790 systemd[1]: Started libpod-conmon-006defde8771e8006118764ddc897e3cb9a06ab27438c3cc12098c1b1306c1d4.scope.
Feb  2 04:47:22 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:47:22 np0005604790 podman[134922]: 2026-02-02 09:47:22.460407098 +0000 UTC m=+0.021194603 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:47:22 np0005604790 podman[134922]: 2026-02-02 09:47:22.565885542 +0000 UTC m=+0.126672997 container init 006defde8771e8006118764ddc897e3cb9a06ab27438c3cc12098c1b1306c1d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_wing, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 04:47:22 np0005604790 podman[134922]: 2026-02-02 09:47:22.575539842 +0000 UTC m=+0.136327297 container start 006defde8771e8006118764ddc897e3cb9a06ab27438c3cc12098c1b1306c1d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_wing, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Feb  2 04:47:22 np0005604790 podman[134922]: 2026-02-02 09:47:22.579977562 +0000 UTC m=+0.140764987 container attach 006defde8771e8006118764ddc897e3cb9a06ab27438c3cc12098c1b1306c1d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_wing, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:47:22 np0005604790 vigorous_wing[134942]: 167 167
Feb  2 04:47:22 np0005604790 systemd[1]: libpod-006defde8771e8006118764ddc897e3cb9a06ab27438c3cc12098c1b1306c1d4.scope: Deactivated successfully.
Feb  2 04:47:22 np0005604790 podman[134922]: 2026-02-02 09:47:22.585390718 +0000 UTC m=+0.146178173 container died 006defde8771e8006118764ddc897e3cb9a06ab27438c3cc12098c1b1306c1d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_wing, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb  2 04:47:22 np0005604790 systemd[1]: var-lib-containers-storage-overlay-056c6cdac46f90782169f51dddddeb5445ef8fb2978222ce17278a240f10bd40-merged.mount: Deactivated successfully.
Feb  2 04:47:22 np0005604790 podman[134922]: 2026-02-02 09:47:22.646436914 +0000 UTC m=+0.207224369 container remove 006defde8771e8006118764ddc897e3cb9a06ab27438c3cc12098c1b1306c1d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 04:47:22 np0005604790 systemd[1]: libpod-conmon-006defde8771e8006118764ddc897e3cb9a06ab27438c3cc12098c1b1306c1d4.scope: Deactivated successfully.
Feb  2 04:47:22 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:47:22 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:47:22 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:47:22.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:47:22 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:47:22 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e0c00a960 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:47:22 np0005604790 podman[134968]: 2026-02-02 09:47:22.830665961 +0000 UTC m=+0.067655575 container create 3664b2d39cede2cbd757866ca1b41ef039b3d2fb68164ce838b0fb9f92169618 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_maxwell, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Feb  2 04:47:22 np0005604790 systemd[1]: Started libpod-conmon-3664b2d39cede2cbd757866ca1b41ef039b3d2fb68164ce838b0fb9f92169618.scope.
Feb  2 04:47:22 np0005604790 podman[134968]: 2026-02-02 09:47:22.806297104 +0000 UTC m=+0.043286768 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:47:22 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:47:22 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a75238d40035ec9d2590dfdafdff2b35420a1cfc30795142d2f846c76219ad0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 04:47:22 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a75238d40035ec9d2590dfdafdff2b35420a1cfc30795142d2f846c76219ad0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:47:22 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a75238d40035ec9d2590dfdafdff2b35420a1cfc30795142d2f846c76219ad0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:47:22 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a75238d40035ec9d2590dfdafdff2b35420a1cfc30795142d2f846c76219ad0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 04:47:22 np0005604790 podman[134968]: 2026-02-02 09:47:22.933417332 +0000 UTC m=+0.170407006 container init 3664b2d39cede2cbd757866ca1b41ef039b3d2fb68164ce838b0fb9f92169618 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_maxwell, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb  2 04:47:22 np0005604790 podman[134968]: 2026-02-02 09:47:22.948584141 +0000 UTC m=+0.185573755 container start 3664b2d39cede2cbd757866ca1b41ef039b3d2fb68164ce838b0fb9f92169618 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb  2 04:47:22 np0005604790 podman[134968]: 2026-02-02 09:47:22.95299379 +0000 UTC m=+0.189983424 container attach 3664b2d39cede2cbd757866ca1b41ef039b3d2fb68164ce838b0fb9f92169618 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_maxwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:47:23 np0005604790 mystifying_maxwell[134986]: {
Feb  2 04:47:23 np0005604790 mystifying_maxwell[134986]:    "1": [
Feb  2 04:47:23 np0005604790 mystifying_maxwell[134986]:        {
Feb  2 04:47:23 np0005604790 mystifying_maxwell[134986]:            "devices": [
Feb  2 04:47:23 np0005604790 mystifying_maxwell[134986]:                "/dev/loop3"
Feb  2 04:47:23 np0005604790 mystifying_maxwell[134986]:            ],
Feb  2 04:47:23 np0005604790 mystifying_maxwell[134986]:            "lv_name": "ceph_lv0",
Feb  2 04:47:23 np0005604790 mystifying_maxwell[134986]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 04:47:23 np0005604790 mystifying_maxwell[134986]:            "lv_size": "21470642176",
Feb  2 04:47:23 np0005604790 mystifying_maxwell[134986]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d241d473-9fcb-5f74-b163-f1ca4454e7f1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fabfc705-a3af-416c-81a4-3fd4d777fb5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 04:47:23 np0005604790 mystifying_maxwell[134986]:            "lv_uuid": "lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a",
Feb  2 04:47:23 np0005604790 mystifying_maxwell[134986]:            "name": "ceph_lv0",
Feb  2 04:47:23 np0005604790 mystifying_maxwell[134986]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 04:47:23 np0005604790 mystifying_maxwell[134986]:            "tags": {
Feb  2 04:47:23 np0005604790 mystifying_maxwell[134986]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 04:47:23 np0005604790 mystifying_maxwell[134986]:                "ceph.block_uuid": "lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a",
Feb  2 04:47:23 np0005604790 mystifying_maxwell[134986]:                "ceph.cephx_lockbox_secret": "",
Feb  2 04:47:23 np0005604790 mystifying_maxwell[134986]:                "ceph.cluster_fsid": "d241d473-9fcb-5f74-b163-f1ca4454e7f1",
Feb  2 04:47:23 np0005604790 mystifying_maxwell[134986]:                "ceph.cluster_name": "ceph",
Feb  2 04:47:23 np0005604790 mystifying_maxwell[134986]:                "ceph.crush_device_class": "",
Feb  2 04:47:23 np0005604790 mystifying_maxwell[134986]:                "ceph.encrypted": "0",
Feb  2 04:47:23 np0005604790 mystifying_maxwell[134986]:                "ceph.osd_fsid": "fabfc705-a3af-416c-81a4-3fd4d777fb5f",
Feb  2 04:47:23 np0005604790 mystifying_maxwell[134986]:                "ceph.osd_id": "1",
Feb  2 04:47:23 np0005604790 mystifying_maxwell[134986]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 04:47:23 np0005604790 mystifying_maxwell[134986]:                "ceph.type": "block",
Feb  2 04:47:23 np0005604790 mystifying_maxwell[134986]:                "ceph.vdo": "0",
Feb  2 04:47:23 np0005604790 mystifying_maxwell[134986]:                "ceph.with_tpm": "0"
Feb  2 04:47:23 np0005604790 mystifying_maxwell[134986]:            },
Feb  2 04:47:23 np0005604790 mystifying_maxwell[134986]:            "type": "block",
Feb  2 04:47:23 np0005604790 mystifying_maxwell[134986]:            "vg_name": "ceph_vg0"
Feb  2 04:47:23 np0005604790 mystifying_maxwell[134986]:        }
Feb  2 04:47:23 np0005604790 mystifying_maxwell[134986]:    ]
Feb  2 04:47:23 np0005604790 mystifying_maxwell[134986]: }
Feb  2 04:47:23 np0005604790 systemd[1]: libpod-3664b2d39cede2cbd757866ca1b41ef039b3d2fb68164ce838b0fb9f92169618.scope: Deactivated successfully.
Feb  2 04:47:23 np0005604790 podman[134968]: 2026-02-02 09:47:23.30237994 +0000 UTC m=+0.539369564 container died 3664b2d39cede2cbd757866ca1b41ef039b3d2fb68164ce838b0fb9f92169618 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_maxwell, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 04:47:23 np0005604790 systemd[1]: var-lib-containers-storage-overlay-6a75238d40035ec9d2590dfdafdff2b35420a1cfc30795142d2f846c76219ad0-merged.mount: Deactivated successfully.
Feb  2 04:47:23 np0005604790 podman[134968]: 2026-02-02 09:47:23.356540311 +0000 UTC m=+0.593529935 container remove 3664b2d39cede2cbd757866ca1b41ef039b3d2fb68164ce838b0fb9f92169618 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Feb  2 04:47:23 np0005604790 systemd[1]: libpod-conmon-3664b2d39cede2cbd757866ca1b41ef039b3d2fb68164ce838b0fb9f92169618.scope: Deactivated successfully.
Feb  2 04:47:23 np0005604790 python3.9[135070]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Feb  2 04:47:23 np0005604790 podman[135177]: 2026-02-02 09:47:23.978332066 +0000 UTC m=+0.048075847 container create 74cb3a6b8d01cc6218a73018a205cfa1e40dd13b7fcc703d5d170f8d63f3e7e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_torvalds, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Feb  2 04:47:24 np0005604790 systemd[1]: Started libpod-conmon-74cb3a6b8d01cc6218a73018a205cfa1e40dd13b7fcc703d5d170f8d63f3e7e2.scope.
Feb  2 04:47:24 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:47:24 np0005604790 podman[135177]: 2026-02-02 09:47:23.962202582 +0000 UTC m=+0.031946333 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:47:24 np0005604790 podman[135177]: 2026-02-02 09:47:24.066195446 +0000 UTC m=+0.135939267 container init 74cb3a6b8d01cc6218a73018a205cfa1e40dd13b7fcc703d5d170f8d63f3e7e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 04:47:24 np0005604790 podman[135177]: 2026-02-02 09:47:24.074046107 +0000 UTC m=+0.143789888 container start 74cb3a6b8d01cc6218a73018a205cfa1e40dd13b7fcc703d5d170f8d63f3e7e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_torvalds, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 04:47:24 np0005604790 podman[135177]: 2026-02-02 09:47:24.078416555 +0000 UTC m=+0.148160326 container attach 74cb3a6b8d01cc6218a73018a205cfa1e40dd13b7fcc703d5d170f8d63f3e7e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_torvalds, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 04:47:24 np0005604790 charming_torvalds[135194]: 167 167
Feb  2 04:47:24 np0005604790 systemd[1]: libpod-74cb3a6b8d01cc6218a73018a205cfa1e40dd13b7fcc703d5d170f8d63f3e7e2.scope: Deactivated successfully.
Feb  2 04:47:24 np0005604790 podman[135177]: 2026-02-02 09:47:24.081889889 +0000 UTC m=+0.151633670 container died 74cb3a6b8d01cc6218a73018a205cfa1e40dd13b7fcc703d5d170f8d63f3e7e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_torvalds, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 04:47:24 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:47:24 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8dd0001820 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:47:24 np0005604790 systemd[1]: var-lib-containers-storage-overlay-c2e6648e6395678eb5c8ec01e62d002db1616111a7be8c744b95caae3585867c-merged.mount: Deactivated successfully.
Feb  2 04:47:24 np0005604790 podman[135177]: 2026-02-02 09:47:24.13201456 +0000 UTC m=+0.201758341 container remove 74cb3a6b8d01cc6218a73018a205cfa1e40dd13b7fcc703d5d170f8d63f3e7e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_torvalds, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1)
Feb  2 04:47:24 np0005604790 systemd[1]: libpod-conmon-74cb3a6b8d01cc6218a73018a205cfa1e40dd13b7fcc703d5d170f8d63f3e7e2.scope: Deactivated successfully.
Feb  2 04:47:24 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v224: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 0 op/s
Feb  2 04:47:24 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:47:24 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df80042e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:47:24 np0005604790 podman[135219]: 2026-02-02 09:47:24.316634488 +0000 UTC m=+0.059348891 container create 7ebfe4b43ac8ef93f38db0b9758990e429ceecea90e0d6492987655264d766eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_joliot, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 04:47:24 np0005604790 systemd[1]: Started libpod-conmon-7ebfe4b43ac8ef93f38db0b9758990e429ceecea90e0d6492987655264d766eb.scope.
Feb  2 04:47:24 np0005604790 podman[135219]: 2026-02-02 09:47:24.289190658 +0000 UTC m=+0.031905121 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:47:24 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:47:24 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a962e9ea4a42bab169b75c68fc20c1cb34b24e732308c5bca5683bfbc793085/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 04:47:24 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a962e9ea4a42bab169b75c68fc20c1cb34b24e732308c5bca5683bfbc793085/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:47:24 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a962e9ea4a42bab169b75c68fc20c1cb34b24e732308c5bca5683bfbc793085/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:47:24 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a962e9ea4a42bab169b75c68fc20c1cb34b24e732308c5bca5683bfbc793085/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 04:47:24 np0005604790 podman[135219]: 2026-02-02 09:47:24.41014398 +0000 UTC m=+0.152858423 container init 7ebfe4b43ac8ef93f38db0b9758990e429ceecea90e0d6492987655264d766eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb  2 04:47:24 np0005604790 podman[135219]: 2026-02-02 09:47:24.422744869 +0000 UTC m=+0.165459272 container start 7ebfe4b43ac8ef93f38db0b9758990e429ceecea90e0d6492987655264d766eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_joliot, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:47:24 np0005604790 podman[135219]: 2026-02-02 09:47:24.426818779 +0000 UTC m=+0.169533232 container attach 7ebfe4b43ac8ef93f38db0b9758990e429ceecea90e0d6492987655264d766eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 04:47:24 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:47:24 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:47:24 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:47:24.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:47:24 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:47:24 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:47:24 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:47:24.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:47:24 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:47:24 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8dd80036e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:47:24 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:09:47:24] "GET /metrics HTTP/1.1" 200 48329 "" "Prometheus/2.51.0"
Feb  2 04:47:24 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:09:47:24] "GET /metrics HTTP/1.1" 200 48329 "" "Prometheus/2.51.0"
Feb  2 04:47:25 np0005604790 lvm[135386]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 04:47:25 np0005604790 lvm[135386]: VG ceph_vg0 finished
Feb  2 04:47:25 np0005604790 lvm[135390]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 04:47:25 np0005604790 lvm[135390]: VG ceph_vg0 finished
Feb  2 04:47:25 np0005604790 awesome_joliot[135236]: {}
Feb  2 04:47:25 np0005604790 systemd[1]: libpod-7ebfe4b43ac8ef93f38db0b9758990e429ceecea90e0d6492987655264d766eb.scope: Deactivated successfully.
Feb  2 04:47:25 np0005604790 systemd[1]: libpod-7ebfe4b43ac8ef93f38db0b9758990e429ceecea90e0d6492987655264d766eb.scope: Consumed 1.235s CPU time.
Feb  2 04:47:25 np0005604790 lvm[135393]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 04:47:25 np0005604790 lvm[135393]: VG ceph_vg0 finished
Feb  2 04:47:25 np0005604790 podman[135219]: 2026-02-02 09:47:25.234808604 +0000 UTC m=+0.977523027 container died 7ebfe4b43ac8ef93f38db0b9758990e429ceecea90e0d6492987655264d766eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_joliot, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb  2 04:47:25 np0005604790 systemd[1]: var-lib-containers-storage-overlay-5a962e9ea4a42bab169b75c68fc20c1cb34b24e732308c5bca5683bfbc793085-merged.mount: Deactivated successfully.
Feb  2 04:47:25 np0005604790 podman[135219]: 2026-02-02 09:47:25.292846959 +0000 UTC m=+1.035561332 container remove 7ebfe4b43ac8ef93f38db0b9758990e429ceecea90e0d6492987655264d766eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_joliot, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb  2 04:47:25 np0005604790 systemd[1]: libpod-conmon-7ebfe4b43ac8ef93f38db0b9758990e429ceecea90e0d6492987655264d766eb.scope: Deactivated successfully.
Feb  2 04:47:25 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 04:47:25 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:47:25 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 04:47:25 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:47:25 np0005604790 python3.9[135475]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:47:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:47:26 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e0c00ab00 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:47:26 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v225: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 0 op/s
Feb  2 04:47:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:47:26 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8dd00021d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:47:26 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:47:26 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:47:26 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:47:26 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:47:26 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:47:26 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:47:26.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:47:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:47:26 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df80042e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:47:26 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:47:26 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:47:26 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:47:26.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:47:26 np0005604790 python3.9[135653]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Feb  2 04:47:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:47:26.976Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:47:27 np0005604790 python3.9[135803]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 04:47:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:47:28 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8dd80036e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:47:28 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v226: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 0 op/s
Feb  2 04:47:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:47:28 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e0c00ab20 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:47:28 np0005604790 python3.9[135955]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 04:47:28 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:47:28 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:47:28 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:47:28.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:47:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:47:28 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e0c00ab20 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:47:28 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:47:28 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:47:28 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:47:28.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:47:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:47:28.879Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:47:29 np0005604790 systemd[1]: session-47.scope: Deactivated successfully.
Feb  2 04:47:29 np0005604790 systemd[1]: session-47.scope: Consumed 5.977s CPU time.
Feb  2 04:47:29 np0005604790 systemd-logind[793]: Session 47 logged out. Waiting for processes to exit.
Feb  2 04:47:29 np0005604790 systemd-logind[793]: Removed session 47.
Feb  2 04:47:30 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:47:30 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df80042e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:47:30 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v227: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 0 op/s
Feb  2 04:47:30 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:47:30 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8dd80036e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:47:30 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:47:30 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:47:30 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:47:30.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:47:30 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:47:30 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8dd80036e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:47:30 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:47:30 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:47:30 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:47:30.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:47:31 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:47:32 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:47:32 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e0c00ab20 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:47:32 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 04:47:32 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 04:47:32 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v228: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:47:32 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:47:32 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df80042e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:47:32 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:47:32 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:47:32 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:47:32.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:47:32 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:47:32 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8dd80036e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:47:32 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:47:32 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:47:32 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:47:32.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:47:34 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:47:34 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8dd00021d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:47:34 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v229: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 04:47:34 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:47:34 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e0c00ab20 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:47:34 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:47:34 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:47:34 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:47:34.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:47:34 np0005604790 systemd-logind[793]: New session 48 of user zuul.
Feb  2 04:47:34 np0005604790 systemd[1]: Started Session 48 of User zuul.
Feb  2 04:47:34 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:47:34 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df80042e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:47:34 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:47:34 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:47:34 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:47:34.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:47:34 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:09:47:34] "GET /metrics HTTP/1.1" 200 48331 "" "Prometheus/2.51.0"
Feb  2 04:47:34 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:09:47:34] "GET /metrics HTTP/1.1" 200 48331 "" "Prometheus/2.51.0"
Feb  2 04:47:35 np0005604790 python3.9[136164]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 04:47:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:47:36 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8dd80036e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:47:36 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v230: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:47:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:47:36 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8dd80036e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:47:36 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:47:36 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:47:36 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:47:36 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:47:36.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:47:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:47:36 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e0c00ab20 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:47:36 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:47:36 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:47:36 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:47:36.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:47:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:47:36.977Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:47:37 np0005604790 python3.9[136322]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 04:47:38 np0005604790 python3.9[136476]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 04:47:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:47:38 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df80042e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:47:38 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v231: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:47:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:47:38 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8dd00021d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:47:38 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:47:38 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:47:38 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:47:38.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:47:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:47:38 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8dd80036e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:47:38 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:47:38 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:47:38 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:47:38.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:47:38 np0005604790 python3.9[136628]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:47:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:47:38.880Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:47:39 np0005604790 python3.9[136751]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770025658.2854304-156-217383972076811/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=51c8806234562d624a4d807695edbd9ecb728b59 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:47:40 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:47:40 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e0c00ab20 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:47:40 np0005604790 python3.9[136905]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:47:40 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v232: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:47:40 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:47:40 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df8004300 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:47:40 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:47:40 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:47:40 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:47:40.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:47:40 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:47:40 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8dd0003490 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:47:40 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:47:40 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000028s ======
Feb  2 04:47:40 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:47:40.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb  2 04:47:40 np0005604790 python3.9[137028]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770025659.766921-156-53221838031389/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=98d348615e61a9b68b5c5fd470bc9aeb831c56b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:47:41 np0005604790 python3.9[137180]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:47:41 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:47:41 np0005604790 python3.9[137304]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770025660.9150314-156-245492050022335/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=82a0f334516acd498dcb35d53a1fe292b6a9e005 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:47:42 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:47:42 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8dd80036e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:47:42 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v233: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:47:42 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:47:42 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e0c00ab20 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:47:42 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:47:42 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:47:42 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:47:42.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:47:42 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:47:42 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df8004320 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:47:42 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:47:42 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:47:42 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:47:42.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:47:42 np0005604790 python3.9[137457]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 04:47:43 np0005604790 python3.9[137609]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 04:47:44 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:47:44 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8dd0003490 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:47:44 np0005604790 python3.9[137763]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:47:44 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v234: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 04:47:44 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:47:44 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8dd80036e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:47:44 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:47:44 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:47:44 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:47:44.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:47:44 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:47:44 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e0c00ab20 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:47:44 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:47:44 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:47:44 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:47:44.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:47:44 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:09:47:44] "GET /metrics HTTP/1.1" 200 48331 "" "Prometheus/2.51.0"
Feb  2 04:47:44 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:09:47:44] "GET /metrics HTTP/1.1" 200 48331 "" "Prometheus/2.51.0"
Feb  2 04:47:44 np0005604790 python3.9[137886]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770025663.647244-339-32708564672134/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=0f081fd3db1a788ed895431615fbe1eda214919d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:47:45 np0005604790 python3.9[138038]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:47:45 np0005604790 ceph-mon[74489]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb  2 04:47:45 np0005604790 ceph-mon[74489]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.0 total, 600.0 interval#012Cumulative writes: 2717 writes, 12K keys, 2717 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.04 MB/s#012Cumulative WAL: 2717 writes, 2717 syncs, 1.00 writes per sync, written: 0.02 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2717 writes, 12K keys, 2717 commit groups, 1.0 writes per commit group, ingest: 24.01 MB, 0.04 MB/s#012Interval WAL: 2717 writes, 2717 syncs, 1.00 writes per sync, written: 0.02 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     49.6      0.41              0.05         5    0.082       0      0       0.0       0.0#012  L6      1/0   13.63 MB   0.0      0.1     0.0      0.0       0.0      0.0       0.0   2.5     98.3     86.8      0.58              0.14         4    0.144     16K   1781       0.0       0.0#012 Sum      1/0   13.63 MB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   3.5     57.5     71.3      0.99              0.19         9    0.110     16K   1781       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   3.5     57.9     71.7      0.98              0.19         8    0.123     16K   1781       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.0      0.0       0.0   0.0     98.3     86.8      0.58              0.14         4    0.144     16K   1781       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     50.2      0.40              0.05         4    0.101       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      9.7      0.01              0.00         1    0.006       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.020, interval 0.020#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.07 GB write, 0.12 MB/s write, 0.06 GB read, 0.09 MB/s read, 1.0 seconds#012Interval compaction: 0.07 GB write, 0.12 MB/s write, 0.06 GB read, 0.09 MB/s read, 1.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5630b94e5350#2 capacity: 304.00 MB usage: 2.60 MB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 0.000106 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(167,2.44 MB,0.80195%) FilterBlock(10,57.23 KB,0.0183858%) IndexBlock(10,111.23 KB,0.0357327%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Feb  2 04:47:46 np0005604790 python3.9[138163]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770025665.0459929-339-10378464655635/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=d08fd1db2672bef6291fde5319a05fae0b3732d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:47:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:47:46 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df8004340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:47:46 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v235: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:47:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:47:46 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df8004340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:47:46 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:47:46 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:47:46 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:47:46 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:47:46.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:47:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:47:46 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8dd80036e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:47:46 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:47:46 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:47:46 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:47:46.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:47:46 np0005604790 python3.9[138317]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:47:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:47:46.978Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:47:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 04:47:47 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 04:47:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:47:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:47:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:47:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:47:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:47:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:47:47 np0005604790 python3.9[138440]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770025666.3323028-339-72435349453082/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=c6b67bf236f504b5538e0c7f1c2fc4850603e3a8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:47:48 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:47:48 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e0c00ab20 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:47:48 np0005604790 python3.9[138594]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 04:47:48 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v236: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:47:48 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:47:48 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e0c00ab20 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:47:48 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:47:48 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:47:48 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:47:48.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:47:48 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:47:48 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8dd0003490 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:47:48 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:47:48 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:47:48 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:47:48.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:47:48 np0005604790 python3.9[138746]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 04:47:48 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:47:48.882Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:47:49 np0005604790 python3.9[138898]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:47:49 np0005604790 python3.9[139022]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770025669.0009243-524-121667644632678/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=de6a6e5e1613f30ee203ae796d0a15acc64e019f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:47:50 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:47:50 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8dd80036e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:47:50 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v237: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:47:50 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:47:50 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df8004340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:47:50 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:47:50 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:47:50 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:47:50.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:47:50 np0005604790 python3.9[139175]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:47:50 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:47:50 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e0c00ab20 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:47:50 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:47:50 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:47:50 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:47:50.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:47:51 np0005604790 python3.9[139298]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770025670.1034374-524-4790656004218/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=d08fd1db2672bef6291fde5319a05fae0b3732d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:47:51 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:47:51 np0005604790 python3.9[139451]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:47:52 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:47:52 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8dd0003490 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:47:52 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v238: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:47:52 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:47:52 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8dd80036e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:47:52 np0005604790 python3.9[139575]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770025671.3014295-524-199298356070641/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=6f31cafc6f92c5b66a94514ff9156c77fb1ac091 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:47:52 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:47:52 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:47:52 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:47:52.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:47:52 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:47:52 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df8004340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:47:52 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:47:52 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:47:52 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:47:52.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:47:53 np0005604790 python3.9[139727]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 04:47:54 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:47:54 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e0c00ab20 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:47:54 np0005604790 python3.9[139906]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:47:54 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v239: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 04:47:54 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:47:54 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8dd0003490 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:47:54 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:47:54 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:47:54 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:47:54.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:47:54 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:47:54 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8dd80036e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:47:54 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:47:54 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:47:54 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:47:54.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:47:54 np0005604790 python3.9[140029]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770025673.7164748-725-179050025148727/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=01ba6f1c4701862bb94c27ffc13223400c80de38 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:47:54 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:09:47:54] "GET /metrics HTTP/1.1" 200 48329 "" "Prometheus/2.51.0"
Feb  2 04:47:54 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:09:47:54] "GET /metrics HTTP/1.1" 200 48329 "" "Prometheus/2.51.0"
Feb  2 04:47:55 np0005604790 python3.9[140181]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 04:47:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:47:56 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df8004340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:47:56 np0005604790 python3.9[140335]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:47:56 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v240: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:47:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:47:56 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e0c00ab20 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:47:56 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:47:56 np0005604790 ceph-mon[74489]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Feb  2 04:47:56 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:47:56.466676) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb  2 04:47:56 np0005604790 ceph-mon[74489]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Feb  2 04:47:56 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770025676466720, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 611, "num_deletes": 251, "total_data_size": 800852, "memory_usage": 812728, "flush_reason": "Manual Compaction"}
Feb  2 04:47:56 np0005604790 ceph-mon[74489]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Feb  2 04:47:56 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770025676480440, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 787349, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 12569, "largest_seqno": 13179, "table_properties": {"data_size": 784120, "index_size": 1137, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7412, "raw_average_key_size": 18, "raw_value_size": 777574, "raw_average_value_size": 1963, "num_data_blocks": 50, "num_entries": 396, "num_filter_entries": 396, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770025637, "oldest_key_time": 1770025637, "file_creation_time": 1770025676, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "07840aea-639a-4cd3-a598-1774a042b57b", "db_session_id": "W2PO4QU95YGVZQBG6TZ2", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Feb  2 04:47:56 np0005604790 ceph-mon[74489]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 13858 microseconds, and 3170 cpu microseconds.
Feb  2 04:47:56 np0005604790 ceph-mon[74489]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 04:47:56 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:47:56.480534) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 787349 bytes OK
Feb  2 04:47:56 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:47:56.480559) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Feb  2 04:47:56 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:47:56.482723) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Feb  2 04:47:56 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:47:56.482751) EVENT_LOG_v1 {"time_micros": 1770025676482744, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb  2 04:47:56 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:47:56.482774) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb  2 04:47:56 np0005604790 ceph-mon[74489]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 797596, prev total WAL file size 797596, number of live WAL files 2.
Feb  2 04:47:56 np0005604790 ceph-mon[74489]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 04:47:56 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:47:56.483471) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Feb  2 04:47:56 np0005604790 ceph-mon[74489]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb  2 04:47:56 np0005604790 ceph-mon[74489]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(768KB)], [29(13MB)]
Feb  2 04:47:56 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770025676483547, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 15080911, "oldest_snapshot_seqno": -1}
Feb  2 04:47:56 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:47:56 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:47:56 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:47:56.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:47:56 np0005604790 ceph-mon[74489]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 4235 keys, 12201893 bytes, temperature: kUnknown
Feb  2 04:47:56 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770025676634962, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 12201893, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12171093, "index_size": 19106, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10629, "raw_key_size": 108520, "raw_average_key_size": 25, "raw_value_size": 12091088, "raw_average_value_size": 2855, "num_data_blocks": 807, "num_entries": 4235, "num_filter_entries": 4235, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770025063, "oldest_key_time": 0, "file_creation_time": 1770025676, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "07840aea-639a-4cd3-a598-1774a042b57b", "db_session_id": "W2PO4QU95YGVZQBG6TZ2", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Feb  2 04:47:56 np0005604790 ceph-mon[74489]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 04:47:56 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:47:56.635262) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 12201893 bytes
Feb  2 04:47:56 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:47:56.636805) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 99.5 rd, 80.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 13.6 +0.0 blob) out(11.6 +0.0 blob), read-write-amplify(34.7) write-amplify(15.5) OK, records in: 4750, records dropped: 515 output_compression: NoCompression
Feb  2 04:47:56 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:47:56.636841) EVENT_LOG_v1 {"time_micros": 1770025676636825, "job": 12, "event": "compaction_finished", "compaction_time_micros": 151508, "compaction_time_cpu_micros": 32400, "output_level": 6, "num_output_files": 1, "total_output_size": 12201893, "num_input_records": 4750, "num_output_records": 4235, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb  2 04:47:56 np0005604790 ceph-mon[74489]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 04:47:56 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770025676637120, "job": 12, "event": "table_file_deletion", "file_number": 31}
Feb  2 04:47:56 np0005604790 ceph-mon[74489]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 04:47:56 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770025676640426, "job": 12, "event": "table_file_deletion", "file_number": 29}
Feb  2 04:47:56 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:47:56.483347) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 04:47:56 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:47:56.640684) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 04:47:56 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:47:56.640692) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 04:47:56 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:47:56.640694) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 04:47:56 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:47:56.640696) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 04:47:56 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:47:56.640698) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 04:47:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:47:56 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8dd0003490 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:47:56 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:47:56 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:47:56 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:47:56.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:47:56 np0005604790 python3.9[140458]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770025675.7844012-799-1700781401153/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=01ba6f1c4701862bb94c27ffc13223400c80de38 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:47:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:47:56.979Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:47:57 np0005604790 python3.9[140610]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 04:47:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:47:58 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8dd8003880 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:47:58 np0005604790 python3.9[140764]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:47:58 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v241: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:47:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:47:58 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df8004340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:47:58 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:47:58 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:47:58 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:47:58.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:47:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:47:58 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e0c00ab20 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:47:58 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:47:58 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:47:58 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:47:58.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:47:58 np0005604790 python3.9[140887]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770025677.7564957-871-279970747214121/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=01ba6f1c4701862bb94c27ffc13223400c80de38 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:47:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:47:58.882Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb  2 04:47:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:47:58.882Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb  2 04:47:59 np0005604790 python3.9[141039]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 04:48:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:48:00 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8dd0003490 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:48:00 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v242: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:48:00 np0005604790 python3.9[141193]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:48:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:48:00 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8dd80038a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:48:00 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:48:00 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:48:00 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:48:00.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:48:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:48:00 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df8004340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:48:00 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:48:00 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:48:00 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:48:00.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:48:00 np0005604790 python3.9[141316]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770025679.7703373-945-233397505149971/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=01ba6f1c4701862bb94c27ffc13223400c80de38 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:48:01 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:48:01 np0005604790 python3.9[141468]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 04:48:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:48:02 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e0c00ab20 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:48:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 04:48:02 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 04:48:02 np0005604790 python3.9[141622]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:48:02 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v243: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:48:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:48:02 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8dd0003490 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:48:02 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:48:02 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:48:02 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:48:02.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:48:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:48:02 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8dd80038c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:48:02 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:48:02 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:48:02 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:48:02.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:48:02 np0005604790 python3.9[141745]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770025681.7618146-1020-132299078218236/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=01ba6f1c4701862bb94c27ffc13223400c80de38 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:48:03 np0005604790 python3.9[141897]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 04:48:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:48:04 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df8004340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:48:04 np0005604790 python3.9[142051]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:48:04 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v244: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 04:48:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:48:04 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e0c00ab20 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:48:04 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:48:04 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:48:04 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:48:04.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:48:04 np0005604790 python3.9[142174]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770025683.7410414-1095-205759653304100/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=01ba6f1c4701862bb94c27ffc13223400c80de38 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:48:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:48:04 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8dd0003490 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:48:04 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:48:04 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:48:04 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:48:04.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:48:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:09:48:04] "GET /metrics HTTP/1.1" 200 48332 "" "Prometheus/2.51.0"
Feb  2 04:48:04 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:09:48:04] "GET /metrics HTTP/1.1" 200 48332 "" "Prometheus/2.51.0"
Feb  2 04:48:05 np0005604790 systemd-logind[793]: Session 48 logged out. Waiting for processes to exit.
Feb  2 04:48:05 np0005604790 systemd[1]: session-48.scope: Deactivated successfully.
Feb  2 04:48:05 np0005604790 systemd[1]: session-48.scope: Consumed 22.259s CPU time.
Feb  2 04:48:05 np0005604790 systemd-logind[793]: Removed session 48.
Feb  2 04:48:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:48:06 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8dd80038e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:48:06 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v245: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:48:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:48:06 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df80044e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:48:06 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:48:06 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:48:06 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.002000053s ======
Feb  2 04:48:06 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:48:06.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Feb  2 04:48:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:48:06 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e0c00ab20 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:48:06 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:48:06 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:48:06 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:48:06.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:48:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:48:06.980Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:48:07 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-nfs-cephfs-compute-0-ooxkuo[97796]: [WARNING] 032/094807 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Feb  2 04:48:08 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:48:08 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8dd0003490 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:48:08 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v246: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:48:08 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:48:08 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8dd8003900 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:48:08 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:48:08 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:48:08 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:48:08.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:48:08 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:48:08 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df8004500 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:48:08 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:48:08 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:48:08 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:48:08.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:48:08 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:48:08.883Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb  2 04:48:08 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:48:08.883Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:48:10 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:48:10 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e0c00ab20 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:48:10 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v247: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:48:10 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:48:10 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8dd0003490 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:48:10 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:48:10 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:48:10 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:48:10.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:48:10 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:48:10 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8dd8003920 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:48:10 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:48:10 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:48:10 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:48:10.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:48:10 np0005604790 systemd-logind[793]: New session 49 of user zuul.
Feb  2 04:48:10 np0005604790 systemd[1]: Started Session 49 of User zuul.
Feb  2 04:48:11 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:48:11 np0005604790 python3.9[142362]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:48:12 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:48:12 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df8004520 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:48:12 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v248: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:48:12 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:48:12 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e0c00ab20 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:48:12 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:48:12 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:48:12 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:48:12.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:48:12 np0005604790 python3.9[142515]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:48:12 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:48:12 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8dd0003490 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:48:12 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:48:12 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:48:12 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:48:12.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:48:13 np0005604790 python3.9[142638]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770025691.958257-57-152861229411934/.source.conf _original_basename=ceph.conf follow=False checksum=d5af35537b3c8ec6eada2ba8657e5bbbf335fb7a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:48:14 np0005604790 python3.9[142792]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:48:14 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:48:14 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8dd8003940 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:48:14 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v249: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 85 B/s wr, 0 op/s
Feb  2 04:48:14 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:48:14 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df8004540 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:48:14 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:48:14 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:48:14 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:48:14.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:48:14 np0005604790 python3.9[142940]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1770025693.5543823-57-158655180035278/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=b59eb4ee1ef760db0b0353d13f50139cad503c44 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:48:14 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:48:14 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e0c00ab20 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:48:14 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:48:14 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:48:14 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:48:14.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:48:14 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:09:48:14] "GET /metrics HTTP/1.1" 200 48332 "" "Prometheus/2.51.0"
Feb  2 04:48:14 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:09:48:14] "GET /metrics HTTP/1.1" 200 48332 "" "Prometheus/2.51.0"
Feb  2 04:48:15 np0005604790 systemd[1]: session-49.scope: Deactivated successfully.
Feb  2 04:48:15 np0005604790 systemd[1]: session-49.scope: Consumed 2.811s CPU time.
Feb  2 04:48:15 np0005604790 systemd-logind[793]: Session 49 logged out. Waiting for processes to exit.
Feb  2 04:48:15 np0005604790 systemd-logind[793]: Removed session 49.
Feb  2 04:48:15 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:48:15 : epoch 6980722a : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 04:48:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:48:16 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8dd0003490 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:48:16 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v250: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 596 B/s rd, 85 B/s wr, 0 op/s
Feb  2 04:48:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:48:16 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8dd8003960 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:48:16 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:48:16 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:48:16 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:48:16 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:48:16.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:48:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:48:16 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df8004560 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:48:16 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:48:16 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:48:16 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:48:16.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:48:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:48:16.981Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:48:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] Optimize plan auto_2026-02-02_09:48:17
Feb  2 04:48:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 04:48:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] do_upmap
Feb  2 04:48:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] pools ['volumes', 'vms', '.nfs', 'backups', '.mgr', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.data', 'images', 'default.rgw.log']
Feb  2 04:48:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 04:48:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 04:48:17 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 04:48:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:48:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:48:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:48:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:48:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 04:48:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:48:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 04:48:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:48:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 04:48:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:48:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 04:48:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:48:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 04:48:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:48:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 04:48:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:48:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Feb  2 04:48:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:48:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 04:48:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:48:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 04:48:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:48:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Feb  2 04:48:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:48:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 04:48:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:48:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 04:48:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:48:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb  2 04:48:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 04:48:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 04:48:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:48:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:48:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 04:48:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 04:48:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 04:48:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 04:48:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 04:48:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 04:48:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 04:48:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 04:48:18 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:48:18 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8e0c00ab20 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:48:18 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v251: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Feb  2 04:48:18 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:48:18 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8dd0003490 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:48:18 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:48:18 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:48:18 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:48:18.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:48:18 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:48:18 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8ddc000e00 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:48:18 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:48:18 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:48:18 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:48:18.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:48:18 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:48:18.884Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb  2 04:48:18 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:48:18.884Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb  2 04:48:18 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:48:18.884Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb  2 04:48:19 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:48:19 : epoch 6980722a : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 04:48:19 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:48:19 : epoch 6980722a : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 04:48:20 np0005604790 systemd-logind[793]: New session 50 of user zuul.
Feb  2 04:48:20 np0005604790 systemd[1]: Started Session 50 of User zuul.
Feb  2 04:48:20 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:48:20 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df8004560 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:48:20 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v252: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 596 B/s wr, 1 op/s
Feb  2 04:48:20 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:48:20 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8de40010b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:48:20 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:48:20 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:48:20 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:48:20.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:48:20 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:48:20 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8dd0003490 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:48:20 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:48:20 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:48:20 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:48:20.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:48:21 np0005604790 python3.9[143126]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 04:48:21 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:48:22 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:48:22 : epoch 6980722a : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Feb  2 04:48:22 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:48:22 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8dd0003490 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:48:22 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v253: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Feb  2 04:48:22 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:48:22 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df8004560 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:48:22 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-crash-compute-0[79739]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Feb  2 04:48:22 np0005604790 python3.9[143284]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 04:48:22 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:48:22 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:48:22 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:48:22.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:48:22 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:48:22 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8de40010b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:48:22 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:48:22 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:48:22 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:48:22.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:48:23 np0005604790 python3.9[143436]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb  2 04:48:23 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-nfs-cephfs-compute-0-ooxkuo[97796]: [WARNING] 032/094823 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Feb  2 04:48:23 np0005604790 python3.9[143587]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 04:48:24 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:48:24 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8de40010b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:48:24 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v254: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Feb  2 04:48:24 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:48:24 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8ddc001940 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:48:24 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:48:24 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:48:24 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:48:24.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:48:24 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:48:24 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df8004560 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:48:24 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:48:24 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:48:24 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:48:24.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:48:24 np0005604790 python3.9[143740]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Feb  2 04:48:24 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:09:48:24] "GET /metrics HTTP/1.1" 200 48329 "" "Prometheus/2.51.0"
Feb  2 04:48:24 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:09:48:24] "GET /metrics HTTP/1.1" 200 48329 "" "Prometheus/2.51.0"
Feb  2 04:48:25 np0005604790 dbus-broker-launch[780]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Feb  2 04:48:26 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 04:48:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:48:26 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df8004560 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:48:26 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:48:26 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 04:48:26 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:48:26 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v255: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 852 B/s wr, 2 op/s
Feb  2 04:48:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:48:26 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df8004560 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:48:26 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:48:26 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:48:26 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:48:26 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:48:26.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:48:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:48:26 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8ddc002260 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:48:26 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:48:26 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:48:26 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:48:26.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:48:26 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 04:48:26 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 04:48:26 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 04:48:26 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb  2 04:48:26 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v256: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 960 B/s wr, 2 op/s
Feb  2 04:48:26 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 04:48:26 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:48:26 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Feb  2 04:48:26 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:48:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:48:26.981Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:48:26 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 04:48:26 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb  2 04:48:26 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 04:48:26 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb  2 04:48:26 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 04:48:26 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 04:48:27 np0005604790 python3.9[144036]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb  2 04:48:27 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-nfs-cephfs-compute-0-ooxkuo[97796]: [WARNING] 032/094827 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Feb  2 04:48:27 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:48:27 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:48:27 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb  2 04:48:27 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:48:27 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:48:27 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb  2 04:48:27 np0005604790 podman[144153]: 2026-02-02 09:48:27.570657368 +0000 UTC m=+0.055341503 container create c6e6a5c192448eef2fc5ab1123ee39b0536dd6c32d693d22ca1586365aadf7d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_kepler, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb  2 04:48:27 np0005604790 systemd[1]: Started libpod-conmon-c6e6a5c192448eef2fc5ab1123ee39b0536dd6c32d693d22ca1586365aadf7d3.scope.
Feb  2 04:48:27 np0005604790 podman[144153]: 2026-02-02 09:48:27.54161557 +0000 UTC m=+0.026299795 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:48:27 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:48:27 np0005604790 podman[144153]: 2026-02-02 09:48:27.672125335 +0000 UTC m=+0.156809440 container init c6e6a5c192448eef2fc5ab1123ee39b0536dd6c32d693d22ca1586365aadf7d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_kepler, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 04:48:27 np0005604790 podman[144153]: 2026-02-02 09:48:27.683270393 +0000 UTC m=+0.167954518 container start c6e6a5c192448eef2fc5ab1123ee39b0536dd6c32d693d22ca1586365aadf7d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_kepler, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:48:27 np0005604790 podman[144153]: 2026-02-02 09:48:27.687919238 +0000 UTC m=+0.172603353 container attach c6e6a5c192448eef2fc5ab1123ee39b0536dd6c32d693d22ca1586365aadf7d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_kepler, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Feb  2 04:48:27 np0005604790 friendly_kepler[144199]: 167 167
Feb  2 04:48:27 np0005604790 systemd[1]: libpod-c6e6a5c192448eef2fc5ab1123ee39b0536dd6c32d693d22ca1586365aadf7d3.scope: Deactivated successfully.
Feb  2 04:48:27 np0005604790 podman[144153]: 2026-02-02 09:48:27.690912618 +0000 UTC m=+0.175596753 container died c6e6a5c192448eef2fc5ab1123ee39b0536dd6c32d693d22ca1586365aadf7d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_kepler, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:48:27 np0005604790 systemd[1]: var-lib-containers-storage-overlay-c9a73a3cfdf672cf4fa8e15500391503f29147f7c8f9d9cf190ca3715638e62d-merged.mount: Deactivated successfully.
Feb  2 04:48:27 np0005604790 podman[144153]: 2026-02-02 09:48:27.748190962 +0000 UTC m=+0.232875087 container remove c6e6a5c192448eef2fc5ab1123ee39b0536dd6c32d693d22ca1586365aadf7d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_kepler, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb  2 04:48:27 np0005604790 systemd[1]: libpod-conmon-c6e6a5c192448eef2fc5ab1123ee39b0536dd6c32d693d22ca1586365aadf7d3.scope: Deactivated successfully.
Feb  2 04:48:27 np0005604790 podman[144269]: 2026-02-02 09:48:27.940759908 +0000 UTC m=+0.071055454 container create 5d1b8d0b132ead746ede316aa9a3f4447e44ac52aa6e6f2d299727c1717fc640 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_grothendieck, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb  2 04:48:27 np0005604790 python3.9[144261]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  2 04:48:27 np0005604790 systemd[1]: Started libpod-conmon-5d1b8d0b132ead746ede316aa9a3f4447e44ac52aa6e6f2d299727c1717fc640.scope.
Feb  2 04:48:28 np0005604790 podman[144269]: 2026-02-02 09:48:27.913107248 +0000 UTC m=+0.043402844 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:48:28 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:48:28 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/463d37aa8f8a2ec156655140552b869657f6de2a16727b1fbb2d2a02e7cb173f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 04:48:28 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/463d37aa8f8a2ec156655140552b869657f6de2a16727b1fbb2d2a02e7cb173f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:48:28 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/463d37aa8f8a2ec156655140552b869657f6de2a16727b1fbb2d2a02e7cb173f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:48:28 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/463d37aa8f8a2ec156655140552b869657f6de2a16727b1fbb2d2a02e7cb173f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 04:48:28 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/463d37aa8f8a2ec156655140552b869657f6de2a16727b1fbb2d2a02e7cb173f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 04:48:28 np0005604790 podman[144269]: 2026-02-02 09:48:28.060250917 +0000 UTC m=+0.190546493 container init 5d1b8d0b132ead746ede316aa9a3f4447e44ac52aa6e6f2d299727c1717fc640 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_grothendieck, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb  2 04:48:28 np0005604790 podman[144269]: 2026-02-02 09:48:28.069725851 +0000 UTC m=+0.200021407 container start 5d1b8d0b132ead746ede316aa9a3f4447e44ac52aa6e6f2d299727c1717fc640 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_grothendieck, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:48:28 np0005604790 podman[144269]: 2026-02-02 09:48:28.074214651 +0000 UTC m=+0.204510257 container attach 5d1b8d0b132ead746ede316aa9a3f4447e44ac52aa6e6f2d299727c1717fc640 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_grothendieck, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 04:48:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:48:28 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df8004560 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:48:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:48:28 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df8004560 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:48:28 np0005604790 epic_grothendieck[144286]: --> passed data devices: 0 physical, 1 LVM
Feb  2 04:48:28 np0005604790 epic_grothendieck[144286]: --> All data devices are unavailable
Feb  2 04:48:28 np0005604790 systemd[1]: libpod-5d1b8d0b132ead746ede316aa9a3f4447e44ac52aa6e6f2d299727c1717fc640.scope: Deactivated successfully.
Feb  2 04:48:28 np0005604790 podman[144302]: 2026-02-02 09:48:28.552115308 +0000 UTC m=+0.042892990 container died 5d1b8d0b132ead746ede316aa9a3f4447e44ac52aa6e6f2d299727c1717fc640 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_grothendieck, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:48:28 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:48:28 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:48:28 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:48:28.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:48:28 np0005604790 systemd[1]: var-lib-containers-storage-overlay-463d37aa8f8a2ec156655140552b869657f6de2a16727b1fbb2d2a02e7cb173f-merged.mount: Deactivated successfully.
Feb  2 04:48:28 np0005604790 podman[144302]: 2026-02-02 09:48:28.60897709 +0000 UTC m=+0.099754722 container remove 5d1b8d0b132ead746ede316aa9a3f4447e44ac52aa6e6f2d299727c1717fc640 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_grothendieck, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 04:48:28 np0005604790 systemd[1]: libpod-conmon-5d1b8d0b132ead746ede316aa9a3f4447e44ac52aa6e6f2d299727c1717fc640.scope: Deactivated successfully.
Feb  2 04:48:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:48:28 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df8004560 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:48:28 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:48:28 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:48:28 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:48:28.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:48:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:48:28.885Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb  2 04:48:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:48:28.886Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb  2 04:48:28 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v257: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 960 B/s rd, 384 B/s wr, 1 op/s
Feb  2 04:48:29 np0005604790 podman[144412]: 2026-02-02 09:48:29.233581595 +0000 UTC m=+0.059255968 container create 359576a29d8ce24011eff4dcf0ffdcc7506cddb59e0481f0eb28b48c98b02893 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_cray, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Feb  2 04:48:29 np0005604790 systemd[1]: Started libpod-conmon-359576a29d8ce24011eff4dcf0ffdcc7506cddb59e0481f0eb28b48c98b02893.scope.
Feb  2 04:48:29 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:48:29 np0005604790 podman[144412]: 2026-02-02 09:48:29.209718266 +0000 UTC m=+0.035392719 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:48:29 np0005604790 podman[144412]: 2026-02-02 09:48:29.314217284 +0000 UTC m=+0.139891947 container init 359576a29d8ce24011eff4dcf0ffdcc7506cddb59e0481f0eb28b48c98b02893 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_cray, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Feb  2 04:48:29 np0005604790 podman[144412]: 2026-02-02 09:48:29.32041916 +0000 UTC m=+0.146093563 container start 359576a29d8ce24011eff4dcf0ffdcc7506cddb59e0481f0eb28b48c98b02893 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_cray, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Feb  2 04:48:29 np0005604790 beautiful_cray[144429]: 167 167
Feb  2 04:48:29 np0005604790 systemd[1]: libpod-359576a29d8ce24011eff4dcf0ffdcc7506cddb59e0481f0eb28b48c98b02893.scope: Deactivated successfully.
Feb  2 04:48:29 np0005604790 podman[144412]: 2026-02-02 09:48:29.324647383 +0000 UTC m=+0.150321826 container attach 359576a29d8ce24011eff4dcf0ffdcc7506cddb59e0481f0eb28b48c98b02893 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:48:29 np0005604790 podman[144412]: 2026-02-02 09:48:29.326194245 +0000 UTC m=+0.151868618 container died 359576a29d8ce24011eff4dcf0ffdcc7506cddb59e0481f0eb28b48c98b02893 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_cray, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:48:29 np0005604790 systemd[1]: var-lib-containers-storage-overlay-96b53339eebbb4b6cd26d14372ae835c6fa16ba2be56821d887a29a71fcac448-merged.mount: Deactivated successfully.
Feb  2 04:48:29 np0005604790 podman[144412]: 2026-02-02 09:48:29.366638218 +0000 UTC m=+0.192312581 container remove 359576a29d8ce24011eff4dcf0ffdcc7506cddb59e0481f0eb28b48c98b02893 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid)
Feb  2 04:48:29 np0005604790 systemd[1]: libpod-conmon-359576a29d8ce24011eff4dcf0ffdcc7506cddb59e0481f0eb28b48c98b02893.scope: Deactivated successfully.
Feb  2 04:48:29 np0005604790 podman[144484]: 2026-02-02 09:48:29.537803 +0000 UTC m=+0.052732303 container create 557651a334b259464429215976fbf6634920ec8d40cd09b5ecf8d81b74cafe99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_napier, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Feb  2 04:48:29 np0005604790 systemd[1]: Started libpod-conmon-557651a334b259464429215976fbf6634920ec8d40cd09b5ecf8d81b74cafe99.scope.
Feb  2 04:48:29 np0005604790 podman[144484]: 2026-02-02 09:48:29.521498893 +0000 UTC m=+0.036428216 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:48:29 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:48:29 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/938c00985d64afcdf0337531dd64ae13ff4b9156a2710e1745d1b4ad5836bc4a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 04:48:29 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/938c00985d64afcdf0337531dd64ae13ff4b9156a2710e1745d1b4ad5836bc4a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:48:29 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/938c00985d64afcdf0337531dd64ae13ff4b9156a2710e1745d1b4ad5836bc4a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:48:29 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/938c00985d64afcdf0337531dd64ae13ff4b9156a2710e1745d1b4ad5836bc4a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 04:48:29 np0005604790 podman[144484]: 2026-02-02 09:48:29.655880292 +0000 UTC m=+0.170809665 container init 557651a334b259464429215976fbf6634920ec8d40cd09b5ecf8d81b74cafe99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_napier, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True)
Feb  2 04:48:29 np0005604790 podman[144484]: 2026-02-02 09:48:29.665639293 +0000 UTC m=+0.180568596 container start 557651a334b259464429215976fbf6634920ec8d40cd09b5ecf8d81b74cafe99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_napier, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 04:48:29 np0005604790 podman[144484]: 2026-02-02 09:48:29.669160137 +0000 UTC m=+0.184089480 container attach 557651a334b259464429215976fbf6634920ec8d40cd09b5ecf8d81b74cafe99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_napier, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 04:48:29 np0005604790 jovial_napier[144547]: {
Feb  2 04:48:29 np0005604790 jovial_napier[144547]:    "1": [
Feb  2 04:48:29 np0005604790 jovial_napier[144547]:        {
Feb  2 04:48:29 np0005604790 jovial_napier[144547]:            "devices": [
Feb  2 04:48:29 np0005604790 jovial_napier[144547]:                "/dev/loop3"
Feb  2 04:48:29 np0005604790 jovial_napier[144547]:            ],
Feb  2 04:48:29 np0005604790 jovial_napier[144547]:            "lv_name": "ceph_lv0",
Feb  2 04:48:29 np0005604790 jovial_napier[144547]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 04:48:29 np0005604790 jovial_napier[144547]:            "lv_size": "21470642176",
Feb  2 04:48:29 np0005604790 jovial_napier[144547]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d241d473-9fcb-5f74-b163-f1ca4454e7f1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fabfc705-a3af-416c-81a4-3fd4d777fb5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 04:48:29 np0005604790 jovial_napier[144547]:            "lv_uuid": "lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a",
Feb  2 04:48:29 np0005604790 jovial_napier[144547]:            "name": "ceph_lv0",
Feb  2 04:48:29 np0005604790 jovial_napier[144547]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 04:48:29 np0005604790 jovial_napier[144547]:            "tags": {
Feb  2 04:48:29 np0005604790 jovial_napier[144547]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 04:48:29 np0005604790 jovial_napier[144547]:                "ceph.block_uuid": "lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a",
Feb  2 04:48:29 np0005604790 jovial_napier[144547]:                "ceph.cephx_lockbox_secret": "",
Feb  2 04:48:29 np0005604790 jovial_napier[144547]:                "ceph.cluster_fsid": "d241d473-9fcb-5f74-b163-f1ca4454e7f1",
Feb  2 04:48:29 np0005604790 jovial_napier[144547]:                "ceph.cluster_name": "ceph",
Feb  2 04:48:29 np0005604790 jovial_napier[144547]:                "ceph.crush_device_class": "",
Feb  2 04:48:29 np0005604790 jovial_napier[144547]:                "ceph.encrypted": "0",
Feb  2 04:48:29 np0005604790 jovial_napier[144547]:                "ceph.osd_fsid": "fabfc705-a3af-416c-81a4-3fd4d777fb5f",
Feb  2 04:48:29 np0005604790 jovial_napier[144547]:                "ceph.osd_id": "1",
Feb  2 04:48:29 np0005604790 jovial_napier[144547]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 04:48:29 np0005604790 jovial_napier[144547]:                "ceph.type": "block",
Feb  2 04:48:29 np0005604790 jovial_napier[144547]:                "ceph.vdo": "0",
Feb  2 04:48:29 np0005604790 jovial_napier[144547]:                "ceph.with_tpm": "0"
Feb  2 04:48:29 np0005604790 jovial_napier[144547]:            },
Feb  2 04:48:29 np0005604790 jovial_napier[144547]:            "type": "block",
Feb  2 04:48:29 np0005604790 jovial_napier[144547]:            "vg_name": "ceph_vg0"
Feb  2 04:48:29 np0005604790 jovial_napier[144547]:        }
Feb  2 04:48:29 np0005604790 jovial_napier[144547]:    ]
Feb  2 04:48:29 np0005604790 jovial_napier[144547]: }
Feb  2 04:48:30 np0005604790 systemd[1]: libpod-557651a334b259464429215976fbf6634920ec8d40cd09b5ecf8d81b74cafe99.scope: Deactivated successfully.
Feb  2 04:48:30 np0005604790 podman[144484]: 2026-02-02 09:48:30.005619547 +0000 UTC m=+0.520548860 container died 557651a334b259464429215976fbf6634920ec8d40cd09b5ecf8d81b74cafe99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_napier, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 04:48:30 np0005604790 systemd[1]: var-lib-containers-storage-overlay-938c00985d64afcdf0337531dd64ae13ff4b9156a2710e1745d1b4ad5836bc4a-merged.mount: Deactivated successfully.
Feb  2 04:48:30 np0005604790 podman[144484]: 2026-02-02 09:48:30.057031673 +0000 UTC m=+0.571960966 container remove 557651a334b259464429215976fbf6634920ec8d40cd09b5ecf8d81b74cafe99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:48:30 np0005604790 systemd[1]: libpod-conmon-557651a334b259464429215976fbf6634920ec8d40cd09b5ecf8d81b74cafe99.scope: Deactivated successfully.
Feb  2 04:48:30 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:48:30 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df4000b60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:48:30 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:48:30 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8de40010b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:48:30 np0005604790 python3.9[144644]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb  2 04:48:30 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:48:30 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:48:30 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:48:30.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:48:30 np0005604790 podman[144753]: 2026-02-02 09:48:30.651283475 +0000 UTC m=+0.059889135 container create 806179b6740afd6c1fbdb3451aa46a4a5a276175dc504cbdacc101f2c9a2d9f7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:48:30 np0005604790 systemd[1]: Started libpod-conmon-806179b6740afd6c1fbdb3451aa46a4a5a276175dc504cbdacc101f2c9a2d9f7.scope.
Feb  2 04:48:30 np0005604790 podman[144753]: 2026-02-02 09:48:30.624657622 +0000 UTC m=+0.033263332 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:48:30 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:48:30 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:48:30 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8ddc002260 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:48:30 np0005604790 podman[144753]: 2026-02-02 09:48:30.746403162 +0000 UTC m=+0.155008882 container init 806179b6740afd6c1fbdb3451aa46a4a5a276175dc504cbdacc101f2c9a2d9f7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_archimedes, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb  2 04:48:30 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:48:30 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:48:30 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:48:30.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:48:30 np0005604790 podman[144753]: 2026-02-02 09:48:30.755994509 +0000 UTC m=+0.164600169 container start 806179b6740afd6c1fbdb3451aa46a4a5a276175dc504cbdacc101f2c9a2d9f7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_archimedes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 04:48:30 np0005604790 podman[144753]: 2026-02-02 09:48:30.760073038 +0000 UTC m=+0.168678758 container attach 806179b6740afd6c1fbdb3451aa46a4a5a276175dc504cbdacc101f2c9a2d9f7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_archimedes, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 04:48:30 np0005604790 relaxed_archimedes[144780]: 167 167
Feb  2 04:48:30 np0005604790 systemd[1]: libpod-806179b6740afd6c1fbdb3451aa46a4a5a276175dc504cbdacc101f2c9a2d9f7.scope: Deactivated successfully.
Feb  2 04:48:30 np0005604790 conmon[144780]: conmon 806179b6740afd6c1fbd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-806179b6740afd6c1fbdb3451aa46a4a5a276175dc504cbdacc101f2c9a2d9f7.scope/container/memory.events
Feb  2 04:48:30 np0005604790 podman[144753]: 2026-02-02 09:48:30.765067422 +0000 UTC m=+0.173673072 container died 806179b6740afd6c1fbdb3451aa46a4a5a276175dc504cbdacc101f2c9a2d9f7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_archimedes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:48:30 np0005604790 systemd[1]: var-lib-containers-storage-overlay-13c9680093bb354e07be634a10117b08f0f396b70fad941b383fc137e5c7c7d2-merged.mount: Deactivated successfully.
Feb  2 04:48:30 np0005604790 podman[144753]: 2026-02-02 09:48:30.819667234 +0000 UTC m=+0.228272894 container remove 806179b6740afd6c1fbdb3451aa46a4a5a276175dc504cbdacc101f2c9a2d9f7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_archimedes, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb  2 04:48:30 np0005604790 systemd[1]: libpod-conmon-806179b6740afd6c1fbdb3451aa46a4a5a276175dc504cbdacc101f2c9a2d9f7.scope: Deactivated successfully.
Feb  2 04:48:30 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v258: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 960 B/s rd, 384 B/s wr, 1 op/s
Feb  2 04:48:30 np0005604790 podman[144855]: 2026-02-02 09:48:30.981046805 +0000 UTC m=+0.054465879 container create d44137c975211bb4ffeaf4baf83b1f2ab1042c1c78e66760a792e6a5e8e0f2d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_galois, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Feb  2 04:48:31 np0005604790 systemd[1]: Started libpod-conmon-d44137c975211bb4ffeaf4baf83b1f2ab1042c1c78e66760a792e6a5e8e0f2d1.scope.
Feb  2 04:48:31 np0005604790 podman[144855]: 2026-02-02 09:48:30.956812816 +0000 UTC m=+0.030231890 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:48:31 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:48:31 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb866d5329b8c29f8f17e6795d55115c8c77c7150a21b499038df94964c0e534/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 04:48:31 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb866d5329b8c29f8f17e6795d55115c8c77c7150a21b499038df94964c0e534/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:48:31 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb866d5329b8c29f8f17e6795d55115c8c77c7150a21b499038df94964c0e534/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:48:31 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb866d5329b8c29f8f17e6795d55115c8c77c7150a21b499038df94964c0e534/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 04:48:31 np0005604790 podman[144855]: 2026-02-02 09:48:31.087593038 +0000 UTC m=+0.161012172 container init d44137c975211bb4ffeaf4baf83b1f2ab1042c1c78e66760a792e6a5e8e0f2d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_galois, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb  2 04:48:31 np0005604790 podman[144855]: 2026-02-02 09:48:31.102544098 +0000 UTC m=+0.175963172 container start d44137c975211bb4ffeaf4baf83b1f2ab1042c1c78e66760a792e6a5e8e0f2d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_galois, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:48:31 np0005604790 podman[144855]: 2026-02-02 09:48:31.109621638 +0000 UTC m=+0.183040712 container attach d44137c975211bb4ffeaf4baf83b1f2ab1042c1c78e66760a792e6a5e8e0f2d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_galois, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 04:48:31 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:48:31 np0005604790 python3[144952]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
  rule:
    proto: udp
    dport: 4789
- rule_name: 119 neutron geneve networks
  rule:
    proto: udp
    dport: 6081
    state: ["UNTRACKED"]
- rule_name: 120 neutron geneve networks no conntrack
  rule:
    proto: udp
    dport: 6081
    table: raw
    chain: OUTPUT
    jump: NOTRACK
    action: append
    state: []
- rule_name: 121 neutron geneve networks no conntrack
  rule:
    proto: udp
    dport: 6081
    table: raw
    chain: PREROUTING
    jump: NOTRACK
    action: append
    state: []
 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Feb  2 04:48:31 np0005604790 lvm[145056]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 04:48:31 np0005604790 lvm[145056]: VG ceph_vg0 finished
Feb  2 04:48:31 np0005604790 jovial_galois[144882]: {}
Feb  2 04:48:31 np0005604790 systemd[1]: libpod-d44137c975211bb4ffeaf4baf83b1f2ab1042c1c78e66760a792e6a5e8e0f2d1.scope: Deactivated successfully.
Feb  2 04:48:31 np0005604790 systemd[1]: libpod-d44137c975211bb4ffeaf4baf83b1f2ab1042c1c78e66760a792e6a5e8e0f2d1.scope: Consumed 1.178s CPU time.
Feb  2 04:48:31 np0005604790 podman[144855]: 2026-02-02 09:48:31.883676234 +0000 UTC m=+0.957095308 container died d44137c975211bb4ffeaf4baf83b1f2ab1042c1c78e66760a792e6a5e8e0f2d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_galois, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb  2 04:48:31 np0005604790 systemd[1]: var-lib-containers-storage-overlay-bb866d5329b8c29f8f17e6795d55115c8c77c7150a21b499038df94964c0e534-merged.mount: Deactivated successfully.
Feb  2 04:48:31 np0005604790 podman[144855]: 2026-02-02 09:48:31.935545173 +0000 UTC m=+1.008964247 container remove d44137c975211bb4ffeaf4baf83b1f2ab1042c1c78e66760a792e6a5e8e0f2d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb  2 04:48:31 np0005604790 systemd[1]: libpod-conmon-d44137c975211bb4ffeaf4baf83b1f2ab1042c1c78e66760a792e6a5e8e0f2d1.scope: Deactivated successfully.
Feb  2 04:48:32 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 04:48:32 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:48:32 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 04:48:32 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:48:32 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 04:48:32 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 04:48:32 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:48:32 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df8004560 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:48:32 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:48:32 : epoch 6980722a : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 04:48:32 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:48:32 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:48:32 np0005604790 python3.9[145219]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:48:32 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:48:32 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df4000b60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:48:32 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:48:32 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:48:32 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:48:32.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:48:32 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:48:32 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8de40010b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:48:32 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:48:32 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:48:32 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:48:32.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:48:32 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v259: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 960 B/s rd, 384 B/s wr, 1 op/s
Feb  2 04:48:33 np0005604790 python3.9[145371]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:48:33 np0005604790 python3.9[145449]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:48:34 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:48:34 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8ddc0023e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:48:34 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:48:34 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df8004560 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:48:34 np0005604790 python3.9[145628]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:48:34 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:48:34 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:48:34 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:48:34.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:48:34 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:48:34 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df4000b60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:48:34 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:48:34 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:48:34 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:48:34.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:48:34 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:09:48:34] "GET /metrics HTTP/1.1" 200 48335 "" "Prometheus/2.51.0"
Feb  2 04:48:34 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:09:48:34] "GET /metrics HTTP/1.1" 200 48335 "" "Prometheus/2.51.0"
Feb  2 04:48:34 np0005604790 python3.9[145706]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.a4pfo93e recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:48:34 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v260: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 672 B/s wr, 2 op/s
Feb  2 04:48:35 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:48:35 : epoch 6980722a : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 04:48:35 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:48:35 : epoch 6980722a : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 04:48:35 np0005604790 python3.9[145858]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:48:36 np0005604790 python3.9[145938]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:48:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:48:36 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8de40010b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:48:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:48:36 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8ddc003590 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:48:36 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:48:36 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:48:36 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:48:36 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:48:36.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:48:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:48:36 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df8004560 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:48:36 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:48:36 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:48:36 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:48:36.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:48:36 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v261: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 672 B/s wr, 2 op/s
Feb  2 04:48:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:48:36.982Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:48:37 np0005604790 python3.9[146090]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:48:38 np0005604790 python3[146245]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Feb  2 04:48:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:48:38 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df4002650 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:48:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:48:38 : epoch 6980722a : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Feb  2 04:48:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:48:38 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8de40010b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:48:38 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:48:38 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:48:38 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:48:38.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:48:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:48:38 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8ddc003590 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:48:38 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:48:38 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:48:38 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:48:38.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:48:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:48:38.886Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:48:38 np0005604790 python3.9[146397]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:48:38 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v262: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Feb  2 04:48:39 np0005604790 python3.9[146522]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770025718.3751137-426-276611625678933/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:48:40 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:48:40 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df8004560 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:48:40 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:48:40 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df4002650 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:48:40 np0005604790 python3.9[146676]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:48:40 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:48:40 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:48:40 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:48:40.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:48:40 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:48:40 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8de40010b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:48:40 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:48:40 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:48:40 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:48:40.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:48:40 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v263: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Feb  2 04:48:41 np0005604790 python3.9[146801]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770025719.832148-471-59113655807512/.source.nft follow=False _original_basename=jump-chain.j2 checksum=ac8dea350c18f51f54d48dacc09613cda4c5540c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:48:41 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:48:41 np0005604790 python3.9[146954]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:48:42 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:48:42 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8ddc003590 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:48:42 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:48:42 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df8004560 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:48:42 np0005604790 python3.9[147080]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770025721.259754-516-107136032355829/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:48:42 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:48:42 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:48:42 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:48:42.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:48:42 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:48:42 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df4002650 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:48:42 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:48:42 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:48:42 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:48:42.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:48:42 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v264: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Feb  2 04:48:43 np0005604790 python3.9[147232]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:48:43 np0005604790 python3.9[147357]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770025722.608226-561-200414063323487/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:48:43 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-nfs-cephfs-compute-0-ooxkuo[97796]: [WARNING] 032/094843 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Feb  2 04:48:44 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:48:44 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8de40010b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:48:44 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:48:44 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8ddc003590 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:48:44 np0005604790 python3.9[147511]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:48:44 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:48:44 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:48:44 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:48:44.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:48:44 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:48:44 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df8004560 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:48:44 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:48:44 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:48:44 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:48:44.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:48:44 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:09:48:44] "GET /metrics HTTP/1.1" 200 48335 "" "Prometheus/2.51.0"
Feb  2 04:48:44 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:09:48:44] "GET /metrics HTTP/1.1" 200 48335 "" "Prometheus/2.51.0"
Feb  2 04:48:44 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v265: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Feb  2 04:48:45 np0005604790 python3.9[147636]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770025723.910315-606-73348263908099/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:48:45 np0005604790 python3.9[147789]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:48:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:48:46 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8df4003750 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:48:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[119341]: 02/02/2026 09:48:46 : epoch 6980722a : compute-0 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8de40010b0 fd 42 proxy ignored for local
Feb  2 04:48:46 np0005604790 kernel: ganesha.nfsd[142969]: segfault at 50 ip 00007f8e8ed9332e sp 00007f8df2ffc210 error 4 in libntirpc.so.5.8[7f8e8ed78000+2c000] likely on CPU 6 (core 0, socket 6)
Feb  2 04:48:46 np0005604790 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Feb  2 04:48:46 np0005604790 systemd[1]: Started Process Core Dump (PID 147914/UID 0).
Feb  2 04:48:46 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:48:46 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:48:46 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:48:46 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:48:46.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 04:48:46 np0005604790 python3.9[147944]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:48:46 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:48:46 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:48:46 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:48:46.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:48:46 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v266: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Feb  2 04:48:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:48:46.982Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:48:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 04:48:47 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 04:48:47 np0005604790 systemd-coredump[147915]: Process 119345 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 66:#012#0  0x00007f8e8ed9332e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012#1  0x0000000000000000 n/a (n/a + 0x0)#012#2  0x00007f8e8ed9d900 n/a (/usr/lib64/libntirpc.so.5.8 + 0x2c900)#012ELF object binary architecture: AMD x86-64
Feb  2 04:48:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:48:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:48:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:48:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:48:47 np0005604790 systemd[1]: systemd-coredump@1-147914-0.service: Deactivated successfully.
Feb  2 04:48:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:48:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:48:47 np0005604790 podman[148053]: 2026-02-02 09:48:47.322613739 +0000 UTC m=+0.043454195 container died 69289cca0b92167fdd35a88d308662ef16e1dea879976039560ec621316133bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 04:48:47 np0005604790 systemd[1]: var-lib-containers-storage-overlay-24c71bd2f24efed9992ec0b8551d31d7876a427e957e903cd05f40663750662f-merged.mount: Deactivated successfully.
Feb  2 04:48:47 np0005604790 podman[148053]: 2026-02-02 09:48:47.376564002 +0000 UTC m=+0.097404408 container remove 69289cca0b92167fdd35a88d308662ef16e1dea879976039560ec621316133bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Feb  2 04:48:47 np0005604790 systemd[1]: ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1@nfs.cephfs.2.0.compute-0.fdwwab.service: Main process exited, code=exited, status=139/n/a
Feb  2 04:48:47 np0005604790 systemd[1]: ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1@nfs.cephfs.2.0.compute-0.fdwwab.service: Failed with result 'exit-code'.
Feb  2 04:48:47 np0005604790 systemd[1]: ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1@nfs.cephfs.2.0.compute-0.fdwwab.service: Consumed 1.508s CPU time.
Feb  2 04:48:47 np0005604790 python3.9[148119]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:48:48 np0005604790 python3.9[148300]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:48:48 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:48:48 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:48:48 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:48:48.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:48:48 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:48:48 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:48:48 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:48:48.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:48:48 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:48:48.887Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:48:48 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v267: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Feb  2 04:48:49 np0005604790 python3.9[148453]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 04:48:49 np0005604790 python3.9[148608]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:48:50 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:48:50 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:48:50 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:48:50.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:48:50 np0005604790 python3.9[148764]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:48:50 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:48:50 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:48:50 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:48:50.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:48:50 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v268: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Feb  2 04:48:51 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:48:52 np0005604790 python3.9[148916]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 04:48:52 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-nfs-cephfs-compute-0-ooxkuo[97796]: [WARNING] 032/094852 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Feb  2 04:48:52 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:48:52 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:48:52 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:48:52.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 04:48:52 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:48:52 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:48:52 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:48:52.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 04:48:52 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v269: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Feb  2 04:48:53 np0005604790 python3.9[149069]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:0e:0a:9e:41:65:cf" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:48:53 np0005604790 ovs-vsctl[149070]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:0e:0a:9e:41:65:cf external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Feb  2 04:48:54 np0005604790 python3.9[149228]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:48:54 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:48:54 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:48:54 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:48:54.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:48:54 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:48:54 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:48:54 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:48:54.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:48:54 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:09:48:54] "GET /metrics HTTP/1.1" 200 48332 "" "Prometheus/2.51.0"
Feb  2 04:48:54 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:09:48:54] "GET /metrics HTTP/1.1" 200 48332 "" "Prometheus/2.51.0"
Feb  2 04:48:54 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v270: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Feb  2 04:48:55 np0005604790 python3.9[149404]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:48:55 np0005604790 ovs-vsctl[149405]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Feb  2 04:48:55 np0005604790 python3.9[149556]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 04:48:56 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:48:56 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:48:56 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:48:56 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:48:56.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:48:56 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:48:56 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:48:56 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:48:56.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:48:56 np0005604790 python3.9[149711]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb  2 04:48:56 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v271: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb  2 04:48:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:48:56.982Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:48:57 np0005604790 python3.9[149863]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:48:57 np0005604790 systemd[1]: ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1@nfs.cephfs.2.0.compute-0.fdwwab.service: Scheduled restart job, restart counter is at 2.
Feb  2 04:48:57 np0005604790 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.fdwwab for d241d473-9fcb-5f74-b163-f1ca4454e7f1.
Feb  2 04:48:57 np0005604790 systemd[1]: ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1@nfs.cephfs.2.0.compute-0.fdwwab.service: Consumed 1.508s CPU time.
Feb  2 04:48:57 np0005604790 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.fdwwab for d241d473-9fcb-5f74-b163-f1ca4454e7f1...
Feb  2 04:48:58 np0005604790 python3.9[149960]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 04:48:58 np0005604790 podman[149987]: 2026-02-02 09:48:58.043530471 +0000 UTC m=+0.056240057 container create 4286937416d12bb30a1d89c2e68575a0da0b3ea567a3ca0405e46c983452c889 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 04:48:58 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70170b35864f49b812d4846571823ddbfb987e049842dd417e30f3fb0508ce6c/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Feb  2 04:48:58 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70170b35864f49b812d4846571823ddbfb987e049842dd417e30f3fb0508ce6c/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 04:48:58 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70170b35864f49b812d4846571823ddbfb987e049842dd417e30f3fb0508ce6c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:48:58 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70170b35864f49b812d4846571823ddbfb987e049842dd417e30f3fb0508ce6c/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.fdwwab-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 04:48:58 np0005604790 podman[149987]: 2026-02-02 09:48:58.015948052 +0000 UTC m=+0.028657628 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:48:58 np0005604790 podman[149987]: 2026-02-02 09:48:58.122203896 +0000 UTC m=+0.134913532 container init 4286937416d12bb30a1d89c2e68575a0da0b3ea567a3ca0405e46c983452c889 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Feb  2 04:48:58 np0005604790 podman[149987]: 2026-02-02 09:48:58.127448917 +0000 UTC m=+0.140158493 container start 4286937416d12bb30a1d89c2e68575a0da0b3ea567a3ca0405e46c983452c889 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Feb  2 04:48:58 np0005604790 bash[149987]: 4286937416d12bb30a1d89c2e68575a0da0b3ea567a3ca0405e46c983452c889
Feb  2 04:48:58 np0005604790 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.fdwwab for d241d473-9fcb-5f74-b163-f1ca4454e7f1.
Feb  2 04:48:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:48:58 : epoch 6980730a : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Feb  2 04:48:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:48:58 : epoch 6980730a : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Feb  2 04:48:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:48:58 : epoch 6980730a : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Feb  2 04:48:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:48:58 : epoch 6980730a : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Feb  2 04:48:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:48:58 : epoch 6980730a : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Feb  2 04:48:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:48:58 : epoch 6980730a : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Feb  2 04:48:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:48:58 : epoch 6980730a : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Feb  2 04:48:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:48:58 : epoch 6980730a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 04:48:58 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:48:58 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:48:58 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:48:58.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:48:58 np0005604790 python3.9[150196]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:48:58 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:48:58 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:48:58 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:48:58.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:48:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:48:58.889Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:48:58 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v272: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb  2 04:48:59 np0005604790 python3.9[150274]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 04:48:59 np0005604790 python3.9[150427]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:49:00 np0005604790 python3.9[150580]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:49:00 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:49:00 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:49:00 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:49:00.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:49:00 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:49:00 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:49:00 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:49:00.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:49:00 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v273: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb  2 04:49:01 np0005604790 python3.9[150658]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:49:01 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:49:01 np0005604790 python3.9[150811]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:49:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 04:49:02 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 04:49:02 np0005604790 python3.9[150890]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:49:02 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:49:02 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:49:02 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:49:02.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:49:02 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:49:02 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:49:02 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:49:02.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:49:02 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v274: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb  2 04:49:03 np0005604790 python3.9[151042]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 04:49:03 np0005604790 systemd[1]: Reloading.
Feb  2 04:49:03 np0005604790 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 04:49:03 np0005604790 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 04:49:04 np0005604790 python3.9[151234]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:49:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:04 : epoch 6980730a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 04:49:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:04 : epoch 6980730a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 04:49:04 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:49:04 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:49:04 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:49:04.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:49:04 np0005604790 python3.9[151312]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:49:04 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:49:04 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:49:04 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:49:04.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:49:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:09:49:04] "GET /metrics HTTP/1.1" 200 48328 "" "Prometheus/2.51.0"
Feb  2 04:49:04 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:09:49:04] "GET /metrics HTTP/1.1" 200 48328 "" "Prometheus/2.51.0"
Feb  2 04:49:04 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v275: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 596 B/s wr, 1 op/s
Feb  2 04:49:05 np0005604790 python3.9[151464]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:49:05 np0005604790 python3.9[151543]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:49:06 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:49:06 np0005604790 python3.9[151696]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 04:49:06 np0005604790 systemd[1]: Reloading.
Feb  2 04:49:06 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:49:06 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:49:06 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:49:06.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:49:06 np0005604790 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 04:49:06 np0005604790 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 04:49:06 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:49:06 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000028s ======
Feb  2 04:49:06 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:49:06.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb  2 04:49:06 np0005604790 systemd[1]: Starting Create netns directory...
Feb  2 04:49:06 np0005604790 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Feb  2 04:49:06 np0005604790 systemd[1]: netns-placeholder.service: Deactivated successfully.
Feb  2 04:49:06 np0005604790 systemd[1]: Finished Create netns directory.
Feb  2 04:49:06 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v276: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 596 B/s wr, 1 op/s
Feb  2 04:49:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:49:06.984Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb  2 04:49:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:49:06.985Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb  2 04:49:07 np0005604790 python3.9[151889]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 04:49:08 np0005604790 python3.9[152042]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:49:08 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:49:08 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:49:08 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:49:08.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:49:08 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:49:08 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:49:08 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:49:08.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:49:08 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:49:08.890Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb  2 04:49:08 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:49:08.890Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb  2 04:49:08 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:49:08.891Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb  2 04:49:08 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v277: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Feb  2 04:49:09 np0005604790 python3.9[152165]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770025748.0483577-1359-166351784261613/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb  2 04:49:10 np0005604790 python3.9[152319]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:49:10 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:10 : epoch 6980730a : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Feb  2 04:49:10 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:10 : epoch 6980730a : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Feb  2 04:49:10 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:10 : epoch 6980730a : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Feb  2 04:49:10 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:10 : epoch 6980730a : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Feb  2 04:49:10 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:10 : epoch 6980730a : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Feb  2 04:49:10 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:10 : epoch 6980730a : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Feb  2 04:49:10 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:10 : epoch 6980730a : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Feb  2 04:49:10 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:10 : epoch 6980730a : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Feb  2 04:49:10 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:10 : epoch 6980730a : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Feb  2 04:49:10 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:10 : epoch 6980730a : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Feb  2 04:49:10 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:10 : epoch 6980730a : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Feb  2 04:49:10 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:10 : epoch 6980730a : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Feb  2 04:49:10 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:10 : epoch 6980730a : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Feb  2 04:49:10 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:10 : epoch 6980730a : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Feb  2 04:49:10 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:10 : epoch 6980730a : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Feb  2 04:49:10 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:10 : epoch 6980730a : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Feb  2 04:49:10 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:10 : epoch 6980730a : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Feb  2 04:49:10 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:10 : epoch 6980730a : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Feb  2 04:49:10 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:10 : epoch 6980730a : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Feb  2 04:49:10 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:10 : epoch 6980730a : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Feb  2 04:49:10 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:10 : epoch 6980730a : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Feb  2 04:49:10 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:10 : epoch 6980730a : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Feb  2 04:49:10 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:10 : epoch 6980730a : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Feb  2 04:49:10 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:10 : epoch 6980730a : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Feb  2 04:49:10 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:10 : epoch 6980730a : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Feb  2 04:49:10 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:10 : epoch 6980730a : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Feb  2 04:49:10 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:10 : epoch 6980730a : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Feb  2 04:49:10 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:10 : epoch 6980730a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e50000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:49:10 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:49:10 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:49:10 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:49:10.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:49:10 np0005604790 ceph-osd[82705]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb  2 04:49:10 np0005604790 ceph-osd[82705]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 6346 writes, 26K keys, 6346 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 6346 writes, 1060 syncs, 5.99 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 6346 writes, 26K keys, 6346 commit groups, 1.0 writes per commit group, ingest: 19.62 MB, 0.03 MB/s#012Interval WAL: 6346 writes, 1060 syncs, 5.99 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x564f13165350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x564f13165350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Feb  2 04:49:10 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:10 : epoch 6980730a : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e4c001240 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:49:10 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:49:10 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:49:10 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:49:10.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:49:10 np0005604790 python3.9[152485]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb  2 04:49:10 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v278: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Feb  2 04:49:11 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:49:11 np0005604790 python3.9[152637]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:49:12 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:12 : epoch 6980730a : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e3c000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:49:12 np0005604790 python3.9[152762]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1770025751.0974302-1458-141127795412158/.source.json _original_basename=.zp0papf3 follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:49:12 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-nfs-cephfs-compute-0-ooxkuo[97796]: [WARNING] 032/094912 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Feb  2 04:49:12 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:12 : epoch 6980730a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e50000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:49:12 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:49:12 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:49:12 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:49:12.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:49:12 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:12 : epoch 6980730a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e400016e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:49:12 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:49:12 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:49:12 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:49:12.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:49:12 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v279: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Feb  2 04:49:13 np0005604790 python3.9[152913]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:49:14 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:14 : epoch 6980730a : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e4c001f40 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:49:14 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:14 : epoch 6980730a : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e3c001910 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:49:14 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:49:14 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:49:14 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:49:14.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:49:14 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:14 : epoch 6980730a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e3c001910 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:49:14 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:49:14 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:49:14 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:49:14.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:49:14 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:09:49:14] "GET /metrics HTTP/1.1" 200 48328 "" "Prometheus/2.51.0"
Feb  2 04:49:14 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:09:49:14] "GET /metrics HTTP/1.1" 200 48328 "" "Prometheus/2.51.0"
Feb  2 04:49:14 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v280: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Feb  2 04:49:16 np0005604790 python3.9[153364]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Feb  2 04:49:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:16 : epoch 6980730a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e400016e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:49:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:16 : epoch 6980730a : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e4c001f40 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:49:16 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:49:16 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:49:16 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:49:16 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:49:16.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:49:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:16 : epoch 6980730a : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e3c001910 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:49:16 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:49:16 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000028s ======
Feb  2 04:49:16 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:49:16.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb  2 04:49:16 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v281: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Feb  2 04:49:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:49:16.986Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:49:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] Optimize plan auto_2026-02-02_09:49:17
Feb  2 04:49:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 04:49:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] do_upmap
Feb  2 04:49:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] pools ['default.rgw.control', '.mgr', 'backups', 'cephfs.cephfs.meta', 'vms', 'default.rgw.log', '.rgw.root', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.data', 'images', '.nfs']
Feb  2 04:49:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 04:49:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 04:49:17 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 04:49:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:49:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:49:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:49:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:49:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 04:49:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:49:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 04:49:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:49:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 04:49:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:49:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 04:49:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:49:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 04:49:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:49:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 04:49:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:49:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Feb  2 04:49:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:49:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 04:49:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:49:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 04:49:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:49:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Feb  2 04:49:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:49:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 04:49:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:49:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 04:49:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:49:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb  2 04:49:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 04:49:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 04:49:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:49:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:49:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 04:49:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 04:49:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 04:49:17 np0005604790 python3.9[153516]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Feb  2 04:49:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 04:49:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 04:49:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 04:49:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 04:49:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 04:49:18 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:18 : epoch 6980730a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e3c001910 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:49:18 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:18 : epoch 6980730a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e400023f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:49:18 np0005604790 python3[153670]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json containers=['ovn_controller'] log_base_path=/var/log/containers/stdouts debug=False
Feb  2 04:49:18 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:49:18 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:49:18 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:49:18.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:49:18 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:18 : epoch 6980730a : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e4c002c50 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:49:18 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:49:18 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:49:18 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:49:18.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:49:18 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:49:18.892Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb  2 04:49:18 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:49:18.892Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb  2 04:49:18 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:49:18.892Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb  2 04:49:18 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v282: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Feb  2 04:49:20 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:20 : epoch 6980730a : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e3c001910 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:49:20 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:20 : epoch 6980730a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e3c001910 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:49:20 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:49:20 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000028s ======
Feb  2 04:49:20 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:49:20.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb  2 04:49:20 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:20 : epoch 6980730a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e400023f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:49:20 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:49:20 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:49:20 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:49:20.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:49:20 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v283: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Feb  2 04:49:21 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:49:21 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-nfs-cephfs-compute-0-ooxkuo[97796]: [WARNING] 032/094921 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Feb  2 04:49:22 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:22 : epoch 6980730a : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e4c002c50 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:49:22 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:22 : epoch 6980730a : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e3c001910 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:49:22 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:49:22 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:49:22 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:49:22.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:49:22 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:22 : epoch 6980730a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e3c001910 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:49:22 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:49:22 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:49:22 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:49:22.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:49:22 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v284: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Feb  2 04:49:23 np0005604790 podman[153684]: 2026-02-02 09:49:23.185967507 +0000 UTC m=+4.650104160 image pull 9f8c6308802db66f6c1100257e3fa9593740e85d82f038b4185cf756493dc94e quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e
Feb  2 04:49:23 np0005604790 podman[153809]: 2026-02-02 09:49:23.345947434 +0000 UTC m=+0.061890782 container create e39122d9482da8df802204ab6c35fb7c982874580968140e6f06bdfc8eefae36 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e, name=ovn_controller, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, config_id=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller)
Feb  2 04:49:23 np0005604790 podman[153809]: 2026-02-02 09:49:23.31376727 +0000 UTC m=+0.029710658 image pull 9f8c6308802db66f6c1100257e3fa9593740e85d82f038b4185cf756493dc94e quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e
Feb  2 04:49:23 np0005604790 python3[153670]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121 --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e
Feb  2 04:49:24 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:24 : epoch 6980730a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e400023f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:49:24 np0005604790 python3.9[154002]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 04:49:24 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:24 : epoch 6980730a : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e4c002c50 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:49:24 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:49:24 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:49:24 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:49:24.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:49:24 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:24 : epoch 6980730a : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e3c001910 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:49:24 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:49:24 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:49:24 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:49:24.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:49:24 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:09:49:24] "GET /metrics HTTP/1.1" 200 48331 "" "Prometheus/2.51.0"
Feb  2 04:49:24 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:09:49:24] "GET /metrics HTTP/1.1" 200 48331 "" "Prometheus/2.51.0"
Feb  2 04:49:24 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v285: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Feb  2 04:49:25 np0005604790 python3.9[154156]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:49:25 np0005604790 python3.9[154232]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 04:49:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:26 : epoch 6980730a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e500089d0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:49:26 np0005604790 python3.9[154385]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1770025765.6144137-1692-123241609809755/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:49:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:26 : epoch 6980730a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e400023f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:49:26 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:49:26 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:49:26 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:49:26 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:49:26.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:49:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:26 : epoch 6980730a : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e4c002c50 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:49:26 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:49:26 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:49:26 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:49:26.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:49:26 np0005604790 python3.9[154461]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb  2 04:49:26 np0005604790 systemd[1]: Reloading.
Feb  2 04:49:26 np0005604790 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 04:49:26 np0005604790 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 04:49:26 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v286: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb  2 04:49:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:49:26.988Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:49:27 np0005604790 python3.9[154573]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 04:49:27 np0005604790 systemd[1]: Reloading.
Feb  2 04:49:27 np0005604790 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 04:49:27 np0005604790 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 04:49:28 np0005604790 systemd[1]: Starting ovn_controller container...
Feb  2 04:49:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:28 : epoch 6980730a : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e3c001910 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:49:28 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:49:28 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79ccf887ef036be5e079172275f086adbd2354f062665e49d2418d03a8ee4285/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Feb  2 04:49:28 np0005604790 systemd[1]: Started /usr/bin/podman healthcheck run e39122d9482da8df802204ab6c35fb7c982874580968140e6f06bdfc8eefae36.
Feb  2 04:49:28 np0005604790 podman[154615]: 2026-02-02 09:49:28.281334625 +0000 UTC m=+0.154103256 container init e39122d9482da8df802204ab6c35fb7c982874580968140e6f06bdfc8eefae36 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e, name=ovn_controller, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Feb  2 04:49:28 np0005604790 ovn_controller[154631]: + sudo -E kolla_set_configs
Feb  2 04:49:28 np0005604790 podman[154615]: 2026-02-02 09:49:28.306058364 +0000 UTC m=+0.178826985 container start e39122d9482da8df802204ab6c35fb7c982874580968140e6f06bdfc8eefae36 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb  2 04:49:28 np0005604790 edpm-start-podman-container[154615]: ovn_controller
Feb  2 04:49:28 np0005604790 systemd[1]: Created slice User Slice of UID 0.
Feb  2 04:49:28 np0005604790 systemd[1]: Starting User Runtime Directory /run/user/0...
Feb  2 04:49:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:28 : epoch 6980730a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e500092f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:49:28 np0005604790 systemd[1]: Finished User Runtime Directory /run/user/0.
Feb  2 04:49:28 np0005604790 systemd[1]: Starting User Manager for UID 0...
Feb  2 04:49:28 np0005604790 podman[154638]: 2026-02-02 09:49:28.402453874 +0000 UTC m=+0.086484878 container health_status e39122d9482da8df802204ab6c35fb7c982874580968140e6f06bdfc8eefae36 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb  2 04:49:28 np0005604790 edpm-start-podman-container[154614]: Creating additional drop-in dependency for "ovn_controller" (e39122d9482da8df802204ab6c35fb7c982874580968140e6f06bdfc8eefae36)
Feb  2 04:49:28 np0005604790 systemd[1]: e39122d9482da8df802204ab6c35fb7c982874580968140e6f06bdfc8eefae36-72a4a8d8f632219c.service: Main process exited, code=exited, status=1/FAILURE
Feb  2 04:49:28 np0005604790 systemd[1]: e39122d9482da8df802204ab6c35fb7c982874580968140e6f06bdfc8eefae36-72a4a8d8f632219c.service: Failed with result 'exit-code'.
Feb  2 04:49:28 np0005604790 systemd[1]: Reloading.
Feb  2 04:49:28 np0005604790 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 04:49:28 np0005604790 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 04:49:28 np0005604790 systemd[154668]: Queued start job for default target Main User Target.
Feb  2 04:49:28 np0005604790 systemd[154668]: Created slice User Application Slice.
Feb  2 04:49:28 np0005604790 systemd[154668]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Feb  2 04:49:28 np0005604790 systemd[154668]: Started Daily Cleanup of User's Temporary Directories.
Feb  2 04:49:28 np0005604790 systemd[154668]: Reached target Paths.
Feb  2 04:49:28 np0005604790 systemd[154668]: Reached target Timers.
Feb  2 04:49:28 np0005604790 systemd[154668]: Starting D-Bus User Message Bus Socket...
Feb  2 04:49:28 np0005604790 systemd[154668]: Starting Create User's Volatile Files and Directories...
Feb  2 04:49:28 np0005604790 systemd[154668]: Listening on D-Bus User Message Bus Socket.
Feb  2 04:49:28 np0005604790 systemd[154668]: Reached target Sockets.
Feb  2 04:49:28 np0005604790 systemd[154668]: Finished Create User's Volatile Files and Directories.
Feb  2 04:49:28 np0005604790 systemd[154668]: Reached target Basic System.
Feb  2 04:49:28 np0005604790 systemd[154668]: Reached target Main User Target.
Feb  2 04:49:28 np0005604790 systemd[154668]: Startup finished in 139ms.
Feb  2 04:49:28 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:49:28 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000028s ======
Feb  2 04:49:28 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:49:28.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb  2 04:49:28 np0005604790 systemd[1]: Started User Manager for UID 0.
Feb  2 04:49:28 np0005604790 systemd[1]: Started ovn_controller container.
Feb  2 04:49:28 np0005604790 systemd[1]: Started Session c1 of User root.
Feb  2 04:49:28 np0005604790 ovn_controller[154631]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Feb  2 04:49:28 np0005604790 ovn_controller[154631]: INFO:__main__:Validating config file
Feb  2 04:49:28 np0005604790 ovn_controller[154631]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Feb  2 04:49:28 np0005604790 ovn_controller[154631]: INFO:__main__:Writing out command to execute
Feb  2 04:49:28 np0005604790 systemd[1]: session-c1.scope: Deactivated successfully.
Feb  2 04:49:28 np0005604790 ovn_controller[154631]: ++ cat /run_command
Feb  2 04:49:28 np0005604790 ovn_controller[154631]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Feb  2 04:49:28 np0005604790 ovn_controller[154631]: + ARGS=
Feb  2 04:49:28 np0005604790 ovn_controller[154631]: + sudo kolla_copy_cacerts
Feb  2 04:49:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:28 : epoch 6980730a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e400023f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:49:28 np0005604790 systemd[1]: Started Session c2 of User root.
Feb  2 04:49:28 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:49:28 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:49:28 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:49:28.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:49:28 np0005604790 systemd[1]: session-c2.scope: Deactivated successfully.
Feb  2 04:49:28 np0005604790 ovn_controller[154631]: + [[ ! -n '' ]]
Feb  2 04:49:28 np0005604790 ovn_controller[154631]: + . kolla_extend_start
Feb  2 04:49:28 np0005604790 ovn_controller[154631]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Feb  2 04:49:28 np0005604790 ovn_controller[154631]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Feb  2 04:49:28 np0005604790 ovn_controller[154631]: + umask 0022
Feb  2 04:49:28 np0005604790 ovn_controller[154631]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Feb  2 04:49:28 np0005604790 ovn_controller[154631]: 2026-02-02T09:49:28Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Feb  2 04:49:28 np0005604790 ovn_controller[154631]: 2026-02-02T09:49:28Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Feb  2 04:49:28 np0005604790 ovn_controller[154631]: 2026-02-02T09:49:28Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8]
Feb  2 04:49:28 np0005604790 ovn_controller[154631]: 2026-02-02T09:49:28Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Feb  2 04:49:28 np0005604790 ovn_controller[154631]: 2026-02-02T09:49:28Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Feb  2 04:49:28 np0005604790 ovn_controller[154631]: 2026-02-02T09:49:28Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Feb  2 04:49:28 np0005604790 NetworkManager[49024]: <info>  [1770025768.8477] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Feb  2 04:49:28 np0005604790 NetworkManager[49024]: <info>  [1770025768.8482] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb  2 04:49:28 np0005604790 NetworkManager[49024]: <warn>  [1770025768.8484] device (br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Feb  2 04:49:28 np0005604790 NetworkManager[49024]: <info>  [1770025768.8489] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Feb  2 04:49:28 np0005604790 NetworkManager[49024]: <info>  [1770025768.8492] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Feb  2 04:49:28 np0005604790 NetworkManager[49024]: <info>  [1770025768.8494] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Feb  2 04:49:28 np0005604790 kernel: br-int: entered promiscuous mode
Feb  2 04:49:28 np0005604790 ovn_controller[154631]: 2026-02-02T09:49:28Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Feb  2 04:49:28 np0005604790 ovn_controller[154631]: 2026-02-02T09:49:28Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Feb  2 04:49:28 np0005604790 ovn_controller[154631]: 2026-02-02T09:49:28Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Feb  2 04:49:28 np0005604790 ovn_controller[154631]: 2026-02-02T09:49:28Z|00010|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Feb  2 04:49:28 np0005604790 ovn_controller[154631]: 2026-02-02T09:49:28Z|00011|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Feb  2 04:49:28 np0005604790 ovn_controller[154631]: 2026-02-02T09:49:28Z|00012|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Feb  2 04:49:28 np0005604790 ovn_controller[154631]: 2026-02-02T09:49:28Z|00013|features|INFO|OVS Feature: ct_zero_snat, state: supported
Feb  2 04:49:28 np0005604790 ovn_controller[154631]: 2026-02-02T09:49:28Z|00014|features|INFO|OVS Feature: ct_flush, state: supported
Feb  2 04:49:28 np0005604790 ovn_controller[154631]: 2026-02-02T09:49:28Z|00015|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Feb  2 04:49:28 np0005604790 ovn_controller[154631]: 2026-02-02T09:49:28Z|00016|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Feb  2 04:49:28 np0005604790 ovn_controller[154631]: 2026-02-02T09:49:28Z|00017|main|INFO|OVS feature set changed, force recompute.
Feb  2 04:49:28 np0005604790 ovn_controller[154631]: 2026-02-02T09:49:28Z|00018|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Feb  2 04:49:28 np0005604790 ovn_controller[154631]: 2026-02-02T09:49:28Z|00019|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Feb  2 04:49:28 np0005604790 ovn_controller[154631]: 2026-02-02T09:49:28Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Feb  2 04:49:28 np0005604790 ovn_controller[154631]: 2026-02-02T09:49:28Z|00021|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Feb  2 04:49:28 np0005604790 ovn_controller[154631]: 2026-02-02T09:49:28Z|00022|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Feb  2 04:49:28 np0005604790 ovn_controller[154631]: 2026-02-02T09:49:28Z|00023|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Feb  2 04:49:28 np0005604790 ovn_controller[154631]: 2026-02-02T09:49:28Z|00024|main|INFO|OVS feature set changed, force recompute.
Feb  2 04:49:28 np0005604790 ovn_controller[154631]: 2026-02-02T09:49:28Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Feb  2 04:49:28 np0005604790 ovn_controller[154631]: 2026-02-02T09:49:28Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Feb  2 04:49:28 np0005604790 ovn_controller[154631]: 2026-02-02T09:49:28Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Feb  2 04:49:28 np0005604790 ovn_controller[154631]: 2026-02-02T09:49:28Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Feb  2 04:49:28 np0005604790 ovn_controller[154631]: 2026-02-02T09:49:28Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Feb  2 04:49:28 np0005604790 ovn_controller[154631]: 2026-02-02T09:49:28Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Feb  2 04:49:28 np0005604790 NetworkManager[49024]: <info>  [1770025768.8719] manager: (ovn-efcb63-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Feb  2 04:49:28 np0005604790 NetworkManager[49024]: <info>  [1770025768.8737] manager: (ovn-1b0741-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/20)
Feb  2 04:49:28 np0005604790 NetworkManager[49024]: <info>  [1770025768.8749] manager: (ovn-2f54a3-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/21)
Feb  2 04:49:28 np0005604790 systemd-udevd[154767]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 04:49:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:49:28.892Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:49:28 np0005604790 kernel: genev_sys_6081: entered promiscuous mode
Feb  2 04:49:28 np0005604790 systemd-udevd[154769]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 04:49:28 np0005604790 NetworkManager[49024]: <info>  [1770025768.8977] device (genev_sys_6081): carrier: link connected
Feb  2 04:49:28 np0005604790 NetworkManager[49024]: <info>  [1770025768.8981] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/22)
Feb  2 04:49:28 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v287: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Feb  2 04:49:29 np0005604790 python3.9[154897]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Feb  2 04:49:30 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:30 : epoch 6980730a : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 04:49:30 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:30 : epoch 6980730a : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e4c002c50 fd 47 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:49:30 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:30 : epoch 6980730a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e4c002c50 fd 47 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:49:30 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:49:30 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:49:30 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:49:30.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:49:30 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:30 : epoch 6980730a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e500092f0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:49:30 np0005604790 python3.9[155051]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:49:30 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:49:30 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:49:30 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:49:30.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:49:30 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v288: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Feb  2 04:49:31 np0005604790 python3.9[155174]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770025770.2930737-1827-229153774660894/.source.yaml _original_basename=.s64sy_up follow=False checksum=49e9dd6dd1573230eefb068866cfd1da40e184ac backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:49:31 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:49:32 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 04:49:32 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 04:49:32 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:32 : epoch 6980730a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e400023f0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:49:32 np0005604790 python3.9[155328]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:49:32 np0005604790 ovs-vsctl[155329]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Feb  2 04:49:32 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:32 : epoch 6980730a : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e4c002c50 fd 47 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:49:32 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:49:32 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:49:32 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:49:32.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:49:32 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Feb  2 04:49:32 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:49:32 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Feb  2 04:49:32 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:49:32 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Feb  2 04:49:32 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:32 : epoch 6980730a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e4c002c50 fd 47 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:49:32 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:49:32 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Feb  2 04:49:32 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:49:32 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:49:32 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:49:32 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:49:32.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:49:32 np0005604790 python3.9[155546]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:49:32 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v289: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Feb  2 04:49:33 np0005604790 ovs-vsctl[155562]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Feb  2 04:49:33 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:33 : epoch 6980730a : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 04:49:33 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:33 : epoch 6980730a : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 04:49:33 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 04:49:33 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 04:49:33 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 04:49:33 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb  2 04:49:33 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v290: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Feb  2 04:49:33 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v291: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 1.3 KiB/s wr, 4 op/s
Feb  2 04:49:33 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 04:49:33 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:49:33 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Feb  2 04:49:33 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:49:33 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 04:49:33 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb  2 04:49:33 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 04:49:33 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb  2 04:49:33 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 04:49:33 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 04:49:33 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:49:33 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:49:33 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:49:33 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:49:33 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb  2 04:49:33 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:49:33 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:49:33 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb  2 04:49:34 np0005604790 python3.9[155793]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:49:34 np0005604790 ovs-vsctl[155823]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Feb  2 04:49:34 np0005604790 podman[155809]: 2026-02-02 09:49:34.124163324 +0000 UTC m=+0.065980454 container create 918feb4047e01bc7492027c22f84d548f4d8cc639c7ff1648e1e4f05b6af3cd0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_shockley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Feb  2 04:49:34 np0005604790 systemd[1]: Started libpod-conmon-918feb4047e01bc7492027c22f84d548f4d8cc639c7ff1648e1e4f05b6af3cd0.scope.
Feb  2 04:49:34 np0005604790 podman[155809]: 2026-02-02 09:49:34.0934633 +0000 UTC m=+0.035280470 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:49:34 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:34 : epoch 6980730a : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e500092f0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:49:34 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:49:34 np0005604790 podman[155809]: 2026-02-02 09:49:34.231249177 +0000 UTC m=+0.173066307 container init 918feb4047e01bc7492027c22f84d548f4d8cc639c7ff1648e1e4f05b6af3cd0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_shockley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Feb  2 04:49:34 np0005604790 podman[155809]: 2026-02-02 09:49:34.241951961 +0000 UTC m=+0.183769081 container start 918feb4047e01bc7492027c22f84d548f4d8cc639c7ff1648e1e4f05b6af3cd0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_shockley, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325)
Feb  2 04:49:34 np0005604790 podman[155809]: 2026-02-02 09:49:34.24592452 +0000 UTC m=+0.187741700 container attach 918feb4047e01bc7492027c22f84d548f4d8cc639c7ff1648e1e4f05b6af3cd0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_shockley, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb  2 04:49:34 np0005604790 xenodochial_shockley[155826]: 167 167
Feb  2 04:49:34 np0005604790 systemd[1]: libpod-918feb4047e01bc7492027c22f84d548f4d8cc639c7ff1648e1e4f05b6af3cd0.scope: Deactivated successfully.
Feb  2 04:49:34 np0005604790 podman[155809]: 2026-02-02 09:49:34.250404714 +0000 UTC m=+0.192221834 container died 918feb4047e01bc7492027c22f84d548f4d8cc639c7ff1648e1e4f05b6af3cd0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_shockley, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:49:34 np0005604790 systemd[1]: var-lib-containers-storage-overlay-5034f40815a0f1f4c20fb9b4b66a13a38ee5f49f39d22606011bff2f3b4963d9-merged.mount: Deactivated successfully.
Feb  2 04:49:34 np0005604790 podman[155809]: 2026-02-02 09:49:34.291421221 +0000 UTC m=+0.233238341 container remove 918feb4047e01bc7492027c22f84d548f4d8cc639c7ff1648e1e4f05b6af3cd0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_shockley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:49:34 np0005604790 systemd[1]: libpod-conmon-918feb4047e01bc7492027c22f84d548f4d8cc639c7ff1648e1e4f05b6af3cd0.scope: Deactivated successfully.
Feb  2 04:49:34 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:34 : epoch 6980730a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e400023f0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:49:34 np0005604790 podman[155898]: 2026-02-02 09:49:34.451606533 +0000 UTC m=+0.050865349 container create 34a73f303601d27e84c938cd6a1175fcd54766637ac502d61d945fac9f6f692b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_cray, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:49:34 np0005604790 systemd[1]: Started libpod-conmon-34a73f303601d27e84c938cd6a1175fcd54766637ac502d61d945fac9f6f692b.scope.
Feb  2 04:49:34 np0005604790 podman[155898]: 2026-02-02 09:49:34.425161066 +0000 UTC m=+0.024419892 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:49:34 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:49:34 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/484af0c20e749a2e734689374bedfffa17d8caeba35d8c31860a0eefc1c85c1b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 04:49:34 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/484af0c20e749a2e734689374bedfffa17d8caeba35d8c31860a0eefc1c85c1b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:49:34 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/484af0c20e749a2e734689374bedfffa17d8caeba35d8c31860a0eefc1c85c1b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:49:34 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/484af0c20e749a2e734689374bedfffa17d8caeba35d8c31860a0eefc1c85c1b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 04:49:34 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/484af0c20e749a2e734689374bedfffa17d8caeba35d8c31860a0eefc1c85c1b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 04:49:34 np0005604790 podman[155898]: 2026-02-02 09:49:34.580929447 +0000 UTC m=+0.180188313 container init 34a73f303601d27e84c938cd6a1175fcd54766637ac502d61d945fac9f6f692b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_cray, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb  2 04:49:34 np0005604790 podman[155898]: 2026-02-02 09:49:34.590392077 +0000 UTC m=+0.189650893 container start 34a73f303601d27e84c938cd6a1175fcd54766637ac502d61d945fac9f6f692b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Feb  2 04:49:34 np0005604790 podman[155898]: 2026-02-02 09:49:34.594182802 +0000 UTC m=+0.193441658 container attach 34a73f303601d27e84c938cd6a1175fcd54766637ac502d61d945fac9f6f692b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_cray, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:49:34 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:49:34 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:49:34 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:49:34.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:49:34 np0005604790 systemd[1]: session-50.scope: Deactivated successfully.
Feb  2 04:49:34 np0005604790 systemd[1]: session-50.scope: Consumed 57.398s CPU time.
Feb  2 04:49:34 np0005604790 systemd-logind[793]: Session 50 logged out. Waiting for processes to exit.
Feb  2 04:49:34 np0005604790 systemd-logind[793]: Removed session 50.
Feb  2 04:49:34 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:34 : epoch 6980730a : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e3c003970 fd 47 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:49:34 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:49:34 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:49:34 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:49:34.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:49:34 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:09:49:34] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Feb  2 04:49:34 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:09:49:34] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Feb  2 04:49:34 np0005604790 dazzling_cray[155915]: --> passed data devices: 0 physical, 1 LVM
Feb  2 04:49:34 np0005604790 dazzling_cray[155915]: --> All data devices are unavailable
Feb  2 04:49:34 np0005604790 systemd[1]: libpod-34a73f303601d27e84c938cd6a1175fcd54766637ac502d61d945fac9f6f692b.scope: Deactivated successfully.
Feb  2 04:49:34 np0005604790 podman[155898]: 2026-02-02 09:49:34.949044414 +0000 UTC m=+0.548303210 container died 34a73f303601d27e84c938cd6a1175fcd54766637ac502d61d945fac9f6f692b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_cray, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 04:49:34 np0005604790 systemd[1]: var-lib-containers-storage-overlay-484af0c20e749a2e734689374bedfffa17d8caeba35d8c31860a0eefc1c85c1b-merged.mount: Deactivated successfully.
Feb  2 04:49:34 np0005604790 podman[155898]: 2026-02-02 09:49:34.990403761 +0000 UTC m=+0.589662557 container remove 34a73f303601d27e84c938cd6a1175fcd54766637ac502d61d945fac9f6f692b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_cray, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 04:49:35 np0005604790 systemd[1]: libpod-conmon-34a73f303601d27e84c938cd6a1175fcd54766637ac502d61d945fac9f6f692b.scope: Deactivated successfully.
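
Each helper container above runs the same podman lifecycle in under a second: image pull, create, init, start, attach, died, remove, with systemd tearing down the matching libpod and libpod-conmon scopes. A minimal sketch that watches the same transitions live from podman's event stream; it assumes `podman events --format json` emits one JSON object per line with podman 4.x-style capitalized keys, which varies by version:

    import json
    import subprocess

    # Sketch: tail podman's event stream and print the lifecycle
    # transitions that appear in the journal above (create/init/start/...).
    proc = subprocess.Popen(
        ["podman", "events", "--format", "json"],
        stdout=subprocess.PIPE, text=True,
    )

    for line in proc.stdout:
        ev = json.loads(line)
        # Field names ("Type", "Status", "ID", "Name") assume podman 4.x JSON.
        if ev.get("Type") == "container":
            print(ev.get("Status"), ev.get("ID", "")[:12], ev.get("Name", ""))
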
Feb  2 04:49:35 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v292: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 1.3 KiB/s wr, 4 op/s
Feb  2 04:49:35 np0005604790 podman[156032]: 2026-02-02 09:49:35.599597774 +0000 UTC m=+0.101483510 container create b9c9e8e8519fd59e209b1e704fed9ecd2b2ad4b57dfdd1a2c4f37f4cbb507b81 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_poincare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 04:49:35 np0005604790 podman[156032]: 2026-02-02 09:49:35.529407505 +0000 UTC m=+0.031293221 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:49:35 np0005604790 systemd[1]: Started libpod-conmon-b9c9e8e8519fd59e209b1e704fed9ecd2b2ad4b57dfdd1a2c4f37f4cbb507b81.scope.
Feb  2 04:49:35 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:49:35 np0005604790 podman[156032]: 2026-02-02 09:49:35.682093551 +0000 UTC m=+0.183979297 container init b9c9e8e8519fd59e209b1e704fed9ecd2b2ad4b57dfdd1a2c4f37f4cbb507b81 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_poincare, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Feb  2 04:49:35 np0005604790 podman[156032]: 2026-02-02 09:49:35.688417595 +0000 UTC m=+0.190303291 container start b9c9e8e8519fd59e209b1e704fed9ecd2b2ad4b57dfdd1a2c4f37f4cbb507b81 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_poincare, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 04:49:35 np0005604790 podman[156032]: 2026-02-02 09:49:35.691724726 +0000 UTC m=+0.193610422 container attach b9c9e8e8519fd59e209b1e704fed9ecd2b2ad4b57dfdd1a2c4f37f4cbb507b81 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_poincare, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:49:35 np0005604790 systemd[1]: libpod-b9c9e8e8519fd59e209b1e704fed9ecd2b2ad4b57dfdd1a2c4f37f4cbb507b81.scope: Deactivated successfully.
Feb  2 04:49:35 np0005604790 priceless_poincare[156049]: 167 167
Feb  2 04:49:35 np0005604790 conmon[156049]: conmon b9c9e8e8519fd59e209b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b9c9e8e8519fd59e209b1e704fed9ecd2b2ad4b57dfdd1a2c4f37f4cbb507b81.scope/container/memory.events
Feb  2 04:49:35 np0005604790 podman[156032]: 2026-02-02 09:49:35.693399572 +0000 UTC m=+0.195285308 container died b9c9e8e8519fd59e209b1e704fed9ecd2b2ad4b57dfdd1a2c4f37f4cbb507b81 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_poincare, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Feb  2 04:49:35 np0005604790 systemd[1]: var-lib-containers-storage-overlay-f362e3b6a27448eb39b04d02cacfc0032c3a08d423076ff25c3e6d5a0076d430-merged.mount: Deactivated successfully.
Feb  2 04:49:35 np0005604790 podman[156032]: 2026-02-02 09:49:35.728039024 +0000 UTC m=+0.229924720 container remove b9c9e8e8519fd59e209b1e704fed9ecd2b2ad4b57dfdd1a2c4f37f4cbb507b81 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_poincare, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 04:49:35 np0005604790 systemd[1]: libpod-conmon-b9c9e8e8519fd59e209b1e704fed9ecd2b2ad4b57dfdd1a2c4f37f4cbb507b81.scope: Deactivated successfully.
Feb  2 04:49:35 np0005604790 podman[156073]: 2026-02-02 09:49:35.884869224 +0000 UTC m=+0.061852190 container create 9c168d07fd1e5de0c02016275e0753c596abb7da23aac269b3e53f168b033bc9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 04:49:35 np0005604790 systemd[1]: Started libpod-conmon-9c168d07fd1e5de0c02016275e0753c596abb7da23aac269b3e53f168b033bc9.scope.
Feb  2 04:49:35 np0005604790 podman[156073]: 2026-02-02 09:49:35.862223932 +0000 UTC m=+0.039206978 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:49:35 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:49:35 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1129a3ab45de051021085e2098bb5e790ec8d1f143915fa71841675f0e4d2bfe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 04:49:35 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1129a3ab45de051021085e2098bb5e790ec8d1f143915fa71841675f0e4d2bfe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:49:35 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1129a3ab45de051021085e2098bb5e790ec8d1f143915fa71841675f0e4d2bfe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:49:35 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1129a3ab45de051021085e2098bb5e790ec8d1f143915fa71841675f0e4d2bfe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 04:49:36 np0005604790 podman[156073]: 2026-02-02 09:49:36.004755159 +0000 UTC m=+0.181738145 container init 9c168d07fd1e5de0c02016275e0753c596abb7da23aac269b3e53f168b033bc9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_pascal, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb  2 04:49:36 np0005604790 podman[156073]: 2026-02-02 09:49:36.013588792 +0000 UTC m=+0.190571758 container start 9c168d07fd1e5de0c02016275e0753c596abb7da23aac269b3e53f168b033bc9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_pascal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Feb  2 04:49:36 np0005604790 podman[156073]: 2026-02-02 09:49:36.017420797 +0000 UTC m=+0.194403773 container attach 9c168d07fd1e5de0c02016275e0753c596abb7da23aac269b3e53f168b033bc9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_pascal, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 04:49:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:36 : epoch 6980730a : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Feb  2 04:49:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:36 : epoch 6980730a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e4c002c50 fd 47 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:49:36 np0005604790 peaceful_pascal[156089]: {
Feb  2 04:49:36 np0005604790 peaceful_pascal[156089]:    "1": [
Feb  2 04:49:36 np0005604790 peaceful_pascal[156089]:        {
Feb  2 04:49:36 np0005604790 peaceful_pascal[156089]:            "devices": [
Feb  2 04:49:36 np0005604790 peaceful_pascal[156089]:                "/dev/loop3"
Feb  2 04:49:36 np0005604790 peaceful_pascal[156089]:            ],
Feb  2 04:49:36 np0005604790 peaceful_pascal[156089]:            "lv_name": "ceph_lv0",
Feb  2 04:49:36 np0005604790 peaceful_pascal[156089]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 04:49:36 np0005604790 peaceful_pascal[156089]:            "lv_size": "21470642176",
Feb  2 04:49:36 np0005604790 peaceful_pascal[156089]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d241d473-9fcb-5f74-b163-f1ca4454e7f1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fabfc705-a3af-416c-81a4-3fd4d777fb5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 04:49:36 np0005604790 peaceful_pascal[156089]:            "lv_uuid": "lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a",
Feb  2 04:49:36 np0005604790 peaceful_pascal[156089]:            "name": "ceph_lv0",
Feb  2 04:49:36 np0005604790 peaceful_pascal[156089]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 04:49:36 np0005604790 peaceful_pascal[156089]:            "tags": {
Feb  2 04:49:36 np0005604790 peaceful_pascal[156089]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 04:49:36 np0005604790 peaceful_pascal[156089]:                "ceph.block_uuid": "lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a",
Feb  2 04:49:36 np0005604790 peaceful_pascal[156089]:                "ceph.cephx_lockbox_secret": "",
Feb  2 04:49:36 np0005604790 peaceful_pascal[156089]:                "ceph.cluster_fsid": "d241d473-9fcb-5f74-b163-f1ca4454e7f1",
Feb  2 04:49:36 np0005604790 peaceful_pascal[156089]:                "ceph.cluster_name": "ceph",
Feb  2 04:49:36 np0005604790 peaceful_pascal[156089]:                "ceph.crush_device_class": "",
Feb  2 04:49:36 np0005604790 peaceful_pascal[156089]:                "ceph.encrypted": "0",
Feb  2 04:49:36 np0005604790 peaceful_pascal[156089]:                "ceph.osd_fsid": "fabfc705-a3af-416c-81a4-3fd4d777fb5f",
Feb  2 04:49:36 np0005604790 peaceful_pascal[156089]:                "ceph.osd_id": "1",
Feb  2 04:49:36 np0005604790 peaceful_pascal[156089]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 04:49:36 np0005604790 peaceful_pascal[156089]:                "ceph.type": "block",
Feb  2 04:49:36 np0005604790 peaceful_pascal[156089]:                "ceph.vdo": "0",
Feb  2 04:49:36 np0005604790 peaceful_pascal[156089]:                "ceph.with_tpm": "0"
Feb  2 04:49:36 np0005604790 peaceful_pascal[156089]:            },
Feb  2 04:49:36 np0005604790 peaceful_pascal[156089]:            "type": "block",
Feb  2 04:49:36 np0005604790 peaceful_pascal[156089]:            "vg_name": "ceph_vg0"
Feb  2 04:49:36 np0005604790 peaceful_pascal[156089]:        }
Feb  2 04:49:36 np0005604790 peaceful_pascal[156089]:    ]
Feb  2 04:49:36 np0005604790 peaceful_pascal[156089]: }
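
The peaceful_pascal output above is `ceph-volume lvm list --format json`: one existing OSD (osd.1, fsid fabfc705-a3af-416c-81a4-3fd4d777fb5f) on LV /dev/ceph_vg0/ceph_lv0, backed by /dev/loop3. A sketch that extracts the useful fields from exactly this JSON shape, with the listing abbreviated to the fields the sketch actually reads:

    import json

    # Sketch: pull the OSD-to-device mapping out of the
    # "ceph-volume lvm list --format json" blob captured above.
    LISTING = json.loads("""
    {
      "1": [
        {
          "devices": ["/dev/loop3"],
          "lv_path": "/dev/ceph_vg0/ceph_lv0",
          "type": "block",
          "tags": {"ceph.osd_fsid": "fabfc705-a3af-416c-81a4-3fd4d777fb5f"}
        }
      ]
    }
    """)

    for osd_id, lvs in LISTING.items():
        for lv in lvs:
            print(f"osd.{osd_id}: {lv['type']} on {lv['lv_path']} "
                  f"(pv {','.join(lv['devices'])}, fsid {lv['tags']['ceph.osd_fsid']})")
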
Feb  2 04:49:36 np0005604790 systemd[1]: libpod-9c168d07fd1e5de0c02016275e0753c596abb7da23aac269b3e53f168b033bc9.scope: Deactivated successfully.
Feb  2 04:49:36 np0005604790 podman[156073]: 2026-02-02 09:49:36.337621738 +0000 UTC m=+0.514604694 container died 9c168d07fd1e5de0c02016275e0753c596abb7da23aac269b3e53f168b033bc9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_pascal, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb  2 04:49:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:36 : epoch 6980730a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e4c002c50 fd 47 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:49:36 np0005604790 systemd[1]: var-lib-containers-storage-overlay-1129a3ab45de051021085e2098bb5e790ec8d1f143915fa71841675f0e4d2bfe-merged.mount: Deactivated successfully.
Feb  2 04:49:36 np0005604790 podman[156073]: 2026-02-02 09:49:36.388772843 +0000 UTC m=+0.565755819 container remove 9c168d07fd1e5de0c02016275e0753c596abb7da23aac269b3e53f168b033bc9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 04:49:36 np0005604790 systemd[1]: libpod-conmon-9c168d07fd1e5de0c02016275e0753c596abb7da23aac269b3e53f168b033bc9.scope: Deactivated successfully.
Feb  2 04:49:36 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:49:36 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:49:36 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:49:36 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:49:36.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:49:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:36 : epoch 6980730a : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e400023f0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:49:36 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:49:36 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:49:36 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:49:36.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:49:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:49:36.989Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:49:37 np0005604790 podman[156202]: 2026-02-02 09:49:37.080937947 +0000 UTC m=+0.061387979 container create 269d12fd9f2b683cac98d76b923217b2b71044f2bb41aace6b1ea8f61751f883 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_euler, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:49:37 np0005604790 systemd[1]: Started libpod-conmon-269d12fd9f2b683cac98d76b923217b2b71044f2bb41aace6b1ea8f61751f883.scope.
Feb  2 04:49:37 np0005604790 podman[156202]: 2026-02-02 09:49:37.056017032 +0000 UTC m=+0.036467124 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:49:37 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:49:37 np0005604790 podman[156202]: 2026-02-02 09:49:37.171945698 +0000 UTC m=+0.152395780 container init 269d12fd9f2b683cac98d76b923217b2b71044f2bb41aace6b1ea8f61751f883 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Feb  2 04:49:37 np0005604790 podman[156202]: 2026-02-02 09:49:37.181805199 +0000 UTC m=+0.162255241 container start 269d12fd9f2b683cac98d76b923217b2b71044f2bb41aace6b1ea8f61751f883 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Feb  2 04:49:37 np0005604790 podman[156202]: 2026-02-02 09:49:37.185176711 +0000 UTC m=+0.165626783 container attach 269d12fd9f2b683cac98d76b923217b2b71044f2bb41aace6b1ea8f61751f883 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 04:49:37 np0005604790 elated_euler[156218]: 167 167
Feb  2 04:49:37 np0005604790 systemd[1]: libpod-269d12fd9f2b683cac98d76b923217b2b71044f2bb41aace6b1ea8f61751f883.scope: Deactivated successfully.
Feb  2 04:49:37 np0005604790 conmon[156218]: conmon 269d12fd9f2b683cac98 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-269d12fd9f2b683cac98d76b923217b2b71044f2bb41aace6b1ea8f61751f883.scope/container/memory.events
Feb  2 04:49:37 np0005604790 podman[156202]: 2026-02-02 09:49:37.190317953 +0000 UTC m=+0.170767995 container died 269d12fd9f2b683cac98d76b923217b2b71044f2bb41aace6b1ea8f61751f883 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_euler, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:49:37 np0005604790 systemd[1]: var-lib-containers-storage-overlay-4edad59c76d089ffbad53d6bfa9118cbc348c03cb9cd3ec52702f96b1805bb7d-merged.mount: Deactivated successfully.
Feb  2 04:49:37 np0005604790 podman[156202]: 2026-02-02 09:49:37.237412417 +0000 UTC m=+0.217862459 container remove 269d12fd9f2b683cac98d76b923217b2b71044f2bb41aace6b1ea8f61751f883 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_euler, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 04:49:37 np0005604790 systemd[1]: libpod-conmon-269d12fd9f2b683cac98d76b923217b2b71044f2bb41aace6b1ea8f61751f883.scope: Deactivated successfully.
Feb  2 04:49:37 np0005604790 podman[156244]: 2026-02-02 09:49:37.418603477 +0000 UTC m=+0.052419052 container create f1a4242aeb3bec44e601957e8377eda3c51ffa7f91f73b42220b06ca7f45ed91 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_montalcini, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 04:49:37 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v293: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1.2 KiB/s wr, 3 op/s
Feb  2 04:49:37 np0005604790 systemd[1]: Started libpod-conmon-f1a4242aeb3bec44e601957e8377eda3c51ffa7f91f73b42220b06ca7f45ed91.scope.
Feb  2 04:49:37 np0005604790 podman[156244]: 2026-02-02 09:49:37.393734593 +0000 UTC m=+0.027550218 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:49:37 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:49:37 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c067a5f5f2e5e33cfc501006f1f45c44143a1da7fa33c90909f41c7fb592ee4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 04:49:37 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c067a5f5f2e5e33cfc501006f1f45c44143a1da7fa33c90909f41c7fb592ee4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:49:37 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c067a5f5f2e5e33cfc501006f1f45c44143a1da7fa33c90909f41c7fb592ee4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:49:37 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c067a5f5f2e5e33cfc501006f1f45c44143a1da7fa33c90909f41c7fb592ee4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 04:49:37 np0005604790 podman[156244]: 2026-02-02 09:49:37.525797162 +0000 UTC m=+0.159612727 container init f1a4242aeb3bec44e601957e8377eda3c51ffa7f91f73b42220b06ca7f45ed91 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:49:37 np0005604790 podman[156244]: 2026-02-02 09:49:37.538243114 +0000 UTC m=+0.172058679 container start f1a4242aeb3bec44e601957e8377eda3c51ffa7f91f73b42220b06ca7f45ed91 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_montalcini, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 04:49:37 np0005604790 podman[156244]: 2026-02-02 09:49:37.542444649 +0000 UTC m=+0.176260214 container attach f1a4242aeb3bec44e601957e8377eda3c51ffa7f91f73b42220b06ca7f45ed91 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_montalcini, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 04:49:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:38 : epoch 6980730a : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e3c003970 fd 47 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:49:38 np0005604790 lvm[156336]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 04:49:38 np0005604790 lvm[156336]: VG ceph_vg0 finished
Feb  2 04:49:38 np0005604790 condescending_montalcini[156260]: {}
Feb  2 04:49:38 np0005604790 systemd[1]: libpod-f1a4242aeb3bec44e601957e8377eda3c51ffa7f91f73b42220b06ca7f45ed91.scope: Deactivated successfully.
Feb  2 04:49:38 np0005604790 systemd[1]: libpod-f1a4242aeb3bec44e601957e8377eda3c51ffa7f91f73b42220b06ca7f45ed91.scope: Consumed 1.258s CPU time.
Feb  2 04:49:38 np0005604790 podman[156244]: 2026-02-02 09:49:38.324896864 +0000 UTC m=+0.958712389 container died f1a4242aeb3bec44e601957e8377eda3c51ffa7f91f73b42220b06ca7f45ed91 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_montalcini, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True)
Feb  2 04:49:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:38 : epoch 6980730a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e500092f0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:49:38 np0005604790 systemd[1]: var-lib-containers-storage-overlay-2c067a5f5f2e5e33cfc501006f1f45c44143a1da7fa33c90909f41c7fb592ee4-merged.mount: Deactivated successfully.
Feb  2 04:49:38 np0005604790 podman[156244]: 2026-02-02 09:49:38.446932768 +0000 UTC m=+1.080748353 container remove f1a4242aeb3bec44e601957e8377eda3c51ffa7f91f73b42220b06ca7f45ed91 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_montalcini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Feb  2 04:49:38 np0005604790 systemd[1]: libpod-conmon-f1a4242aeb3bec44e601957e8377eda3c51ffa7f91f73b42220b06ca7f45ed91.scope: Deactivated successfully.
Feb  2 04:49:38 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 04:49:38 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:49:38 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 04:49:38 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:49:38 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:49:38 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:49:38 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:49:38.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:49:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:38 : epoch 6980730a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e4c002c50 fd 47 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:49:38 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:49:38 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:49:38 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:49:38.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:49:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:49:38.894Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:49:38 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:49:38 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:49:39 np0005604790 systemd[1]: Stopping User Manager for UID 0...
Feb  2 04:49:39 np0005604790 systemd[154668]: Activating special unit Exit the Session...
Feb  2 04:49:39 np0005604790 systemd[154668]: Stopped target Main User Target.
Feb  2 04:49:39 np0005604790 systemd[154668]: Stopped target Basic System.
Feb  2 04:49:39 np0005604790 systemd[154668]: Stopped target Paths.
Feb  2 04:49:39 np0005604790 systemd[154668]: Stopped target Sockets.
Feb  2 04:49:39 np0005604790 systemd[154668]: Stopped target Timers.
Feb  2 04:49:39 np0005604790 systemd[154668]: Stopped Daily Cleanup of User's Temporary Directories.
Feb  2 04:49:39 np0005604790 systemd[154668]: Closed D-Bus User Message Bus Socket.
Feb  2 04:49:39 np0005604790 systemd[154668]: Stopped Create User's Volatile Files and Directories.
Feb  2 04:49:39 np0005604790 systemd[154668]: Removed slice User Application Slice.
Feb  2 04:49:39 np0005604790 systemd[154668]: Reached target Shutdown.
Feb  2 04:49:39 np0005604790 systemd[154668]: Finished Exit the Session.
Feb  2 04:49:39 np0005604790 systemd[154668]: Reached target Exit the Session.
Feb  2 04:49:39 np0005604790 systemd[1]: user@0.service: Deactivated successfully.
Feb  2 04:49:39 np0005604790 systemd[1]: Stopped User Manager for UID 0.
Feb  2 04:49:39 np0005604790 systemd[1]: Stopping User Runtime Directory /run/user/0...
Feb  2 04:49:39 np0005604790 systemd[1]: run-user-0.mount: Deactivated successfully.
Feb  2 04:49:39 np0005604790 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Feb  2 04:49:39 np0005604790 systemd[1]: Stopped User Runtime Directory /run/user/0.
Feb  2 04:49:39 np0005604790 systemd[1]: Removed slice User Slice of UID 0.
Feb  2 04:49:39 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v294: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1.3 KiB/s wr, 4 op/s
Feb  2 04:49:40 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:40 : epoch 6980730a : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e400023f0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:49:40 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:40 : epoch 6980730a : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e3c003970 fd 47 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:49:40 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:49:40 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:49:40 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:49:40.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:49:40 np0005604790 systemd-logind[793]: New session 52 of user zuul.
Feb  2 04:49:40 np0005604790 systemd[1]: Started Session 52 of User zuul.
Feb  2 04:49:40 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:40 : epoch 6980730a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e500092f0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:49:40 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:49:40 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:49:40 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:49:40.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:49:41 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v295: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1.3 KiB/s wr, 4 op/s
Feb  2 04:49:41 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:49:41 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-nfs-cephfs-compute-0-ooxkuo[97796]: [WARNING] 032/094941 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Feb  2 04:49:42 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:42 : epoch 6980730a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e4c002c50 fd 47 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:49:42 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:42 : epoch 6980730a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e400023f0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:49:42 np0005604790 python3.9[156544]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 04:49:42 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:49:42 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:49:42 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:49:42.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:49:42 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:42 : epoch 6980730a : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e48000df0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:49:42 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:49:42 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000028s ======
Feb  2 04:49:42 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:49:42.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb  2 04:49:43 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v296: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 102 B/s wr, 0 op/s
Feb  2 04:49:43 np0005604790 python3.9[156705]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/openstack/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb  2 04:49:44 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:44 : epoch 6980730a : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e500092f0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:49:44 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:44 : epoch 6980730a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e540013a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:49:44 np0005604790 python3.9[156858]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 04:49:44 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:49:44 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000028s ======
Feb  2 04:49:44 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:49:44.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb  2 04:49:44 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:44 : epoch 6980730a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e400023f0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:49:44 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:49:44 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000028s ======
Feb  2 04:49:44 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:49:44.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb  2 04:49:44 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:09:49:44] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Feb  2 04:49:44 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:09:49:44] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Feb  2 04:49:45 np0005604790 python3.9[157010]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 04:49:45 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v297: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Feb  2 04:49:45 np0005604790 python3.9[157163]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 04:49:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:46 : epoch 6980730a : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e48001930 fd 47 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:49:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:46 : epoch 6980730a : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e500092f0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:49:46 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:49:46 np0005604790 python3.9[157316]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
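
The zuul-driven Ansible run above is laying out the OVN metadata agent's state directories (/var/lib/openstack/neutron-ovn-metadata-agent, /var/lib/neutron plus its kill_scripts, ovn-metadata-proxy, and external/pids subtrees), each owned by zuul with SELinux type container_file_t. A rough local equivalent of those ansible.builtin.file calls; the SELinux relabeling is deliberately left out here and would normally be handled by chcon or restorecon:

    import shutil
    from pathlib import Path

    # Sketch: the directory layout the ansible.builtin.file tasks above
    # converge on. setype=container_file_t is not reproduced here.
    DIRS = [
        "/var/lib/openstack/neutron-ovn-metadata-agent",
        "/var/lib/neutron",
        "/var/lib/neutron/kill_scripts",
        "/var/lib/neutron/ovn-metadata-proxy",
        "/var/lib/neutron/external/pids",
    ]

    for d in DIRS:
        path = Path(d)
        path.mkdir(parents=True, exist_ok=True)
        path.chmod(0o755)                                # mode=0755
        shutil.chown(path, user="zuul", group="zuul")    # owner=zuul group=zuul
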
Feb  2 04:49:46 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:49:46 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:49:46 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:49:46.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:49:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:46 : epoch 6980730a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e54002090 fd 47 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:49:46 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:49:46 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:49:46 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:49:46.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:49:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:49:46.990Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:49:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 04:49:47 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 04:49:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:49:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:49:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:49:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:49:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:49:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:49:47 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v298: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Feb  2 04:49:47 np0005604790 python3.9[157467]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 04:49:48 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:48 : epoch 6980730a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e400023f0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:49:48 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:48 : epoch 6980730a : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e48001930 fd 47 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:49:48 np0005604790 python3.9[157621]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Feb  2 04:49:48 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:49:48 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:49:48 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:49:48.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:49:48 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:48 : epoch 6980730a : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e500092f0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:49:48 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:49:48 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000028s ======
Feb  2 04:49:48 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:49:48.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb  2 04:49:48 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:49:48.895Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb  2 04:49:48 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:49:48.895Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb  2 04:49:48 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:49:48.896Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:49:49 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v299: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Feb  2 04:49:49 np0005604790 python3.9[157771]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:49:50 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:50 : epoch 6980730a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e54002090 fd 47 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:49:50 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:50 : epoch 6980730a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e400023f0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:49:50 np0005604790 python3.9[157894]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770025789.0857012-213-49378361637416/.source follow=False _original_basename=haproxy.j2 checksum=35fdf371a5549b7e7e32a6541c07c1ac75cf4dcf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb  2 04:49:50 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:49:50 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:49:50 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:49:50.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:49:50 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:50 : epoch 6980730a : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e48001930 fd 47 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:49:50 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:49:50 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:49:50 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:49:50.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:49:51 np0005604790 python3.9[158044]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:49:51 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v300: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Feb  2 04:49:51 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:49:51 np0005604790 python3.9[158165]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770025790.6578588-258-224008409220663/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb  2 04:49:52 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:52 : epoch 6980730a : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e500092f0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:49:52 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:52 : epoch 6980730a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e54002da0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:49:52 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:49:52 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:49:52 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:49:52.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:49:52 np0005604790 python3.9[158319]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb  2 04:49:52 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:52 : epoch 6980730a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e400023f0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:49:52 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:49:52 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:49:52 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:49:52.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:49:53 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v301: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Feb  2 04:49:53 np0005604790 python3.9[158403]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  2 04:49:54 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:54 : epoch 6980730a : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e400023f0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:49:54 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:54 : epoch 6980730a : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e500092f0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:49:54 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:49:54 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000028s ======
Feb  2 04:49:54 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:49:54.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb  2 04:49:54 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:54 : epoch 6980730a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e54002da0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:49:54 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:49:54 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:49:54 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:49:54.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:49:54 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:09:49:54] "GET /metrics HTTP/1.1" 200 48331 "" "Prometheus/2.51.0"
Feb  2 04:49:54 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:09:49:54] "GET /metrics HTTP/1.1" 200 48331 "" "Prometheus/2.51.0"
Feb  2 04:49:55 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v302: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:49:55 np0005604790 python3.9[158584]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb  2 04:49:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:56 : epoch 6980730a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e400023f0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:49:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:56 : epoch 6980730a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e48002da0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:49:56 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:49:56 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:49:56 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:49:56 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:49:56.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:49:56 np0005604790 python3.9[158738]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:49:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:56 : epoch 6980730a : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e500092f0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:49:56 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:49:56 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000028s ======
Feb  2 04:49:56 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:49:56.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb  2 04:49:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:49:56.991Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:49:57 np0005604790 python3.9[158859]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770025796.3193114-369-224139770416068/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb  2 04:49:57 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v303: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:49:57 np0005604790 python3.9[159010]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:49:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:58 : epoch 6980730a : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e500092f0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:49:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:58 : epoch 6980730a : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e54002da0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:49:58 np0005604790 python3.9[159132]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770025797.5067165-369-257681854436036/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb  2 04:49:58 np0005604790 ovn_controller[154631]: 2026-02-02T09:49:58Z|00025|memory|INFO|16128 kB peak resident set size after 29.8 seconds
Feb  2 04:49:58 np0005604790 ovn_controller[154631]: 2026-02-02T09:49:58Z|00026|memory|INFO|idl-cells-OVN_Southbound:273 idl-cells-Open_vSwitch:642 ofctrl_desired_flow_usage-KB:7 ofctrl_installed_flow_usage-KB:5 ofctrl_sb_flow_ref_usage-KB:2
Feb  2 04:49:58 np0005604790 podman[159133]: 2026-02-02 09:49:58.682836676 +0000 UTC m=+0.131821604 container health_status e39122d9482da8df802204ab6c35fb7c982874580968140e6f06bdfc8eefae36 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Feb  2 04:49:58 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:49:58 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:49:58 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:49:58.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:49:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:49:58 : epoch 6980730a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e48002da0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:49:58 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:49:58 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:49:58 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:49:58.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:49:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:49:58.897Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb  2 04:49:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:49:58.897Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb  2 04:49:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:49:58.897Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb  2 04:49:59 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v304: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Feb  2 04:49:59 np0005604790 python3.9[159309]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:50:00 np0005604790 ceph-mon[74489]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 3 failed cephadm daemon(s)
Feb  2 04:50:00 np0005604790 ceph-mon[74489]: log_channel(cluster) log [WRN] : [WRN] CEPHADM_FAILED_DAEMON: 3 failed cephadm daemon(s)
Feb  2 04:50:00 np0005604790 ceph-mon[74489]: log_channel(cluster) log [WRN] :     daemon nfs.cephfs.2.0.compute-0.fdwwab on compute-0 is in unknown state
Feb  2 04:50:00 np0005604790 ceph-mon[74489]: log_channel(cluster) log [WRN] :     daemon nfs.cephfs.0.0.compute-1.mhzhsx on compute-1 is in unknown state
Feb  2 04:50:00 np0005604790 ceph-mon[74489]: log_channel(cluster) log [WRN] :     daemon nfs.cephfs.1.0.compute-2.dciyfa on compute-2 is in unknown state
Feb  2 04:50:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:50:00 : epoch 6980730a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e48002da0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:50:00 np0005604790 python3.9[159431]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770025799.274103-501-141961636749682/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb  2 04:50:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:50:00 : epoch 6980730a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e54003ea0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:50:00 np0005604790 ceph-mon[74489]: Health detail: HEALTH_WARN 3 failed cephadm daemon(s)
Feb  2 04:50:00 np0005604790 ceph-mon[74489]: [WRN] CEPHADM_FAILED_DAEMON: 3 failed cephadm daemon(s)
Feb  2 04:50:00 np0005604790 ceph-mon[74489]:    daemon nfs.cephfs.2.0.compute-0.fdwwab on compute-0 is in unknown state
Feb  2 04:50:00 np0005604790 ceph-mon[74489]:    daemon nfs.cephfs.0.0.compute-1.mhzhsx on compute-1 is in unknown state
Feb  2 04:50:00 np0005604790 ceph-mon[74489]:    daemon nfs.cephfs.1.0.compute-2.dciyfa on compute-2 is in unknown state
Feb  2 04:50:00 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:50:00 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:50:00 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:50:00.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:50:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:50:00 : epoch 6980730a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e48002da0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:50:00 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:50:00 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:50:00 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:50:00.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:50:00 np0005604790 python3.9[159581]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:50:01 np0005604790 python3.9[159702]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770025800.3955412-501-183290679389715/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb  2 04:50:01 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v305: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:50:01 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:50:02 np0005604790 python3.9[159854]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 04:50:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 04:50:02 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 04:50:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:50:02 : epoch 6980730a : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e400023f0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:50:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:50:02 : epoch 6980730a : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e500092f0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:50:02 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:50:02 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:50:02 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:50:02.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:50:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:50:02 : epoch 6980730a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e54003ea0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:50:02 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:50:02 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:50:02 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:50:02.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:50:02 np0005604790 python3.9[160008]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb  2 04:50:03 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v306: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Feb  2 04:50:03 np0005604790 python3.9[160160]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:50:04 np0005604790 python3.9[160240]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 04:50:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:50:04 : epoch 6980730a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e48003ea0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:50:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:50:04 : epoch 6980730a : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e400023f0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:50:04 np0005604790 python3.9[160392]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:50:04 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:50:04 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:50:04 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:50:04.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:50:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:50:04 : epoch 6980730a : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e500092f0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:50:04 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:50:04 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:50:04 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:50:04.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:50:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:09:50:04] "GET /metrics HTTP/1.1" 200 48328 "" "Prometheus/2.51.0"
Feb  2 04:50:04 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:09:50:04] "GET /metrics HTTP/1.1" 200 48328 "" "Prometheus/2.51.0"
Feb  2 04:50:05 np0005604790 python3.9[160470]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 04:50:05 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v307: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:50:05 np0005604790 python3.9[160623]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:50:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:50:06 : epoch 6980730a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e54003ea0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:50:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:50:06 : epoch 6980730a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e48003ea0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:50:06 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:50:06 np0005604790 python3.9[160776]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:50:06 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:50:06 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:50:06 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:50:06.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:50:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:50:06 : epoch 6980730a : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e400023f0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:50:06 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:50:06 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000028s ======
Feb  2 04:50:06 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:50:06.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb  2 04:50:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:50:06.992Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb  2 04:50:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:50:06.993Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:50:06 np0005604790 python3.9[160854]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:50:07 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v308: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:50:07 np0005604790 python3.9[161007]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:50:08 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:50:08 : epoch 6980730a : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e500092f0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:50:08 np0005604790 python3.9[161086]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:50:08 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:50:08 : epoch 6980730a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e54003ea0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:50:08 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:50:08 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000028s ======
Feb  2 04:50:08 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:50:08.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb  2 04:50:08 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:50:08 : epoch 6980730a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e48003ea0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:50:08 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:50:08 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:50:08 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:50:08.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:50:08 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:50:08.898Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:50:09 np0005604790 python3.9[161238]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 04:50:09 np0005604790 systemd[1]: Reloading.
Feb  2 04:50:09 np0005604790 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 04:50:09 np0005604790 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 04:50:09 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v309: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Feb  2 04:50:09 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-nfs-cephfs-compute-0-ooxkuo[97796]: [WARNING] 032/095009 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Feb  2 04:50:10 np0005604790 python3.9[161428]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:50:10 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:50:10 : epoch 6980730a : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e40004060 fd 47 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:50:10 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:50:10 : epoch 6980730a : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e500092f0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:50:10 np0005604790 python3.9[161506]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:50:10 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:50:10 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:50:10 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:50:10.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:50:10 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:50:10 : epoch 6980730a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e54003ea0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:50:10 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:50:10 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000028s ======
Feb  2 04:50:10 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:50:10.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb  2 04:50:11 np0005604790 python3.9[161658]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:50:11 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v310: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:50:11 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:50:11 np0005604790 python3.9[161737]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:50:12 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:50:12 : epoch 6980730a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e48003ea0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:50:12 np0005604790 kernel: ganesha.nfsd[156548]: segfault at 50 ip 00007f9edb52f32e sp 00007f9e627fb210 error 4 in libntirpc.so.5.8[7f9edb514000+2c000] likely on CPU 1 (core 0, socket 1)
Feb  2 04:50:12 np0005604790 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Feb  2 04:50:12 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[150006]: 02/02/2026 09:50:12 : epoch 6980730a : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e48003ea0 fd 47 proxy ignored for local
Feb  2 04:50:12 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:50:12 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:50:12 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:50:12.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:50:12 np0005604790 systemd[1]: Started Process Core Dump (PID 161892/UID 0).
Feb  2 04:50:12 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:50:12 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000028s ======
Feb  2 04:50:12 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:50:12.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb  2 04:50:13 np0005604790 python3.9[161891]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 04:50:13 np0005604790 systemd[1]: Reloading.
Feb  2 04:50:13 np0005604790 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 04:50:13 np0005604790 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 04:50:13 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v311: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:50:13 np0005604790 systemd[1]: Starting Create netns directory...
Feb  2 04:50:13 np0005604790 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Feb  2 04:50:13 np0005604790 systemd[1]: netns-placeholder.service: Deactivated successfully.
Feb  2 04:50:13 np0005604790 systemd[1]: Finished Create netns directory.
Feb  2 04:50:13 np0005604790 systemd-coredump[161893]: Process 150031 (ganesha.nfsd) of user 0 dumped core.
                                                       Stack trace of thread 55:
                                                       #0  0x00007f9edb52f32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                       ELF object binary architecture: AMD x86-64
Feb  2 04:50:13 np0005604790 systemd[1]: systemd-coredump@2-161892-0.service: Deactivated successfully.
Feb  2 04:50:13 np0005604790 podman[162010]: 2026-02-02 09:50:13.984420733 +0000 UTC m=+0.045620223 container died 4286937416d12bb30a1d89c2e68575a0da0b3ea567a3ca0405e46c983452c889 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Feb  2 04:50:14 np0005604790 systemd[1]: var-lib-containers-storage-overlay-70170b35864f49b812d4846571823ddbfb987e049842dd417e30f3fb0508ce6c-merged.mount: Deactivated successfully.
Feb  2 04:50:14 np0005604790 podman[162010]: 2026-02-02 09:50:14.03339027 +0000 UTC m=+0.094589680 container remove 4286937416d12bb30a1d89c2e68575a0da0b3ea567a3ca0405e46c983452c889 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb  2 04:50:14 np0005604790 systemd[1]: ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1@nfs.cephfs.2.0.compute-0.fdwwab.service: Main process exited, code=exited, status=139/n/a
Feb  2 04:50:14 np0005604790 systemd[1]: ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1@nfs.cephfs.2.0.compute-0.fdwwab.service: Failed with result 'exit-code'.
Feb  2 04:50:14 np0005604790 systemd[1]: ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1@nfs.cephfs.2.0.compute-0.fdwwab.service: Consumed 1.435s CPU time.
Feb  2 04:50:14 np0005604790 python3.9[162134]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 04:50:14 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:50:14 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:50:14 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:50:14.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:50:14 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:50:14 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:50:14 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:50:14.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:50:14 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:09:50:14] "GET /metrics HTTP/1.1" 200 48328 "" "Prometheus/2.51.0"
Feb  2 04:50:14 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:09:50:14] "GET /metrics HTTP/1.1" 200 48328 "" "Prometheus/2.51.0"
Feb  2 04:50:15 np0005604790 python3.9[162312]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:50:15 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v312: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb  2 04:50:15 np0005604790 python3.9[162435]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770025814.6039827-954-78217396242724/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb  2 04:50:16 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:50:16 np0005604790 python3.9[162589]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:50:16 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:50:16 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:50:16 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:50:16.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:50:16 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:50:16 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:50:16 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:50:16.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:50:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:50:16.994Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb  2 04:50:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:50:16.996Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:50:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] Optimize plan auto_2026-02-02_09:50:17
Feb  2 04:50:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 04:50:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] do_upmap
Feb  2 04:50:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] pools ['images', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.log', 'backups', 'default.rgw.control', '.mgr', 'volumes', '.nfs', 'vms', 'default.rgw.meta', 'cephfs.cephfs.data']
Feb  2 04:50:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 04:50:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 04:50:17 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 04:50:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:50:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:50:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:50:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:50:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:50:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:50:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 04:50:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 04:50:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 04:50:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:50:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 04:50:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:50:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 04:50:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:50:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 04:50:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:50:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 04:50:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:50:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 04:50:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:50:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Feb  2 04:50:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:50:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 04:50:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:50:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 04:50:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:50:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Feb  2 04:50:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:50:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 04:50:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:50:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 04:50:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:50:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb  2 04:50:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 04:50:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 04:50:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 04:50:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 04:50:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 04:50:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 04:50:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 04:50:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 04:50:17 np0005604790 python3.9[162741]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb  2 04:50:17 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v313: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb  2 04:50:18 np0005604790 python3.9[162895]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:50:18 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-nfs-cephfs-compute-0-ooxkuo[97796]: [WARNING] 032/095018 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Feb  2 04:50:18 np0005604790 python3.9[163018]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1770025817.5942318-1053-8059206124092/.source.json _original_basename=.0qvq8wsw follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:50:18 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:50:18 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:50:18 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:50:18.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:50:18 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:50:18 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000028s ======
Feb  2 04:50:18 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:50:18.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb  2 04:50:18 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:50:18.900Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:50:19 np0005604790 python3.9[163168]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:50:19 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v314: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 426 B/s wr, 1 op/s
Feb  2 04:50:20 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:50:20 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000028s ======
Feb  2 04:50:20 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:50:20.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb  2 04:50:20 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:50:20 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:50:20 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:50:20.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:50:21 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v315: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 426 B/s wr, 1 op/s
Feb  2 04:50:21 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:50:21 np0005604790 python3.9[163593]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Feb  2 04:50:22 np0005604790 python3.9[163747]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Feb  2 04:50:22 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:50:22 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000028s ======
Feb  2 04:50:22 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:50:22.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb  2 04:50:22 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:50:22 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:50:22 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:50:22.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:50:23 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v316: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Feb  2 04:50:24 np0005604790 python3[163900]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json containers=['ovn_metadata_agent'] log_base_path=/var/log/containers/stdouts debug=False
Feb  2 04:50:24 np0005604790 systemd[1]: ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1@nfs.cephfs.2.0.compute-0.fdwwab.service: Scheduled restart job, restart counter is at 3.
Feb  2 04:50:24 np0005604790 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.fdwwab for d241d473-9fcb-5f74-b163-f1ca4454e7f1.
Feb  2 04:50:24 np0005604790 systemd[1]: ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1@nfs.cephfs.2.0.compute-0.fdwwab.service: Consumed 1.435s CPU time.
Feb  2 04:50:24 np0005604790 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.fdwwab for d241d473-9fcb-5f74-b163-f1ca4454e7f1...
Feb  2 04:50:24 np0005604790 podman[163983]: 2026-02-02 09:50:24.564960034 +0000 UTC m=+0.064325825 container create d3bea3246df48c40064c2aaef1ae361eb7bf4aa1a2202476d368eebbac4ad16e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:50:24 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7a02b55ef17c94d4796a0309ea2afd71bc1d97e9d108b06d2bb46b8cea265e9/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Feb  2 04:50:24 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7a02b55ef17c94d4796a0309ea2afd71bc1d97e9d108b06d2bb46b8cea265e9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:50:24 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7a02b55ef17c94d4796a0309ea2afd71bc1d97e9d108b06d2bb46b8cea265e9/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 04:50:24 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7a02b55ef17c94d4796a0309ea2afd71bc1d97e9d108b06d2bb46b8cea265e9/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.fdwwab-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 04:50:24 np0005604790 podman[163983]: 2026-02-02 09:50:24.53541194 +0000 UTC m=+0.034777791 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:50:24 np0005604790 podman[163983]: 2026-02-02 09:50:24.636422558 +0000 UTC m=+0.135788359 container init d3bea3246df48c40064c2aaef1ae361eb7bf4aa1a2202476d368eebbac4ad16e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:50:24 np0005604790 podman[163983]: 2026-02-02 09:50:24.65081657 +0000 UTC m=+0.150182341 container start d3bea3246df48c40064c2aaef1ae361eb7bf4aa1a2202476d368eebbac4ad16e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:50:24 np0005604790 bash[163983]: d3bea3246df48c40064c2aaef1ae361eb7bf4aa1a2202476d368eebbac4ad16e
Feb  2 04:50:24 np0005604790 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.fdwwab for d241d473-9fcb-5f74-b163-f1ca4454e7f1.
Feb  2 04:50:24 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[163999]: 02/02/2026 09:50:24 : epoch 69807360 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Feb  2 04:50:24 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[163999]: 02/02/2026 09:50:24 : epoch 69807360 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Feb  2 04:50:24 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[163999]: 02/02/2026 09:50:24 : epoch 69807360 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Feb  2 04:50:24 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[163999]: 02/02/2026 09:50:24 : epoch 69807360 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Feb  2 04:50:24 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[163999]: 02/02/2026 09:50:24 : epoch 69807360 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Feb  2 04:50:24 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[163999]: 02/02/2026 09:50:24 : epoch 69807360 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Feb  2 04:50:24 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[163999]: 02/02/2026 09:50:24 : epoch 69807360 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Feb  2 04:50:24 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[163999]: 02/02/2026 09:50:24 : epoch 69807360 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 04:50:24 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:50:24 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000028s ======
Feb  2 04:50:24 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:50:24.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb  2 04:50:24 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:50:24 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:09:50:24] "GET /metrics HTTP/1.1" 200 48331 "" "Prometheus/2.51.0"
Feb  2 04:50:24 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000028s ======
Feb  2 04:50:24 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:50:24.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb  2 04:50:24 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:09:50:24] "GET /metrics HTTP/1.1" 200 48331 "" "Prometheus/2.51.0"
Feb  2 04:50:25 np0005604790 radosgw[89254]: INFO: RGWReshardLock::lock found lock on reshard.0000000015 to be held by another RGW process; skipping for now
Feb  2 04:50:25 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v317: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Feb  2 04:50:26 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:50:26 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:50:26 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000028s ======
Feb  2 04:50:26 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:50:26.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb  2 04:50:26 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:50:26 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:50:26 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:50:26.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:50:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:50:26.997Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb  2 04:50:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:50:26.997Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb  2 04:50:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:50:26.997Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb  2 04:50:27 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v318: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Feb  2 04:50:28 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:50:28 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:50:28 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:50:28.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:50:28 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:50:28 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:50:28 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:50:28.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:50:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:50:28.901Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:50:29 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v319: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 98 KiB/s rd, 1.2 KiB/s wr, 162 op/s
Feb  2 04:50:30 np0005604790 podman[164097]: 2026-02-02 09:50:30.593799613 +0000 UTC m=+1.307416331 container health_status e39122d9482da8df802204ab6c35fb7c982874580968140e6f06bdfc8eefae36 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Feb  2 04:50:30 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:50:30 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:50:30 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:50:30.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:50:30 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:50:30 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000028s ======
Feb  2 04:50:30 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:50:30.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb  2 04:50:31 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v320: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 97 KiB/s rd, 767 B/s wr, 161 op/s
Feb  2 04:50:31 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:50:32 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[163999]: 02/02/2026 09:50:32 : epoch 69807360 : compute-0 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to lst kv ret=-2
Feb  2 04:50:32 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[163999]: 02/02/2026 09:50:32 : epoch 69807360 : compute-0 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
Feb  2 04:50:32 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[163999]: 02/02/2026 09:50:32 : epoch 69807360 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 04:50:32 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[163999]: 02/02/2026 09:50:32 : epoch 69807360 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 04:50:32 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[163999]: 02/02/2026 09:50:32 : epoch 69807360 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 04:50:32 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 04:50:32 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 04:50:32 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:50:32 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000028s ======
Feb  2 04:50:32 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:50:32.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb  2 04:50:32 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:50:32 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:50:32 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:50:32.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:50:33 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v321: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 98 KiB/s rd, 853 B/s wr, 162 op/s
Feb  2 04:50:33 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[163999]: 02/02/2026 09:50:33 : epoch 69807360 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 04:50:33 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[163999]: 02/02/2026 09:50:33 : epoch 69807360 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 04:50:33 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[163999]: 02/02/2026 09:50:33 : epoch 69807360 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 04:50:33 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-nfs-cephfs-compute-0-ooxkuo[97796]: [WARNING] 032/095033 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Feb  2 04:50:34 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:50:34 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:50:34 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:50:34.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:50:34 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:09:50:34] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Feb  2 04:50:34 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:09:50:34] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Feb  2 04:50:34 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:50:34 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:50:34 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:50:34.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:50:35 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v322: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 97 KiB/s rd, 852 B/s wr, 161 op/s
Feb  2 04:50:36 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:50:36 np0005604790 podman[163916]: 2026-02-02 09:50:36.63374407 +0000 UTC m=+12.549721686 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc
Feb  2 04:50:36 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:50:36 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:50:36 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:50:36.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:50:36 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:50:36 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:50:36 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:50:36.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:50:36 np0005604790 podman[164220]: 2026-02-02 09:50:36.873411837 +0000 UTC m=+0.027173029 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc
Feb  2 04:50:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:50:36.998Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:50:37 np0005604790 podman[164220]: 2026-02-02 09:50:37.043835483 +0000 UTC m=+0.197596625 container create 29bea7eb4976451e3dfdb1e9a3aba30b10e64cbce131bf8d09548b4a224f8ee8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc, name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_managed=true, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127)
Feb  2 04:50:37 np0005604790 python3[163900]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc
Feb  2 04:50:37 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v323: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 97 KiB/s rd, 852 B/s wr, 161 op/s
Feb  2 04:50:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[163999]: 02/02/2026 09:50:38 : epoch 69807360 : compute-0 : ganesha.nfsd-2[main] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-0000000000000012:nfs.cephfs.2: -2
Feb  2 04:50:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[163999]: 02/02/2026 09:50:38 : epoch 69807360 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Feb  2 04:50:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[163999]: 02/02/2026 09:50:38 : epoch 69807360 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Feb  2 04:50:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[163999]: 02/02/2026 09:50:38 : epoch 69807360 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Feb  2 04:50:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[163999]: 02/02/2026 09:50:38 : epoch 69807360 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Feb  2 04:50:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[163999]: 02/02/2026 09:50:38 : epoch 69807360 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Feb  2 04:50:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[163999]: 02/02/2026 09:50:38 : epoch 69807360 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Feb  2 04:50:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[163999]: 02/02/2026 09:50:38 : epoch 69807360 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Feb  2 04:50:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[163999]: 02/02/2026 09:50:38 : epoch 69807360 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Feb  2 04:50:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[163999]: 02/02/2026 09:50:38 : epoch 69807360 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Feb  2 04:50:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[163999]: 02/02/2026 09:50:38 : epoch 69807360 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Feb  2 04:50:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[163999]: 02/02/2026 09:50:38 : epoch 69807360 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Feb  2 04:50:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[163999]: 02/02/2026 09:50:38 : epoch 69807360 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Feb  2 04:50:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[163999]: 02/02/2026 09:50:38 : epoch 69807360 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Feb  2 04:50:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[163999]: 02/02/2026 09:50:38 : epoch 69807360 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Feb  2 04:50:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[163999]: 02/02/2026 09:50:38 : epoch 69807360 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9978000df0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:50:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[163999]: 02/02/2026 09:50:38 : epoch 69807360 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Feb  2 04:50:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[163999]: 02/02/2026 09:50:38 : epoch 69807360 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Feb  2 04:50:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[163999]: 02/02/2026 09:50:38 : epoch 69807360 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Feb  2 04:50:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[163999]: 02/02/2026 09:50:38 : epoch 69807360 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Feb  2 04:50:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[163999]: 02/02/2026 09:50:38 : epoch 69807360 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Feb  2 04:50:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[163999]: 02/02/2026 09:50:38 : epoch 69807360 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Feb  2 04:50:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[163999]: 02/02/2026 09:50:38 : epoch 69807360 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Feb  2 04:50:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[163999]: 02/02/2026 09:50:38 : epoch 69807360 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Feb  2 04:50:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[163999]: 02/02/2026 09:50:38 : epoch 69807360 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Feb  2 04:50:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[163999]: 02/02/2026 09:50:38 : epoch 69807360 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Feb  2 04:50:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[163999]: 02/02/2026 09:50:38 : epoch 69807360 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Feb  2 04:50:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[163999]: 02/02/2026 09:50:38 : epoch 69807360 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Feb  2 04:50:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[163999]: 02/02/2026 09:50:38 : epoch 69807360 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Feb  2 04:50:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[163999]: 02/02/2026 09:50:38 : epoch 69807360 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f99680016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:50:38 np0005604790 python3.9[164427]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 04:50:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[163999]: 02/02/2026 09:50:38 : epoch 69807360 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9958000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:50:38 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:50:38 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:50:38 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:50:38.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:50:38 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:50:38 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:50:38 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:50:38.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:50:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:50:38.902Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:50:39 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v324: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 98 KiB/s rd, 1.3 KiB/s wr, 163 op/s
Feb  2 04:50:39 np0005604790 python3.9[164645]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:50:39 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 04:50:39 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 04:50:39 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 04:50:39 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb  2 04:50:39 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v325: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 703 B/s wr, 3 op/s
Feb  2 04:50:39 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v326: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 874 B/s wr, 3 op/s
Feb  2 04:50:39 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 04:50:39 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:50:39 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Feb  2 04:50:39 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:50:39 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 04:50:39 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb  2 04:50:39 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 04:50:39 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb  2 04:50:39 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 04:50:39 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 04:50:39 np0005604790 python3.9[164788]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 04:50:40 np0005604790 podman[164884]: 2026-02-02 09:50:40.165055621 +0000 UTC m=+0.033917287 container create b1dd017ad3473d48accf65f1184f3dac433d5422a9c67d3fe612ab3354ad68ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_yonath, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Feb  2 04:50:40 np0005604790 systemd[1]: Started libpod-conmon-b1dd017ad3473d48accf65f1184f3dac433d5422a9c67d3fe612ab3354ad68ca.scope.
Feb  2 04:50:40 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:50:40 np0005604790 podman[164884]: 2026-02-02 09:50:40.148894291 +0000 UTC m=+0.017755947 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:50:40 np0005604790 podman[164884]: 2026-02-02 09:50:40.257442359 +0000 UTC m=+0.126304035 container init b1dd017ad3473d48accf65f1184f3dac433d5422a9c67d3fe612ab3354ad68ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_yonath, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Feb  2 04:50:40 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[163999]: 02/02/2026 09:50:40 : epoch 69807360 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f997c001cd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:50:40 np0005604790 podman[164884]: 2026-02-02 09:50:40.263275642 +0000 UTC m=+0.132137308 container start b1dd017ad3473d48accf65f1184f3dac433d5422a9c67d3fe612ab3354ad68ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_yonath, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:50:40 np0005604790 podman[164884]: 2026-02-02 09:50:40.268423036 +0000 UTC m=+0.137284692 container attach b1dd017ad3473d48accf65f1184f3dac433d5422a9c67d3fe612ab3354ad68ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_yonath, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb  2 04:50:40 np0005604790 quizzical_yonath[164909]: 167 167
Feb  2 04:50:40 np0005604790 systemd[1]: libpod-b1dd017ad3473d48accf65f1184f3dac433d5422a9c67d3fe612ab3354ad68ca.scope: Deactivated successfully.
Feb  2 04:50:40 np0005604790 podman[164884]: 2026-02-02 09:50:40.272615993 +0000 UTC m=+0.141477649 container died b1dd017ad3473d48accf65f1184f3dac433d5422a9c67d3fe612ab3354ad68ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_yonath, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:50:40 np0005604790 systemd[1]: var-lib-containers-storage-overlay-bca660c3e363110c6035a9a52152720acb0b739f5ce9f4bbc4cf66eca96cd2b9-merged.mount: Deactivated successfully.
Feb  2 04:50:40 np0005604790 podman[164884]: 2026-02-02 09:50:40.315422277 +0000 UTC m=+0.184283923 container remove b1dd017ad3473d48accf65f1184f3dac433d5422a9c67d3fe612ab3354ad68ca (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_yonath, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb  2 04:50:40 np0005604790 systemd[1]: libpod-conmon-b1dd017ad3473d48accf65f1184f3dac433d5422a9c67d3fe612ab3354ad68ca.scope: Deactivated successfully.
Feb  2 04:50:40 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb  2 04:50:40 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:50:40 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:50:40 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb  2 04:50:40 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-nfs-cephfs-compute-0-ooxkuo[97796]: [WARNING] 032/095040 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Feb  2 04:50:40 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[163999]: 02/02/2026 09:50:40 : epoch 69807360 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9970001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:50:40 np0005604790 podman[165021]: 2026-02-02 09:50:40.505188002 +0000 UTC m=+0.075451486 container create 8a88c22890b9e7cef92899a698762f5dcbde77477689b75c24fcbfb1a58ba1bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_tharp, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb  2 04:50:40 np0005604790 systemd[1]: Started libpod-conmon-8a88c22890b9e7cef92899a698762f5dcbde77477689b75c24fcbfb1a58ba1bf.scope.
Feb  2 04:50:40 np0005604790 podman[165021]: 2026-02-02 09:50:40.477796918 +0000 UTC m=+0.048060402 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:50:40 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:50:40 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/807391f1b8d3dd02767d0c15321c44962a22df63e7d5ef01a63293b214d2af9f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 04:50:40 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/807391f1b8d3dd02767d0c15321c44962a22df63e7d5ef01a63293b214d2af9f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:50:40 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/807391f1b8d3dd02767d0c15321c44962a22df63e7d5ef01a63293b214d2af9f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:50:40 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/807391f1b8d3dd02767d0c15321c44962a22df63e7d5ef01a63293b214d2af9f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 04:50:40 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/807391f1b8d3dd02767d0c15321c44962a22df63e7d5ef01a63293b214d2af9f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 04:50:40 np0005604790 podman[165021]: 2026-02-02 09:50:40.605313676 +0000 UTC m=+0.175577110 container init 8a88c22890b9e7cef92899a698762f5dcbde77477689b75c24fcbfb1a58ba1bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_tharp, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb  2 04:50:40 np0005604790 podman[165021]: 2026-02-02 09:50:40.621408315 +0000 UTC m=+0.191671759 container start 8a88c22890b9e7cef92899a698762f5dcbde77477689b75c24fcbfb1a58ba1bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_tharp, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Feb  2 04:50:40 np0005604790 podman[165021]: 2026-02-02 09:50:40.639305434 +0000 UTC m=+0.209568868 container attach 8a88c22890b9e7cef92899a698762f5dcbde77477689b75c24fcbfb1a58ba1bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_tharp, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:50:40 np0005604790 python3.9[165028]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1770025840.0241957-1287-27595826426812/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:50:40 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[163999]: 02/02/2026 09:50:40 : epoch 69807360 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f99680016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:50:40 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:50:40 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:50:40 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:50:40.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:50:40 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:50:40 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:50:40 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:50:40.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:50:40 np0005604790 vigorous_tharp[165039]: --> passed data devices: 0 physical, 1 LVM
Feb  2 04:50:40 np0005604790 vigorous_tharp[165039]: --> All data devices are unavailable
Feb  2 04:50:41 np0005604790 systemd[1]: libpod-8a88c22890b9e7cef92899a698762f5dcbde77477689b75c24fcbfb1a58ba1bf.scope: Deactivated successfully.
Feb  2 04:50:41 np0005604790 podman[165021]: 2026-02-02 09:50:41.005280876 +0000 UTC m=+0.575544350 container died 8a88c22890b9e7cef92899a698762f5dcbde77477689b75c24fcbfb1a58ba1bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid)
Feb  2 04:50:41 np0005604790 python3.9[165125]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb  2 04:50:41 np0005604790 systemd[1]: Reloading.
Feb  2 04:50:41 np0005604790 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 04:50:41 np0005604790 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 04:50:41 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v327: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 749 B/s wr, 2 op/s
Feb  2 04:50:41 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:50:42 np0005604790 systemd[1]: var-lib-containers-storage-overlay-807391f1b8d3dd02767d0c15321c44962a22df63e7d5ef01a63293b214d2af9f-merged.mount: Deactivated successfully.
Feb  2 04:50:42 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[163999]: 02/02/2026 09:50:42 : epoch 69807360 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f99580016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:50:42 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[163999]: 02/02/2026 09:50:42 : epoch 69807360 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f997c0027d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:50:42 np0005604790 python3.9[165254]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 04:50:42 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[163999]: 02/02/2026 09:50:42 : epoch 69807360 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9970001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:50:42 np0005604790 systemd[1]: Reloading.
Feb  2 04:50:42 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:50:42 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000028s ======
Feb  2 04:50:42 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:50:42.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb  2 04:50:42 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:50:42 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000028s ======
Feb  2 04:50:42 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:50:42.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb  2 04:50:42 np0005604790 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 04:50:42 np0005604790 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 04:50:43 np0005604790 podman[165021]: 2026-02-02 09:50:43.054883944 +0000 UTC m=+2.625147418 container remove 8a88c22890b9e7cef92899a698762f5dcbde77477689b75c24fcbfb1a58ba1bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 04:50:43 np0005604790 systemd[1]: libpod-conmon-8a88c22890b9e7cef92899a698762f5dcbde77477689b75c24fcbfb1a58ba1bf.scope: Deactivated successfully.
Feb  2 04:50:43 np0005604790 systemd[1]: Starting ovn_metadata_agent container...
Feb  2 04:50:43 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:50:43 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2c9cd16f6499563d8331fc3fabbb541653c7493462c4fc51518f4d6dde3fcbc/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Feb  2 04:50:43 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2c9cd16f6499563d8331fc3fabbb541653c7493462c4fc51518f4d6dde3fcbc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb  2 04:50:43 np0005604790 systemd[1]: Started /usr/bin/podman healthcheck run 29bea7eb4976451e3dfdb1e9a3aba30b10e64cbce131bf8d09548b4a224f8ee8.
Feb  2 04:50:43 np0005604790 podman[165322]: 2026-02-02 09:50:43.500278992 +0000 UTC m=+0.239985137 container init 29bea7eb4976451e3dfdb1e9a3aba30b10e64cbce131bf8d09548b4a224f8ee8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Feb  2 04:50:43 np0005604790 ovn_metadata_agent[165359]: + sudo -E kolla_set_configs
Feb  2 04:50:43 np0005604790 podman[165322]: 2026-02-02 09:50:43.534927298 +0000 UTC m=+0.274633393 container start 29bea7eb4976451e3dfdb1e9a3aba30b10e64cbce131bf8d09548b4a224f8ee8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc, name=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent)
Feb  2 04:50:43 np0005604790 edpm-start-podman-container[165322]: ovn_metadata_agent
Feb  2 04:50:43 np0005604790 ovn_metadata_agent[165359]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Feb  2 04:50:43 np0005604790 ovn_metadata_agent[165359]: INFO:__main__:Validating config file
Feb  2 04:50:43 np0005604790 ovn_metadata_agent[165359]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Feb  2 04:50:43 np0005604790 ovn_metadata_agent[165359]: INFO:__main__:Copying service configuration files
Feb  2 04:50:43 np0005604790 ovn_metadata_agent[165359]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Feb  2 04:50:43 np0005604790 ovn_metadata_agent[165359]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Feb  2 04:50:43 np0005604790 ovn_metadata_agent[165359]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Feb  2 04:50:43 np0005604790 ovn_metadata_agent[165359]: INFO:__main__:Writing out command to execute
Feb  2 04:50:43 np0005604790 ovn_metadata_agent[165359]: INFO:__main__:Setting permission for /var/lib/neutron
Feb  2 04:50:43 np0005604790 ovn_metadata_agent[165359]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Feb  2 04:50:43 np0005604790 ovn_metadata_agent[165359]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Feb  2 04:50:43 np0005604790 ovn_metadata_agent[165359]: INFO:__main__:Setting permission for /var/lib/neutron/external
Feb  2 04:50:43 np0005604790 ovn_metadata_agent[165359]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Feb  2 04:50:43 np0005604790 ovn_metadata_agent[165359]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Feb  2 04:50:43 np0005604790 ovn_metadata_agent[165359]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Feb  2 04:50:43 np0005604790 ovn_metadata_agent[165359]: ++ cat /run_command
Feb  2 04:50:43 np0005604790 ovn_metadata_agent[165359]: + CMD=neutron-ovn-metadata-agent
Feb  2 04:50:43 np0005604790 ovn_metadata_agent[165359]: + ARGS=
Feb  2 04:50:43 np0005604790 ovn_metadata_agent[165359]: + sudo kolla_copy_cacerts
Feb  2 04:50:43 np0005604790 podman[165366]: 2026-02-02 09:50:43.630604028 +0000 UTC m=+0.084863509 container health_status 29bea7eb4976451e3dfdb1e9a3aba30b10e64cbce131bf8d09548b4a224f8ee8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Feb  2 04:50:43 np0005604790 edpm-start-podman-container[165319]: Creating additional drop-in dependency for "ovn_metadata_agent" (29bea7eb4976451e3dfdb1e9a3aba30b10e64cbce131bf8d09548b4a224f8ee8)
Feb  2 04:50:43 np0005604790 ovn_metadata_agent[165359]: Running command: 'neutron-ovn-metadata-agent'
Feb  2 04:50:43 np0005604790 ovn_metadata_agent[165359]: + [[ ! -n '' ]]
Feb  2 04:50:43 np0005604790 ovn_metadata_agent[165359]: + . kolla_extend_start
Feb  2 04:50:43 np0005604790 ovn_metadata_agent[165359]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Feb  2 04:50:43 np0005604790 ovn_metadata_agent[165359]: + umask 0022
Feb  2 04:50:43 np0005604790 ovn_metadata_agent[165359]: + exec neutron-ovn-metadata-agent
Feb  2 04:50:43 np0005604790 systemd[1]: Reloading.
Feb  2 04:50:43 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v328: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 750 B/s wr, 2 op/s
Feb  2 04:50:43 np0005604790 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 04:50:43 np0005604790 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 04:50:43 np0005604790 podman[165480]: 2026-02-02 09:50:43.838376755 +0000 UTC m=+0.063825721 container create 16ec1b685b46e762bf7b730fd00993df5e76ccb01643b5ad8ed51f02648b0a51 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_wilbur, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb  2 04:50:43 np0005604790 podman[165480]: 2026-02-02 09:50:43.795961372 +0000 UTC m=+0.021410328 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:50:43 np0005604790 systemd[1]: Started ovn_metadata_agent container.
Feb  2 04:50:43 np0005604790 systemd[1]: Started libpod-conmon-16ec1b685b46e762bf7b730fd00993df5e76ccb01643b5ad8ed51f02648b0a51.scope.
Feb  2 04:50:43 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:50:44 np0005604790 podman[165480]: 2026-02-02 09:50:44.056028649 +0000 UTC m=+0.281477625 container init 16ec1b685b46e762bf7b730fd00993df5e76ccb01643b5ad8ed51f02648b0a51 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_wilbur, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Feb  2 04:50:44 np0005604790 podman[165480]: 2026-02-02 09:50:44.064125174 +0000 UTC m=+0.289574150 container start 16ec1b685b46e762bf7b730fd00993df5e76ccb01643b5ad8ed51f02648b0a51 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_wilbur, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb  2 04:50:44 np0005604790 systemd[1]: libpod-16ec1b685b46e762bf7b730fd00993df5e76ccb01643b5ad8ed51f02648b0a51.scope: Deactivated successfully.
Feb  2 04:50:44 np0005604790 fervent_wilbur[165501]: 167 167
Feb  2 04:50:44 np0005604790 conmon[165501]: conmon 16ec1b685b46e762bf7b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-16ec1b685b46e762bf7b730fd00993df5e76ccb01643b5ad8ed51f02648b0a51.scope/container/memory.events
Feb  2 04:50:44 np0005604790 podman[165480]: 2026-02-02 09:50:44.129077987 +0000 UTC m=+0.354526973 container attach 16ec1b685b46e762bf7b730fd00993df5e76ccb01643b5ad8ed51f02648b0a51 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_wilbur, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Feb  2 04:50:44 np0005604790 podman[165480]: 2026-02-02 09:50:44.130678921 +0000 UTC m=+0.356127897 container died 16ec1b685b46e762bf7b730fd00993df5e76ccb01643b5ad8ed51f02648b0a51 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_wilbur, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 04:50:44 np0005604790 systemd[1]: var-lib-containers-storage-overlay-3536f5113f85e1b56dca4584e8116c16aee2fb5d614b940b190b43e63e6b9268-merged.mount: Deactivated successfully.
Feb  2 04:50:44 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[163999]: 02/02/2026 09:50:44 : epoch 69807360 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f99680023d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:50:44 np0005604790 podman[165480]: 2026-02-02 09:50:44.299844142 +0000 UTC m=+0.525293088 container remove 16ec1b685b46e762bf7b730fd00993df5e76ccb01643b5ad8ed51f02648b0a51 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_wilbur, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 04:50:44 np0005604790 systemd[1]: libpod-conmon-16ec1b685b46e762bf7b730fd00993df5e76ccb01643b5ad8ed51f02648b0a51.scope: Deactivated successfully.
Feb  2 04:50:44 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[163999]: 02/02/2026 09:50:44 : epoch 69807360 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f99580016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:50:44 np0005604790 podman[165581]: 2026-02-02 09:50:44.448461898 +0000 UTC m=+0.028525057 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:50:44 np0005604790 podman[165581]: 2026-02-02 09:50:44.544264992 +0000 UTC m=+0.124328141 container create cc73ba617f046d9de8449a6db53ec7f5877672ba1b65816d863cffe5e43e42e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_bhabha, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 04:50:44 np0005604790 systemd[1]: Started libpod-conmon-cc73ba617f046d9de8449a6db53ec7f5877672ba1b65816d863cffe5e43e42e4.scope.
Feb  2 04:50:44 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:50:44 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3a582e45f6e08daeafaddef2927c479b2b2c7754fe511a43ef0bb62cecd7f7f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 04:50:44 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3a582e45f6e08daeafaddef2927c479b2b2c7754fe511a43ef0bb62cecd7f7f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:50:44 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3a582e45f6e08daeafaddef2927c479b2b2c7754fe511a43ef0bb62cecd7f7f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:50:44 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3a582e45f6e08daeafaddef2927c479b2b2c7754fe511a43ef0bb62cecd7f7f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 04:50:44 np0005604790 podman[165581]: 2026-02-02 09:50:44.667015157 +0000 UTC m=+0.247078306 container init cc73ba617f046d9de8449a6db53ec7f5877672ba1b65816d863cffe5e43e42e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_bhabha, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb  2 04:50:44 np0005604790 podman[165581]: 2026-02-02 09:50:44.676704197 +0000 UTC m=+0.256767346 container start cc73ba617f046d9de8449a6db53ec7f5877672ba1b65816d863cffe5e43e42e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_bhabha, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Feb  2 04:50:44 np0005604790 podman[165581]: 2026-02-02 09:50:44.681295075 +0000 UTC m=+0.261358234 container attach cc73ba617f046d9de8449a6db53ec7f5877672ba1b65816d863cffe5e43e42e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_bhabha, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Feb  2 04:50:44 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[163999]: 02/02/2026 09:50:44 : epoch 69807360 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f99580016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:50:44 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:50:44 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:50:44 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:50:44.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:50:44 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:09:50:44] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Feb  2 04:50:44 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:09:50:44] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Feb  2 04:50:44 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:50:44 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000028s ======
Feb  2 04:50:44 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:50:44.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb  2 04:50:44 np0005604790 python3.9[165698]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Feb  2 04:50:44 np0005604790 friendly_bhabha[165641]: {
Feb  2 04:50:44 np0005604790 friendly_bhabha[165641]:    "1": [
Feb  2 04:50:44 np0005604790 friendly_bhabha[165641]:        {
Feb  2 04:50:44 np0005604790 friendly_bhabha[165641]:            "devices": [
Feb  2 04:50:44 np0005604790 friendly_bhabha[165641]:                "/dev/loop3"
Feb  2 04:50:44 np0005604790 friendly_bhabha[165641]:            ],
Feb  2 04:50:44 np0005604790 friendly_bhabha[165641]:            "lv_name": "ceph_lv0",
Feb  2 04:50:44 np0005604790 friendly_bhabha[165641]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 04:50:44 np0005604790 friendly_bhabha[165641]:            "lv_size": "21470642176",
Feb  2 04:50:44 np0005604790 friendly_bhabha[165641]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d241d473-9fcb-5f74-b163-f1ca4454e7f1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fabfc705-a3af-416c-81a4-3fd4d777fb5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 04:50:44 np0005604790 friendly_bhabha[165641]:            "lv_uuid": "lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a",
Feb  2 04:50:44 np0005604790 friendly_bhabha[165641]:            "name": "ceph_lv0",
Feb  2 04:50:44 np0005604790 friendly_bhabha[165641]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 04:50:44 np0005604790 friendly_bhabha[165641]:            "tags": {
Feb  2 04:50:44 np0005604790 friendly_bhabha[165641]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 04:50:44 np0005604790 friendly_bhabha[165641]:                "ceph.block_uuid": "lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a",
Feb  2 04:50:44 np0005604790 friendly_bhabha[165641]:                "ceph.cephx_lockbox_secret": "",
Feb  2 04:50:44 np0005604790 friendly_bhabha[165641]:                "ceph.cluster_fsid": "d241d473-9fcb-5f74-b163-f1ca4454e7f1",
Feb  2 04:50:44 np0005604790 friendly_bhabha[165641]:                "ceph.cluster_name": "ceph",
Feb  2 04:50:44 np0005604790 friendly_bhabha[165641]:                "ceph.crush_device_class": "",
Feb  2 04:50:44 np0005604790 friendly_bhabha[165641]:                "ceph.encrypted": "0",
Feb  2 04:50:44 np0005604790 friendly_bhabha[165641]:                "ceph.osd_fsid": "fabfc705-a3af-416c-81a4-3fd4d777fb5f",
Feb  2 04:50:44 np0005604790 friendly_bhabha[165641]:                "ceph.osd_id": "1",
Feb  2 04:50:44 np0005604790 friendly_bhabha[165641]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 04:50:44 np0005604790 friendly_bhabha[165641]:                "ceph.type": "block",
Feb  2 04:50:44 np0005604790 friendly_bhabha[165641]:                "ceph.vdo": "0",
Feb  2 04:50:44 np0005604790 friendly_bhabha[165641]:                "ceph.with_tpm": "0"
Feb  2 04:50:44 np0005604790 friendly_bhabha[165641]:            },
Feb  2 04:50:44 np0005604790 friendly_bhabha[165641]:            "type": "block",
Feb  2 04:50:44 np0005604790 friendly_bhabha[165641]:            "vg_name": "ceph_vg0"
Feb  2 04:50:44 np0005604790 friendly_bhabha[165641]:        }
Feb  2 04:50:44 np0005604790 friendly_bhabha[165641]:    ]
Feb  2 04:50:44 np0005604790 friendly_bhabha[165641]: }
Feb  2 04:50:45 np0005604790 systemd[1]: libpod-cc73ba617f046d9de8449a6db53ec7f5877672ba1b65816d863cffe5e43e42e4.scope: Deactivated successfully.
Feb  2 04:50:45 np0005604790 podman[165581]: 2026-02-02 09:50:45.02731677 +0000 UTC m=+0.607379879 container died cc73ba617f046d9de8449a6db53ec7f5877672ba1b65816d863cffe5e43e42e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_bhabha, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 04:50:45 np0005604790 systemd[1]: var-lib-containers-storage-overlay-f3a582e45f6e08daeafaddef2927c479b2b2c7754fe511a43ef0bb62cecd7f7f-merged.mount: Deactivated successfully.
Feb  2 04:50:45 np0005604790 podman[165581]: 2026-02-02 09:50:45.069467416 +0000 UTC m=+0.649530545 container remove cc73ba617f046d9de8449a6db53ec7f5877672ba1b65816d863cffe5e43e42e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_bhabha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1)
Feb  2 04:50:45 np0005604790 systemd[1]: libpod-conmon-cc73ba617f046d9de8449a6db53ec7f5877672ba1b65816d863cffe5e43e42e4.scope: Deactivated successfully.
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.328 165364 INFO neutron.common.config [-] Logging enabled!#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.328 165364 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev44#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.328 165364 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123#033[00m
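
Everything from the banner below down to the `ovn.*` options is oslo.config's standard startup dump: because `debug = True`, the agent calls `ConfigOpts.log_opt_values()` (hence the `cfg.py:2589`-`cfg.py:2609` frames on every line), which prints the banner, the command line, the config files, and then one resolved option per DEBUG record. A minimal, self-contained sketch of that mechanism, using the real oslo.config API with a toy option set:

    import logging
    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger('neutron.agent.ovn.metadata_agent')

    conf = cfg.ConfigOpts()
    conf.register_opts([cfg.IntOpt('agent_down_time', default=75)])
    conf([], project='neutron')  # in the agent this also reads /etc/neutron/neutron.conf

    # Prints the ****...**** banner, "command line args", "config files",
    # and one "name = value" line per registered option, as seen below.
    conf.log_opt_values(LOG, logging.DEBUG)
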
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.328 165364 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.329 165364 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.329 165364 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.329 165364 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.329 165364 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.329 165364 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.329 165364 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.329 165364 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.329 165364 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.329 165364 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.330 165364 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.330 165364 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.330 165364 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.330 165364 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.330 165364 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.330 165364 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.330 165364 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.330 165364 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.330 165364 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.330 165364 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.331 165364 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.331 165364 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.331 165364 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.331 165364 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.331 165364 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.331 165364 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.331 165364 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.331 165364 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.331 165364 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.331 165364 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.332 165364 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.332 165364 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.332 165364 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.332 165364 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.332 165364 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.332 165364 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.332 165364 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.332 165364 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.332 165364 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.332 165364 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.333 165364 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.333 165364 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.333 165364 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.333 165364 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.333 165364 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.333 165364 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.333 165364 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.333 165364 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.333 165364 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.333 165364 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.333 165364 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.334 165364 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.334 165364 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.334 165364 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.334 165364 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.334 165364 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.334 165364 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.334 165364 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.334 165364 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.334 165364 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.334 165364 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.335 165364 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.335 165364 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.335 165364 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.335 165364 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.335 165364 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.335 165364 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.335 165364 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.335 165364 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.335 165364 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.335 165364 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.336 165364 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.336 165364 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.336 165364 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.336 165364 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.336 165364 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.336 165364 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.336 165364 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.336 165364 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.336 165364 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.336 165364 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.337 165364 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.337 165364 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.337 165364 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.337 165364 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.337 165364 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.337 165364 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.337 165364 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.337 165364 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.337 165364 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.337 165364 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.337 165364 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.338 165364 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.338 165364 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.338 165364 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.338 165364 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.338 165364 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.338 165364 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.338 165364 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.338 165364 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.338 165364 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.338 165364 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.338 165364 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.339 165364 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.339 165364 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
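
Note the masking above: `metadata_proxy_shared_secret` and `transport_url` are logged as `****`. oslo.config prints `****` for any option registered with `secret=True`, so `log_opt_values()` never leaks credentials into a debug log. A sketch of the registration side (option names mirror the dump; the defaults here are illustrative):

    from oslo_config import cfg

    opts = [
        # secret=True => log_opt_values() shows **** instead of the value.
        cfg.StrOpt('metadata_proxy_shared_secret', secret=True, default=''),
        # Ordinary options are printed verbatim, like nova_metadata_host above.
        cfg.StrOpt('nova_metadata_host', default='127.0.0.1'),
    ]
    cfg.CONF.register_opts(opts)
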
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.339 165364 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.339 165364 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.339 165364 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.339 165364 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.339 165364 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.339 165364 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.339 165364 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.339 165364 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.340 165364 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.340 165364 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.340 165364 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.340 165364 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.340 165364 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.340 165364 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.340 165364 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.340 165364 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.340 165364 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.341 165364 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.341 165364 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.341 165364 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.341 165364 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.341 165364 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
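
The `oslo_policy.*` block is the standard policy-engine configuration (rule file, rule directories, default rule) that neutron processes register whether or not they enforce API policy themselves. Consuming it looks roughly like this (real oslo.policy API; minimal sketch):

    from oslo_config import cfg
    from oslo_policy import policy

    # Uses oslo_policy.policy_file / policy_dirs / policy_default_rule
    # from the configuration dumped above.
    enforcer = policy.Enforcer(cfg.CONF)
    enforcer.load_rules()
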
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.341 165364 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.341 165364 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.341 165364 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.341 165364 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.341 165364 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.342 165364 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.342 165364 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.342 165364 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.342 165364 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.342 165364 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.342 165364 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.342 165364 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.342 165364 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.342 165364 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.342 165364 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.343 165364 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.343 165364 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.343 165364 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.343 165364 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.343 165364 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.343 165364 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.343 165364 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.343 165364 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.343 165364 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.343 165364 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.344 165364 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.344 165364 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.344 165364 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.344 165364 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.344 165364 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.344 165364 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.344 165364 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.344 165364 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.344 165364 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.345 165364 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.345 165364 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.345 165364 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.345 165364 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.345 165364 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.345 165364 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.345 165364 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.345 165364 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.345 165364 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
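
The six `privsep*` groups above each configure one oslo.privsep context, and the `capabilities` values are Linux capability numbers: for the default context, [21, 12, 1, 2, 19] should be CAP_SYS_ADMIN, CAP_NET_ADMIN, CAP_DAC_OVERRIDE, CAP_DAC_READ_SEARCH and CAP_SYS_PTRACE. A sketch of how such a context is declared and used (real oslo.privsep API; the decorated function is illustrative, patterned on neutron.privileged):

    from oslo_privsep import capabilities as caps
    from oslo_privsep import priv_context

    # Reads user/group/helper_command/thread_pool_size from the [privsep]
    # config section, i.e. the privsep.* options logged above.
    default = priv_context.PrivContext(
        __name__,
        cfg_section='privsep',
        pypath=__name__ + '.default',
        capabilities=[caps.CAP_SYS_ADMIN, caps.CAP_NET_ADMIN,
                      caps.CAP_DAC_OVERRIDE, caps.CAP_DAC_READ_SEARCH,
                      caps.CAP_SYS_PTRACE],
    )

    @default.entrypoint
    def set_link_up(ifname):
        ...  # body runs in the forked privileged daemon, not in the agent
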
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.345 165364 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.346 165364 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.346 165364 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.346 165364 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.346 165364 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.346 165364 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.346 165364 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.346 165364 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.346 165364 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
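
`AGENT.root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf` is the prefix the agent prepends to privileged shell commands that still go through rootwrap rather than privsep (`use_helper_for_ns_read = True` applies it to namespace reads as well). A trivial sketch of the wrapping, with an illustrative helper rather than neutron's actual executor:

    import subprocess

    ROOT_HELPER = 'sudo neutron-rootwrap /etc/neutron/rootwrap.conf'

    def execute(cmd, run_as_root=False):
        # Mirror AGENT.root_helper: ['sudo', 'neutron-rootwrap', ...] + cmd
        if run_as_root:
            cmd = ROOT_HELPER.split() + cmd
        return subprocess.run(cmd, capture_output=True, text=True)

    # e.g. execute(['ip', 'netns', 'list'], run_as_root=True)
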
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.346 165364 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.346 165364 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.347 165364 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.347 165364 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.347 165364 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.347 165364 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.347 165364 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.347 165364 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.347 165364 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.347 165364 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.347 165364 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.347 165364 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.348 165364 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.348 165364 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.348 165364 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.348 165364 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.348 165364 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.348 165364 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.348 165364 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.348 165364 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.348 165364 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.348 165364 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.349 165364 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.349 165364 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.349 165364 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.349 165364 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.349 165364 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.349 165364 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.349 165364 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.349 165364 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.349 165364 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.349 165364 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.350 165364 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.350 165364 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.350 165364 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.350 165364 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.350 165364 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.350 165364 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.350 165364 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.350 165364 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.350 165364 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.350 165364 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.351 165364 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.351 165364 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.351 165364 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.351 165364 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.351 165364 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.351 165364 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.351 165364 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.351 165364 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.351 165364 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.351 165364 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.352 165364 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.352 165364 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.352 165364 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.352 165364 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.352 165364 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.352 165364 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.352 165364 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.352 165364 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.353 165364 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.353 165364 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.353 165364 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.353 165364 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.353 165364 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.353 165364 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.353 165364 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.353 165364 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.353 165364 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.354 165364 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.354 165364 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.354 165364 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.354 165364 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.354 165364 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.354 165364 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.354 165364 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.354 165364 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.354 165364 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.354 165364 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.355 165364 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.355 165364 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.355 165364 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.355 165364 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.355 165364 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.355 165364 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.355 165364 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.355 165364 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.355 165364 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.355 165364 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.356 165364 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.356 165364 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.356 165364 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.356 165364 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.356 165364 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.356 165364 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.356 165364 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.356 165364 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.356 165364 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.357 165364 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.357 165364 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.357 165364 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.357 165364 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.357 165364 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.357 165364 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.357 165364 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.357 165364 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.357 165364 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.358 165364 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.358 165364 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.358 165364 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.358 165364 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.358 165364 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.358 165364 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.358 165364 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.358 165364 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.358 165364 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.358 165364 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.359 165364 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.359 165364 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.359 165364 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.359 165364 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.359 165364 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
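[note] The starred banner above closes an oslo.config option dump: at DEBUG verbosity the agent calls ConfigOpts.log_opt_values(), which emits one "section.name = value" line per registered option and masks options registered with secret=True (hence "transport_url = ****"). A minimal sketch of that mechanism, assuming a standalone script and illustratively chosen option names:

    import logging

    from oslo_config import cfg

    # Illustrative options; secret=True is what produces the masked **** value.
    opts = [
        cfg.StrOpt('ovn_sb_connection',
                   default='ssl:ovsdbserver-sb.openstack.svc:6642'),
        cfg.BoolOpt('ovn_metadata_enabled', default=False),
    ]

    CONF = cfg.ConfigOpts()
    CONF.register_opts(opts, group='ovn')
    CONF.register_opts([cfg.StrOpt('transport_url', secret=True)])

    logging.basicConfig(level=logging.DEBUG)
    CONF([])  # parse an empty command line
    # Emits the starred banner plus one "name = value" line per option.
    CONF.log_opt_values(logging.getLogger(__name__), logging.DEBUG)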
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.369 165364 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.369 165364 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.369 165364 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.370 165364 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.370 165364 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.388 165364 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 031ca08d-19ea-44b4-b1bd-33ab088eb6a6 (UUID: 031ca08d-19ea-44b4-b1bd-33ab088eb6a6) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.407 165364 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.407 165364 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.407 165364 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.407 165364 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.409 165364 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.415 165364 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.420 165364 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '031ca08d-19ea-44b4-b1bd-33ab088eb6a6'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7fa8c6e46640>], external_ids={}, name=031ca08d-19ea-44b4-b1bd-33ab088eb6a6, nb_cfg_timestamp=1770025776869, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.421 165364 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7fa8c6e31f40>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.422 165364 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.422 165364 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.422 165364 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.422 165364 INFO oslo_service.service [-] Starting 1 workers#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.426 165364 DEBUG oslo_service.service [-] Started child 165809 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575#033[00m
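[note] "Starting 1 workers" followed by "Started child 165809" is oslo.service's ProcessLauncher forking the metadata-proxy worker process. A minimal sketch of that fork pattern, assuming a trivial placeholder service class (the real agent starts its metadata proxy server in the child):

    from oslo_config import cfg
    from oslo_service import service

    class ProxyWorker(service.Service):
        # Placeholder worker; start() runs in each forked child.
        def start(self):
            super().start()

    launcher = service.ProcessLauncher(cfg.CONF)
    launcher.launch_service(ProxyWorker(), workers=1)  # logs "Starting 1 workers"
    launcher.wait()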
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.429 165364 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmp_26sdv4_/privsep.sock']#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.431 165809 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-2006748'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.474 165809 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.474 165809 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.475 165809 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.478 165809 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.487 165809 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Feb  2 04:50:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.494 165809 INFO eventlet.wsgi.server [-] (165809) wsgi starting up on http:/var/lib/neutron/metadata_proxy#033[00m
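[note] The one-slash URL in the wsgi start-up line is eventlet printing its bind target verbatim: the metadata proxy listens on a unix socket path, not a TCP address. A minimal sketch of the same pattern, assuming a trivial WSGI app and an illustrative socket path:

    import socket

    import eventlet
    import eventlet.wsgi

    def app(environ, start_response):
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return [b'ok\n']

    # eventlet logs this as "wsgi starting up on http:/tmp/metadata_proxy.sock"
    sock = eventlet.listen('/tmp/metadata_proxy.sock', family=socket.AF_UNIX)
    eventlet.wsgi.server(sock, app)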
Feb  2 04:50:45 np0005604790 podman[165836]: 2026-02-02 09:50:45.551393272 +0000 UTC m=+0.036300724 container create b001ab671cf602cbb5c3943fbc260f3d8d9f7e3ebeb6e3cf108a5c727caf560e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_northcutt, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Feb  2 04:50:45 np0005604790 systemd[1]: Started libpod-conmon-b001ab671cf602cbb5c3943fbc260f3d8d9f7e3ebeb6e3cf108a5c727caf560e.scope.
Feb  2 04:50:45 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:50:45 np0005604790 podman[165836]: 2026-02-02 09:50:45.624752079 +0000 UTC m=+0.109659571 container init b001ab671cf602cbb5c3943fbc260f3d8d9f7e3ebeb6e3cf108a5c727caf560e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb  2 04:50:45 np0005604790 podman[165836]: 2026-02-02 09:50:45.534284595 +0000 UTC m=+0.019192067 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:50:45 np0005604790 podman[165836]: 2026-02-02 09:50:45.634404428 +0000 UTC m=+0.119311900 container start b001ab671cf602cbb5c3943fbc260f3d8d9f7e3ebeb6e3cf108a5c727caf560e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_northcutt, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Feb  2 04:50:45 np0005604790 competent_northcutt[165870]: 167 167
Feb  2 04:50:45 np0005604790 podman[165836]: 2026-02-02 09:50:45.638224985 +0000 UTC m=+0.123132457 container attach b001ab671cf602cbb5c3943fbc260f3d8d9f7e3ebeb6e3cf108a5c727caf560e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Feb  2 04:50:45 np0005604790 systemd[1]: libpod-b001ab671cf602cbb5c3943fbc260f3d8d9f7e3ebeb6e3cf108a5c727caf560e.scope: Deactivated successfully.
Feb  2 04:50:45 np0005604790 podman[165836]: 2026-02-02 09:50:45.641033923 +0000 UTC m=+0.125941385 container died b001ab671cf602cbb5c3943fbc260f3d8d9f7e3ebeb6e3cf108a5c727caf560e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_northcutt, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid)
Feb  2 04:50:45 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v329: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 750 B/s wr, 2 op/s
Feb  2 04:50:45 np0005604790 systemd[1]: var-lib-containers-storage-overlay-b97f59ee9e9d14fd100ee97addd5e2f8e9ef0378bd3dd4754ec82b6cef8cdb49-merged.mount: Deactivated successfully.
Feb  2 04:50:45 np0005604790 podman[165836]: 2026-02-02 09:50:45.682719016 +0000 UTC m=+0.167626508 container remove b001ab671cf602cbb5c3943fbc260f3d8d9f7e3ebeb6e3cf108a5c727caf560e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_northcutt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:50:45 np0005604790 systemd[1]: libpod-conmon-b001ab671cf602cbb5c3943fbc260f3d8d9f7e3ebeb6e3cf108a5c727caf560e.scope: Deactivated successfully.
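[note] The create/init/start/attach/died/remove sequence above completes in well under a second: cephadm probes the host by running one-shot commands inside the digest-pinned ceph image and removing the container immediately. A minimal sketch of that pattern, assuming podman is on PATH (the command shown is illustrative, not the exact probe cephadm ran):

    import subprocess

    IMAGE = ('quay.io/ceph/ceph@sha256:'
             '7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec')

    # --rm tears the container down as soon as the command exits, producing
    # the rapid create -> start -> attach -> died -> remove lifecycle above.
    out = subprocess.run(
        ['podman', 'run', '--rm', IMAGE, 'ceph', '--version'],
        capture_output=True, text=True, check=True,
    )
    print(out.stdout.strip())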
Feb  2 04:50:45 np0005604790 podman[165952]: 2026-02-02 09:50:45.804936517 +0000 UTC m=+0.045251514 container create 82fe19fb30c28d71775a3c59390f2c12a5805a13cbfd77e2fa6483a6e41e6f2a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_bose, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:50:45 np0005604790 systemd[1]: Started libpod-conmon-82fe19fb30c28d71775a3c59390f2c12a5805a13cbfd77e2fa6483a6e41e6f2a.scope.
Feb  2 04:50:45 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:50:45 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d7654f161357f9c1368e309bb70bb78e024c191534ddd67a576fe2255822653/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 04:50:45 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d7654f161357f9c1368e309bb70bb78e024c191534ddd67a576fe2255822653/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:50:45 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d7654f161357f9c1368e309bb70bb78e024c191534ddd67a576fe2255822653/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:50:45 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d7654f161357f9c1368e309bb70bb78e024c191534ddd67a576fe2255822653/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 04:50:45 np0005604790 podman[165952]: 2026-02-02 09:50:45.784719352 +0000 UTC m=+0.025034389 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:50:45 np0005604790 podman[165952]: 2026-02-02 09:50:45.881558384 +0000 UTC m=+0.121873401 container init 82fe19fb30c28d71775a3c59390f2c12a5805a13cbfd77e2fa6483a6e41e6f2a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_bose, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True)
Feb  2 04:50:45 np0005604790 podman[165952]: 2026-02-02 09:50:45.889310961 +0000 UTC m=+0.129625948 container start 82fe19fb30c28d71775a3c59390f2c12a5805a13cbfd77e2fa6483a6e41e6f2a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_bose, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Feb  2 04:50:45 np0005604790 podman[165952]: 2026-02-02 09:50:45.892537451 +0000 UTC m=+0.132852538 container attach 82fe19fb30c28d71775a3c59390f2c12a5805a13cbfd77e2fa6483a6e41e6f2a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_bose, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 04:50:45 np0005604790 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Feb  2 04:50:46 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:46.071 165364 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Feb  2 04:50:46 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:46.072 165364 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp_26sdv4_/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Feb  2 04:50:46 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.963 166028 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Feb  2 04:50:46 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.966 166028 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Feb  2 04:50:46 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.968 166028 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m
Feb  2 04:50:46 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:45.968 166028 INFO oslo.privsep.daemon [-] privsep daemon running as pid 166028#033[00m
Feb  2 04:50:46 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:46.075 166028 DEBUG oslo.privsep.daemon [-] privsep: reply[f0c91ae4-c5c6-4d81-b15d-0d8f3faeefaf]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
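[note] The handshake above shows oslo.privsep spawning a root helper via sudo/rootwrap and reporting the daemon's uid/gid and capabilities. A minimal sketch of the client-side pattern, assuming an illustrative context name and entrypoint (not neutron's actual privileged module):

    from oslo_privsep import capabilities
    from oslo_privsep import priv_context

    # The first call to an entrypoint forks the daemon ("privsep daemon
    # starting") and proxies calls over a unix socket such as
    # /tmp/tmp_26sdv4_/privsep.sock.
    namespace_cmd = priv_context.PrivContext(
        __name__,
        cfg_section='privsep',
        pypath=__name__ + '.namespace_cmd',
        capabilities=[capabilities.CAP_SYS_ADMIN],  # matches eff/prm caps logged
    )

    @namespace_cmd.entrypoint
    def list_namespaces():
        # Body executes inside the privileged daemon (uid/gid 0/0 above).
        import os
        return os.listdir('/var/run/netns')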
Feb  2 04:50:46 np0005604790 python3.9[166027]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:50:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[163999]: 02/02/2026 09:50:46 : epoch 69807360 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9970001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:50:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[163999]: 02/02/2026 09:50:46 : epoch 69807360 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f99680023d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:50:46 np0005604790 lvm[166225]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 04:50:46 np0005604790 lvm[166225]: VG ceph_vg0 finished
Feb  2 04:50:46 np0005604790 strange_bose[166004]: {}
Feb  2 04:50:46 np0005604790 systemd[1]: libpod-82fe19fb30c28d71775a3c59390f2c12a5805a13cbfd77e2fa6483a6e41e6f2a.scope: Deactivated successfully.
Feb  2 04:50:46 np0005604790 podman[165952]: 2026-02-02 09:50:46.559755348 +0000 UTC m=+0.800070355 container died 82fe19fb30c28d71775a3c59390f2c12a5805a13cbfd77e2fa6483a6e41e6f2a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_bose, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb  2 04:50:46 np0005604790 systemd[1]: var-lib-containers-storage-overlay-0d7654f161357f9c1368e309bb70bb78e024c191534ddd67a576fe2255822653-merged.mount: Deactivated successfully.
Feb  2 04:50:46 np0005604790 python3.9[166227]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770025845.6107054-1422-21192224722126/.source.yaml _original_basename=.u5cuq8gj follow=False checksum=f6b794fee8fdd156223951721cad4bcef298320f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:50:46 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:46.620 166028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 04:50:46 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:46.620 166028 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 04:50:46 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:46.620 166028 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 04:50:46 np0005604790 podman[165952]: 2026-02-02 09:50:46.636105858 +0000 UTC m=+0.876420895 container remove 82fe19fb30c28d71775a3c59390f2c12a5805a13cbfd77e2fa6483a6e41e6f2a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_bose, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Feb  2 04:50:46 np0005604790 systemd[1]: libpod-conmon-82fe19fb30c28d71775a3c59390f2c12a5805a13cbfd77e2fa6483a6e41e6f2a.scope: Deactivated successfully.
Feb  2 04:50:46 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 04:50:46 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:50:46 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 04:50:46 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:50:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[163999]: 02/02/2026 09:50:46 : epoch 69807360 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f99680023d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:50:46 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:50:46 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:50:46 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:50:46.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:50:46 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:50:46 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000028s ======
Feb  2 04:50:46 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:50:46.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
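[note] The anonymous "HEAD / HTTP/1.0" requests with near-zero latency are load-balancer health probes against radosgw's beast frontend. A minimal sketch of such a probe, assuming an illustrative RGW host and port:

    import http.client

    # Equivalent of the probes logged above; a healthy RGW answers 200, no body.
    conn = http.client.HTTPConnection('192.168.122.100', 8080, timeout=2)
    conn.request('HEAD', '/')
    print(conn.getresponse().status)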
Feb  2 04:50:46 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:50:47 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:50:47.000Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb  2 04:50:47 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:50:47.002Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
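[note] The Alertmanager failures above mean nothing answered on the dashboard's /api/prometheus_receiver endpoint on compute-1/compute-2 before the retry deadline. A minimal sketch of a webhook receiver that would satisfy those retries, assuming the port and path from the failing URL (Alertmanager delivers alerts as a JSON POST):

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Receiver(BaseHTTPRequestHandler):
        def do_POST(self):
            length = int(self.headers.get('Content-Length', 0))
            payload = json.loads(self.rfile.read(length) or b'{}')
            # A real receiver would fan out payload.get('alerts', []) here.
            self.send_response(200)
            self.end_headers()

    HTTPServer(('0.0.0.0', 8443), Receiver).serve_forever()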
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.127 166028 DEBUG oslo.privsep.daemon [-] privsep: reply[c7428e24-a766-4f09-bb11-4d3a1f04d694]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.131 165364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=031ca08d-19ea-44b4-b1bd-33ab088eb6a6, column=external_ids, values=({'neutron:ovn-metadata-id': 'c6e70afb-d25e-560f-9eab-103c8533872a'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 04:50:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 04:50:47 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 04:50:47 np0005604790 systemd[1]: session-52.scope: Deactivated successfully.
Feb  2 04:50:47 np0005604790 systemd[1]: session-52.scope: Consumed 53.634s CPU time.
Feb  2 04:50:47 np0005604790 systemd-logind[793]: Session 52 logged out. Waiting for processes to exit.
Feb  2 04:50:47 np0005604790 systemd-logind[793]: Removed session 52.
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.187 165364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=031ca08d-19ea-44b4-b1bd-33ab088eb6a6, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
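[note] The two transactions above write the agent's markers into its Chassis_Private row: DbAddCommand merges neutron:ovn-metadata-id into the external_ids map, DbSetCommand sets neutron:ovn-bridge. A minimal sketch of the same writes with ovsdbapp, assuming a reachable southbound DB (TLS key/cert setup for the ssl: endpoint omitted):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.ovn_southbound import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'ssl:ovsdbserver-sb.openstack.svc:6642', 'OVN_Southbound')
    api = impl_idl.OvnSbApiIdlImpl(connection.Connection(idl, timeout=180))

    chassis = '031ca08d-19ea-44b4-b1bd-33ab088eb6a6'
    with api.transaction(check_error=True) as txn:
        # Mirrors DbAddCommand: merge a key into the external_ids map.
        txn.add(api.db_add('Chassis_Private', chassis, 'external_ids',
                           {'neutron:ovn-metadata-id':
                            'c6e70afb-d25e-560f-9eab-103c8533872a'}))
        # Mirrors DbSetCommand: set/overwrite another external_ids key.
        txn.add(api.db_set('Chassis_Private', chassis,
                           ('external_ids', {'neutron:ovn-bridge': 'br-int'})))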
Feb  2 04:50:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:50:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:50:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:50:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.249 165364 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649#033[00m
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.249 165364 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.250 165364 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.250 165364 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.250 165364 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.250 165364 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.251 165364 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.251 165364 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.252 165364 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.252 165364 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.252 165364 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.253 165364 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.253 165364 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.253 165364 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.254 165364 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.254 165364 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.255 165364 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.255 165364 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.255 165364 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.255 165364 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.256 165364 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.256 165364 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.256 165364 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.256 165364 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.257 165364 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.257 165364 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.257 165364 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.258 165364 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.258 165364 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.258 165364 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.259 165364 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.259 165364 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.259 165364 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.259 165364 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.260 165364 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.260 165364 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.260 165364 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.261 165364 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.261 165364 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.261 165364 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.262 165364 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.262 165364 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.262 165364 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.262 165364 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.263 165364 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.263 165364 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.263 165364 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.263 165364 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.264 165364 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.264 165364 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.264 165364 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.264 165364 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.265 165364 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.265 165364 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.265 165364 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.265 165364 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.265 165364 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.266 165364 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.266 165364 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.266 165364 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.266 165364 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.267 165364 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.267 165364 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.267 165364 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.267 165364 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.268 165364 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.268 165364 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.268 165364 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.268 165364 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.269 165364 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.269 165364 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.269 165364 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.269 165364 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.270 165364 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.270 165364 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.270 165364 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.270 165364 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.271 165364 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.271 165364 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.271 165364 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.271 165364 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.272 165364 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.272 165364 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.272 165364 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.272 165364 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.273 165364 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.273 165364 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.273 165364 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.273 165364 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.274 165364 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.274 165364 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.274 165364 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.274 165364 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.275 165364 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.275 165364 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.275 165364 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.275 165364 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.276 165364 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.276 165364 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.276 165364 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.276 165364 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.276 165364 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.277 165364 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.277 165364 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.277 165364 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.277 165364 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.278 165364 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.278 165364 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.278 165364 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.279 165364 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.279 165364 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.279 165364 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.279 165364 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.280 165364 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.280 165364 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.280 165364 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.280 165364 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.281 165364 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.281 165364 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.281 165364 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.281 165364 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.282 165364 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.282 165364 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.282 165364 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.282 165364 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.283 165364 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.283 165364 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.283 165364 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.283 165364 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.284 165364 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.284 165364 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:50:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.284 165364 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.284 165364 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.285 165364 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.285 165364 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.285 165364 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.286 165364 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.286 165364 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.286 165364 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.286 165364 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.287 165364 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.287 165364 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.287 165364 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.287 165364 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.288 165364 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.288 165364 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.288 165364 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.288 165364 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.289 165364 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.289 165364 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.289 165364 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.289 165364 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.290 165364 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.290 165364 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.290 165364 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.290 165364 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.290 165364 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.291 165364 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.291 165364 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.291 165364 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.291 165364 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.292 165364 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.292 165364 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.292 165364 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.292 165364 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.292 165364 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.293 165364 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.293 165364 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.293 165364 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.293 165364 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.293 165364 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.293 165364 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.293 165364 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.294 165364 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.294 165364 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.294 165364 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.294 165364 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.294 165364 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.294 165364 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.295 165364 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.295 165364 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.295 165364 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.295 165364 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.295 165364 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.295 165364 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.295 165364 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.296 165364 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.296 165364 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.296 165364 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.296 165364 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.296 165364 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.296 165364 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.297 165364 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.297 165364 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.297 165364 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.297 165364 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.297 165364 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.297 165364 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.298 165364 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.298 165364 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.298 165364 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.298 165364 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.298 165364 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.298 165364 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.298 165364 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.299 165364 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.299 165364 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.299 165364 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.299 165364 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.299 165364 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.299 165364 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.299 165364 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.300 165364 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.300 165364 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.300 165364 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.300 165364 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.300 165364 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.300 165364 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.300 165364 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.301 165364 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.301 165364 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.301 165364 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.301 165364 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.301 165364 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.301 165364 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.301 165364 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.302 165364 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.302 165364 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.302 165364 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.302 165364 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.302 165364 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.302 165364 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.302 165364 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.303 165364 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.303 165364 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.303 165364 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.303 165364 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.303 165364 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.303 165364 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.303 165364 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.304 165364 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.304 165364 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.304 165364 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.304 165364 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.304 165364 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.304 165364 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.305 165364 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.305 165364 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.305 165364 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.305 165364 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.305 165364 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.305 165364 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.306 165364 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.306 165364 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.306 165364 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.306 165364 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.306 165364 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.306 165364 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.306 165364 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.307 165364 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.307 165364 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.307 165364 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.307 165364 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.307 165364 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.307 165364 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.307 165364 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.308 165364 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.308 165364 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.308 165364 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.308 165364 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.308 165364 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.308 165364 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.309 165364 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.309 165364 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.309 165364 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.309 165364 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.309 165364 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.309 165364 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.309 165364 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.310 165364 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.310 165364 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.310 165364 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.310 165364 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.310 165364 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.310 165364 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.310 165364 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.311 165364 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.311 165364 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.311 165364 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.311 165364 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.311 165364 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.311 165364 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.311 165364 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.312 165364 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.312 165364 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.312 165364 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.312 165364 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.312 165364 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.312 165364 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.312 165364 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.313 165364 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.313 165364 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.313 165364 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:50:47 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:50:47.313 165364 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
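The DEBUG block above is oslo.config's startup dump of every registered option, emitted by ConfigOpts.log_opt_values() (the cfg.py:2609 reference on each line); options marked secret, such as transport_url, are masked as ****. A minimal sketch of the same mechanism, assuming oslo.config is installed; the single option registered here is illustrative, not the agent's real registration code:

    import logging
    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    # Register one option in an 'ovn' group, mirroring the ovn.* lines above.
    cfg.CONF.register_opts([cfg.IntOpt('ovsdb_probe_interval', default=60000)],
                           group='ovn')
    cfg.CONF([])  # parse an empty command line, no config files

    # Emits one "ovn.ovsdb_probe_interval = 60000" style line per option;
    # secret options are rendered as ****.
    cfg.CONF.log_opt_values(LOG, logging.DEBUG)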
Feb  2 04:50:47 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:50:47 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:50:47 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v330: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 125 B/s rd, 0 B/s wr, 0 op/s
Feb  2 04:50:48 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[163999]: 02/02/2026 09:50:48 : epoch 69807360 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f995c000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:50:48 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[163999]: 02/02/2026 09:50:48 : epoch 69807360 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f997c0030f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:50:48 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[163999]: 02/02/2026 09:50:48 : epoch 69807360 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9970002ee0 fd 38 proxy header rest len failed header rlen = % (will set dead)
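These ganesha EVENT lines appear to come from ntirpc's HAProxy PROXY-protocol handling: a connection on fd 38 closed before a complete proxy header arrived, so svc_vc_recv marks the transport dead, which is consistent with the Layer4 checks from the haproxy-nfs container seen further down. A hedged sketch of the PROXY v1 preamble such a proxied connection would carry; the host and port are placeholders, not taken from these logs:

    import socket

    # PROXY protocol v1 preamble: "PROXY TCP4 <src> <dst> <sport> <dport>\r\n".
    preamble = b'PROXY TCP4 192.168.122.100 192.168.122.100 51234 2049\r\n'
    s = socket.create_connection(('compute-0.ctlplane.example.com', 2049), timeout=5)
    s.sendall(preamble)  # a truncated or absent preamble is the kind of input
    s.close()            # that produces the "proxy header rest len failed" event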
Feb  2 04:50:48 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:50:48 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000028s ======
Feb  2 04:50:48 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:50:48.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb  2 04:50:48 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:50:48 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:50:48 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:50:48.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
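The paired radosgw entries every two seconds are anonymous "HEAD /" probes from 192.168.122.100 and 192.168.122.102, i.e. load-balancer health checks against the beast frontend. The same probe sketched in Python; the port is an assumption, since it is not visible in these lines:

    import http.client

    conn = http.client.HTTPConnection('192.168.122.100', 8080, timeout=5)  # port assumed
    conn.request('HEAD', '/')
    print(conn.getresponse().status)  # 200 expected, as in the beast access lines
    conn.close()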
Feb  2 04:50:48 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:50:48.904Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
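Alertmanager's ceph-dashboard receiver fans each alert out to a webhook on every mgr host and logs a dispatcher error once all retries fail; here both compute-1 and compute-2 time out. A sketch that probes one of those endpoints the same way (HTTP POST of a JSON body); the payload is a minimal Alertmanager-style webhook body, not Ceph's exact schema:

    import json, urllib.request

    body = json.dumps({'alerts': [{'labels': {'alertname': 'Test'}}]}).encode()
    req = urllib.request.Request(
        'http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver',
        data=body, headers={'Content-Type': 'application/json'})
    try:
        urllib.request.urlopen(req, timeout=5)
    except OSError as exc:  # "i/o timeout" / "context deadline exceeded" end up here
        print('webhook unreachable:', exc)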
Feb  2 04:50:49 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v331: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 0 B/s wr, 0 op/s
Feb  2 04:50:50 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[163999]: 02/02/2026 09:50:50 : epoch 69807360 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f99680023d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:50:50 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[163999]: 02/02/2026 09:50:50 : epoch 69807360 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f99680023d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:50:50 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[163999]: 02/02/2026 09:50:50 : epoch 69807360 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f997c0030f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:50:50 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:50:50 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:50:50 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:50:50.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:50:50 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:50:50 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:50:50 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:50:50.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:50:51 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v332: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Feb  2 04:50:51 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
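The mon's _set_new_cache_sizes lines record its periodic cache autotune: inc_alloc and full_alloc are the incremental/full osdmap caches and kv_alloc the RocksDB cache. A quick arithmetic check that the three allocations reported above fit inside the reported cache_size:

    inc, full, kv = 348127232, 348127232, 318767104
    total = inc + full + kv
    print(total, total <= 1020054731, total / 2**20)  # 1015021568 True 968.0 (MiB)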
Feb  2 04:50:52 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[163999]: 02/02/2026 09:50:52 : epoch 69807360 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9970002ee0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:50:52 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[163999]: 02/02/2026 09:50:52 : epoch 69807360 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f99680023d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:50:52 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[163999]: 02/02/2026 09:50:52 : epoch 69807360 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f995c001680 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:50:52 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:50:52 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:50:52 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:50:52.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:50:52 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:50:52 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:50:52 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:50:52.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:50:53 np0005604790 systemd-logind[793]: New session 53 of user zuul.
Feb  2 04:50:53 np0005604790 systemd[1]: Started Session 53 of User zuul.
Feb  2 04:50:53 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v333: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Feb  2 04:50:54 np0005604790 python3.9[166449]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
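This Ansible task gathers only local facts (gather_subset excludes 'all' and 'min'). The equivalent ad-hoc invocation, sketched via subprocess and assuming ansible is on PATH:

    import subprocess

    # No shell involved, so the '!' characters need no quoting.
    subprocess.run(['ansible', 'localhost', '-m', 'ansible.builtin.setup',
                    '-a', 'gather_subset=!all,!min,local'], check=False)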
Feb  2 04:50:54 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[163999]: 02/02/2026 09:50:54 : epoch 69807360 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f997c0030f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:50:54 np0005604790 kernel: ganesha.nfsd[164331]: segfault at 50 ip 00007f9a01a8932e sp 00007f99a4ff8210 error 4 in libntirpc.so.5.8[7f9a01a6e000+2c000] likely on CPU 1 (core 0, socket 1)
Feb  2 04:50:54 np0005604790 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
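The segfault line gives the faulting instruction pointer and, in brackets, the base and size of the libntirpc.so.5.8 mapping, so the offset of the crash within the library can be computed directly (the systemd-coredump record below reports an offset against its own reference point, so the two numbers need not match). Worked arithmetic:

    ip = 0x7f9a01a8932e      # faulting instruction pointer from the log line
    base = 0x7f9a01a6e000    # mapping base from "[7f9a01a6e000+2c000]"
    print(hex(ip - base))    # 0x1b32e -> usable with addr2line/objdump
                             # against libntirpc.so.5.8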
Feb  2 04:50:54 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[163999]: 02/02/2026 09:50:54 : epoch 69807360 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f997c0030f0 fd 38 proxy ignored for local
Feb  2 04:50:54 np0005604790 systemd[1]: Started Process Core Dump (PID 166479/UID 0).
Feb  2 04:50:54 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:09:50:54] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Feb  2 04:50:54 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:09:50:54] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Feb  2 04:50:54 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:50:54 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:50:54 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:50:54.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:50:54 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:50:54 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000028s ======
Feb  2 04:50:54 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:50:54.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb  2 04:50:55 np0005604790 python3.9[166633]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
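The virtlogd check above runs podman ps with an anchored name filter and a Go-template output, no shell involved (_uses_shell=False). The same lookup sketched in Python:

    import subprocess

    out = subprocess.run(['podman', 'ps', '-a', '--filter', 'name=^nova_virtlogd$',
                          '--format', '{{.Names}}'],
                         capture_output=True, text=True).stdout.strip()
    print(out or 'no matching container')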
Feb  2 04:50:55 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v334: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:50:56 np0005604790 systemd-coredump[166480]: Process 164003 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 42:#012#0  0x00007f9a01a8932e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Feb  2 04:50:56 np0005604790 systemd[1]: systemd-coredump@3-166479-0.service: Deactivated successfully.
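systemd-coredump captured the core of PID 164003 (ganesha.nfsd). A sketch of retrieving it afterwards with coredumpctl, assuming the journal entry is still present on the host:

    import subprocess

    subprocess.run(['coredumpctl', 'info', '164003'], check=False)
    subprocess.run(['coredumpctl', 'dump', '164003', '-o', '/tmp/ganesha.core'],
                   check=False)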
Feb  2 04:50:56 np0005604790 podman[166751]: 2026-02-02 09:50:56.431977335 +0000 UTC m=+0.033562117 container died d3bea3246df48c40064c2aaef1ae361eb7bf4aa1a2202476d368eebbac4ad16e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 04:50:56 np0005604790 systemd[1]: var-lib-containers-storage-overlay-a7a02b55ef17c94d4796a0309ea2afd71bc1d97e9d108b06d2bb46b8cea265e9-merged.mount: Deactivated successfully.
Feb  2 04:50:56 np0005604790 podman[166751]: 2026-02-02 09:50:56.470648745 +0000 UTC m=+0.072233487 container remove d3bea3246df48c40064c2aaef1ae361eb7bf4aa1a2202476d368eebbac4ad16e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2)
Feb  2 04:50:56 np0005604790 systemd[1]: ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1@nfs.cephfs.2.0.compute-0.fdwwab.service: Main process exited, code=exited, status=139/n/a
Feb  2 04:50:56 np0005604790 systemd[1]: ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1@nfs.cephfs.2.0.compute-0.fdwwab.service: Failed with result 'exit-code'.
Feb  2 04:50:56 np0005604790 systemd[1]: ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1@nfs.cephfs.2.0.compute-0.fdwwab.service: Consumed 1.462s CPU time.
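status=139 is the conventional 128 + signal-number encoding propagated out of the container: 128 + SIGSEGV(11), consistent with the segfault captured above.

    import signal
    print(128 + signal.SIGSEGV)  # 139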
Feb  2 04:50:56 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:50:56 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:50:56 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:50:56.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:50:56 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:50:56 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000028s ======
Feb  2 04:50:56 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:50:56.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb  2 04:50:56 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:50:57 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:50:57.003Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb  2 04:50:57 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:50:57.003Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb  2 04:50:57 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:50:57.004Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb  2 04:50:57 np0005604790 python3.9[166847]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb  2 04:50:57 np0005604790 systemd[1]: Reloading.
Feb  2 04:50:57 np0005604790 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 04:50:57 np0005604790 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 04:50:57 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v335: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:50:58 np0005604790 python3.9[167034]: ansible-ansible.builtin.service_facts Invoked
Feb  2 04:50:58 np0005604790 network[167051]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb  2 04:50:58 np0005604790 network[167052]: 'network-scripts' will be removed from distribution in near future.
Feb  2 04:50:58 np0005604790 network[167053]: It is advised to switch to 'NetworkManager' instead for network management.
Feb  2 04:50:58 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:50:58 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:50:58 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:50:58.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:50:58 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:50:58 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000028s ======
Feb  2 04:50:58 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:50:58.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb  2 04:50:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:50:58.904Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:50:59 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v336: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Feb  2 04:51:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-nfs-cephfs-compute-0-ooxkuo[97796]: [WARNING] 032/095100 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
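haproxy marks backend server nfs.cephfs.2 DOWN after its Layer4 check is refused, which is expected while the crashed ganesha instance restarts. A sketch of querying server state over haproxy's runtime socket; the socket path is a guess, and cephadm's generated haproxy config may place it elsewhere or not expose one at all:

    import socket

    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.connect('/var/lib/haproxy/admin.sock')  # assumed 'stats socket' path
    s.sendall(b'show servers state\n')
    print(s.recv(65536).decode())
    s.close()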
Feb  2 04:51:00 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:51:00 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000028s ======
Feb  2 04:51:00 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:51:00.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb  2 04:51:00 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:51:00 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000028s ======
Feb  2 04:51:00 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:51:00.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb  2 04:51:01 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v337: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:51:01 np0005604790 python3.9[167317]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 04:51:01 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:51:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 04:51:02 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 04:51:02 np0005604790 podman[167444]: 2026-02-02 09:51:02.419195667 +0000 UTC m=+0.173044613 container health_status e39122d9482da8df802204ab6c35fb7c982874580968140e6f06bdfc8eefae36 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller)
Feb  2 04:51:02 np0005604790 python3.9[167489]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 04:51:02 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:51:02 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000028s ======
Feb  2 04:51:02 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:51:02.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb  2 04:51:02 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:51:02 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:51:02 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:51:02.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:51:03 np0005604790 python3.9[167649]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 04:51:03 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v338: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Feb  2 04:51:04 np0005604790 python3.9[167804]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 04:51:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:09:51:04] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Feb  2 04:51:04 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:09:51:04] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Feb  2 04:51:04 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:51:04 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:51:04 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:51:04.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:51:04 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:51:04 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:51:04 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:51:04.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:51:05 np0005604790 python3.9[167957]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 04:51:05 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v339: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb  2 04:51:06 np0005604790 python3.9[168112]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 04:51:06 np0005604790 systemd[1]: ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1@nfs.cephfs.2.0.compute-0.fdwwab.service: Scheduled restart job, restart counter is at 4.
Feb  2 04:51:06 np0005604790 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.fdwwab for d241d473-9fcb-5f74-b163-f1ca4454e7f1.
Feb  2 04:51:06 np0005604790 systemd[1]: ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1@nfs.cephfs.2.0.compute-0.fdwwab.service: Consumed 1.462s CPU time.
Feb  2 04:51:06 np0005604790 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.fdwwab for d241d473-9fcb-5f74-b163-f1ca4454e7f1...
Feb  2 04:51:06 np0005604790 python3.9[168265]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 04:51:06 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:51:06 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:51:06 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:51:06.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:51:06 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:51:06 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000028s ======
Feb  2 04:51:06 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:51:06.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb  2 04:51:06 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:51:06 np0005604790 podman[168334]: 2026-02-02 09:51:06.936864236 +0000 UTC m=+0.034828562 container create bb5eda3e4adb2f9047efc3c04d606d2e78a24b259829abafc2f6a552fbb4c783 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:51:06 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b3fa628eb6aebb33c1c94ded2b2ad48408a5fd75dad9df3acea8b84c9c59dfc/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Feb  2 04:51:06 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b3fa628eb6aebb33c1c94ded2b2ad48408a5fd75dad9df3acea8b84c9c59dfc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:51:06 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b3fa628eb6aebb33c1c94ded2b2ad48408a5fd75dad9df3acea8b84c9c59dfc/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 04:51:06 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b3fa628eb6aebb33c1c94ded2b2ad48408a5fd75dad9df3acea8b84c9c59dfc/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.fdwwab-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 04:51:06 np0005604790 podman[168334]: 2026-02-02 09:51:06.980529604 +0000 UTC m=+0.078493930 container init bb5eda3e4adb2f9047efc3c04d606d2e78a24b259829abafc2f6a552fbb4c783 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Feb  2 04:51:06 np0005604790 podman[168334]: 2026-02-02 09:51:06.987812287 +0000 UTC m=+0.085776613 container start bb5eda3e4adb2f9047efc3c04d606d2e78a24b259829abafc2f6a552fbb4c783 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 04:51:06 np0005604790 bash[168334]: bb5eda3e4adb2f9047efc3c04d606d2e78a24b259829abafc2f6a552fbb4c783
Feb  2 04:51:06 np0005604790 podman[168334]: 2026-02-02 09:51:06.919918584 +0000 UTC m=+0.017882930 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:51:06 np0005604790 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.fdwwab for d241d473-9fcb-5f74-b163-f1ca4454e7f1.
Feb  2 04:51:07 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[168350]: 02/02/2026 09:51:06 : epoch 6980738a : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Feb  2 04:51:07 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[168350]: 02/02/2026 09:51:06 : epoch 6980738a : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Feb  2 04:51:07 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:51:07.005Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb  2 04:51:07 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:51:07.005Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:51:07 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[168350]: 02/02/2026 09:51:07 : epoch 6980738a : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Feb  2 04:51:07 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[168350]: 02/02/2026 09:51:07 : epoch 6980738a : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Feb  2 04:51:07 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[168350]: 02/02/2026 09:51:07 : epoch 6980738a : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Feb  2 04:51:07 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[168350]: 02/02/2026 09:51:07 : epoch 6980738a : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Feb  2 04:51:07 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[168350]: 02/02/2026 09:51:07 : epoch 6980738a : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Feb  2 04:51:07 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[168350]: 02/02/2026 09:51:07 : epoch 6980738a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 04:51:07 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v340: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb  2 04:51:07 np0005604790 python3.9[168520]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:51:08 np0005604790 python3.9[168673]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:51:08 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:51:08 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:51:08 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:51:08.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:51:08 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:51:08.907Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:51:08 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:51:08 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000028s ======
Feb  2 04:51:08 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:51:08.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb  2 04:51:09 np0005604790 python3.9[168825]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:51:09 np0005604790 python3.9[168977]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:51:09 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v341: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Feb  2 04:51:10 np0005604790 python3.9[169131]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:51:10 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:51:10 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:51:10 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:51:10.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:51:10 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:51:10 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:51:10 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:51:10.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:51:10 np0005604790 python3.9[169283]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:51:11 np0005604790 python3.9[169435]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:51:11 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v342: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Feb  2 04:51:11 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:51:11 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-nfs-cephfs-compute-0-ooxkuo[97796]: [WARNING] 032/095111 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Feb  2 04:51:12 np0005604790 python3.9[169589]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:51:12 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:51:12 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:51:12 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:51:12.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:51:12 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:51:12 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:51:12 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:51:12.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:51:13 np0005604790 python3.9[169741]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:51:13 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[168350]: 02/02/2026 09:51:13 : epoch 6980738a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 04:51:13 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[168350]: 02/02/2026 09:51:13 : epoch 6980738a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 04:51:13 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[168350]: 02/02/2026 09:51:13 : epoch 6980738a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 04:51:13 np0005604790 python3.9[169893]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:51:13 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v343: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Feb  2 04:51:14 np0005604790 podman[170019]: 2026-02-02 09:51:14.058445574 +0000 UTC m=+0.054754818 container health_status 29bea7eb4976451e3dfdb1e9a3aba30b10e64cbce131bf8d09548b4a224f8ee8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Feb  2 04:51:14 np0005604790 python3.9[170064]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:51:14 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:09:51:14] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Feb  2 04:51:14 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:09:51:14] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Feb  2 04:51:14 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:51:14 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:51:14 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:51:14.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:51:14 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:51:14 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000028s ======
Feb  2 04:51:14 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:51:14.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb  2 04:51:15 np0005604790 python3.9[170243]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:51:15 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v344: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Feb  2 04:51:15 np0005604790 python3.9[170396]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:51:16 np0005604790 python3.9[170549]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:51:16 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:51:16 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:51:16 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:51:16.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:51:16 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:51:16 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:51:16 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:51:16 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:51:16.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:51:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:51:17.006Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:51:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] Optimize plan auto_2026-02-02_09:51:17
Feb  2 04:51:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 04:51:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] do_upmap
Feb  2 04:51:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] pools ['volumes', 'backups', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.log', 'cephfs.cephfs.meta', 'images', '.mgr', 'default.rgw.meta', 'default.rgw.control', 'vms', '.nfs']
Feb  2 04:51:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 04:51:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[168350]: 02/02/2026 09:51:17 : epoch 6980738a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 04:51:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[168350]: 02/02/2026 09:51:17 : epoch 6980738a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 04:51:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[168350]: 02/02/2026 09:51:17 : epoch 6980738a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 04:51:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 04:51:17 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 04:51:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:51:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:51:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:51:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:51:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:51:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:51:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 04:51:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 04:51:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 04:51:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 04:51:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:51:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 04:51:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:51:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 04:51:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:51:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 04:51:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:51:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 04:51:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:51:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 04:51:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:51:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Feb  2 04:51:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:51:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 04:51:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:51:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 04:51:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:51:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Feb  2 04:51:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:51:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 04:51:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:51:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 04:51:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:51:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb  2 04:51:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 04:51:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 04:51:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 04:51:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 04:51:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 04:51:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 04:51:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 04:51:17 np0005604790 python3.9[170701]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:51:17 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v345: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Feb  2 04:51:18 np0005604790 python3.9[170855]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Feb  2 04:51:18 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:51:18.908Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:51:18 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:51:18 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000024s ======
Feb  2 04:51:18 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:51:18.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Feb  2 04:51:18 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:51:18 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:51:18 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:51:18.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:51:19 np0005604790 python3.9[171007]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb  2 04:51:19 np0005604790 systemd[1]: Reloading.
Feb  2 04:51:19 np0005604790 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 04:51:19 np0005604790 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 04:51:19 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v346: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 767 B/s wr, 3 op/s
Feb  2 04:51:20 np0005604790 python3.9[171196]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:51:20 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:51:20 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000024s ======
Feb  2 04:51:20 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:51:20.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Feb  2 04:51:20 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:51:20 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000024s ======
Feb  2 04:51:20 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:51:20.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Feb  2 04:51:20 np0005604790 python3.9[171349]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:51:21 np0005604790 python3.9[171502]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:51:21 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v347: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 682 B/s wr, 2 op/s
Feb  2 04:51:21 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:51:22 np0005604790 python3.9[171657]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:51:22 np0005604790 python3.9[171810]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:51:22 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:51:22 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:51:22 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:51:22.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:51:22 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:51:22 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:51:22 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:51:22.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:51:23 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[168350]: 02/02/2026 09:51:23 : epoch 6980738a : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Feb  2 04:51:23 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[168350]: 02/02/2026 09:51:23 : epoch 6980738a : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Feb  2 04:51:23 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[168350]: 02/02/2026 09:51:23 : epoch 6980738a : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Feb  2 04:51:23 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[168350]: 02/02/2026 09:51:23 : epoch 6980738a : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Feb  2 04:51:23 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[168350]: 02/02/2026 09:51:23 : epoch 6980738a : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Feb  2 04:51:23 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[168350]: 02/02/2026 09:51:23 : epoch 6980738a : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Feb  2 04:51:23 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[168350]: 02/02/2026 09:51:23 : epoch 6980738a : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Feb  2 04:51:23 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[168350]: 02/02/2026 09:51:23 : epoch 6980738a : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Feb  2 04:51:23 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[168350]: 02/02/2026 09:51:23 : epoch 6980738a : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Feb  2 04:51:23 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[168350]: 02/02/2026 09:51:23 : epoch 6980738a : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Feb  2 04:51:23 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[168350]: 02/02/2026 09:51:23 : epoch 6980738a : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Feb  2 04:51:23 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[168350]: 02/02/2026 09:51:23 : epoch 6980738a : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Feb  2 04:51:23 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[168350]: 02/02/2026 09:51:23 : epoch 6980738a : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Feb  2 04:51:23 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[168350]: 02/02/2026 09:51:23 : epoch 6980738a : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Feb  2 04:51:23 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[168350]: 02/02/2026 09:51:23 : epoch 6980738a : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Feb  2 04:51:23 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[168350]: 02/02/2026 09:51:23 : epoch 6980738a : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Feb  2 04:51:23 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[168350]: 02/02/2026 09:51:23 : epoch 6980738a : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Feb  2 04:51:23 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[168350]: 02/02/2026 09:51:23 : epoch 6980738a : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Feb  2 04:51:23 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[168350]: 02/02/2026 09:51:23 : epoch 6980738a : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Feb  2 04:51:23 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[168350]: 02/02/2026 09:51:23 : epoch 6980738a : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Feb  2 04:51:23 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[168350]: 02/02/2026 09:51:23 : epoch 6980738a : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Feb  2 04:51:23 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[168350]: 02/02/2026 09:51:23 : epoch 6980738a : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Feb  2 04:51:23 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[168350]: 02/02/2026 09:51:23 : epoch 6980738a : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Feb  2 04:51:23 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[168350]: 02/02/2026 09:51:23 : epoch 6980738a : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Feb  2 04:51:23 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[168350]: 02/02/2026 09:51:23 : epoch 6980738a : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Feb  2 04:51:23 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[168350]: 02/02/2026 09:51:23 : epoch 6980738a : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Feb  2 04:51:23 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[168350]: 02/02/2026 09:51:23 : epoch 6980738a : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Feb  2 04:51:23 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[168350]: 02/02/2026 09:51:23 : epoch 6980738a : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 04:51:23 np0005604790 python3.9[171976]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:51:23 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v348: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Feb  2 04:51:24 np0005604790 python3.9[172131]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:51:24 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[168350]: 02/02/2026 09:51:24 : epoch 6980738a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26fc000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:51:24 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[168350]: 02/02/2026 09:51:24 : epoch 6980738a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26e8000da0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:51:24 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[168350]: 02/02/2026 09:51:24 : epoch 6980738a : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26d0000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:51:24 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:09:51:24] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Feb  2 04:51:24 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:09:51:24] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Feb  2 04:51:24 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:51:24 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:51:24 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:51:24.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:51:24 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:51:24 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:51:24 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:51:24.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:51:25 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v349: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 682 B/s wr, 2 op/s
Feb  2 04:51:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[168350]: 02/02/2026 09:51:26 : epoch 6980738a : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 04:51:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[168350]: 02/02/2026 09:51:26 : epoch 6980738a : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 04:51:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[168350]: 02/02/2026 09:51:26 : epoch 6980738a : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26c8000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:51:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-nfs-cephfs-compute-0-ooxkuo[97796]: [WARNING] 032/095126 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Feb  2 04:51:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[168350]: 02/02/2026 09:51:26 : epoch 6980738a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26d8000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:51:26 np0005604790 python3.9[172289]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Feb  2 04:51:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[168350]: 02/02/2026 09:51:26 : epoch 6980738a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26e8001ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:51:26 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:51:26 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:51:26 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:51:26.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:51:26 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:51:26 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:51:26 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:51:26 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:51:26.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:51:27 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:51:27.007Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb  2 04:51:27 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:51:27.008Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb  2 04:51:27 np0005604790 python3.9[172442]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Feb  2 04:51:27 np0005604790 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb  2 04:51:27 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v350: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 682 B/s wr, 2 op/s
Feb  2 04:51:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[168350]: 02/02/2026 09:51:28 : epoch 6980738a : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26d00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:51:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[168350]: 02/02/2026 09:51:28 : epoch 6980738a : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26c80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:51:28 np0005604790 python3.9[172603]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Feb  2 04:51:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[168350]: 02/02/2026 09:51:28 : epoch 6980738a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26d8001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:51:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:51:28.909Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:51:28 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:51:28 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000024s ======
Feb  2 04:51:28 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:51:28.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Feb  2 04:51:28 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:51:28 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:51:28 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:51:28.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:51:29 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[168350]: 02/02/2026 09:51:29 : epoch 6980738a : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Feb  2 04:51:29 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v351: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 1.3 KiB/s wr, 4 op/s
Feb  2 04:51:29 np0005604790 python3.9[172763]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb  2 04:51:30 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[168350]: 02/02/2026 09:51:30 : epoch 6980738a : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26d8001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:51:30 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[168350]: 02/02/2026 09:51:30 : epoch 6980738a : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26d00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:51:30 np0005604790 python3.9[172849]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  2 04:51:30 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[168350]: 02/02/2026 09:51:30 : epoch 6980738a : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26c80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:51:30 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:51:30 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:51:30 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:51:30.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:51:30 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:51:30 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000024s ======
Feb  2 04:51:30 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:51:30.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Feb  2 04:51:31 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v352: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Feb  2 04:51:31 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:51:31 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-nfs-cephfs-compute-0-ooxkuo[97796]: [WARNING] 032/095131 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Feb  2 04:51:32 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 04:51:32 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 04:51:32 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[168350]: 02/02/2026 09:51:32 : epoch 6980738a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26d8001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:51:32 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[168350]: 02/02/2026 09:51:32 : epoch 6980738a : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26d8001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:51:32 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[168350]: 02/02/2026 09:51:32 : epoch 6980738a : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26d00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:51:32 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:51:32 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000023s ======
Feb  2 04:51:32 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:51:32.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Feb  2 04:51:32 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:51:32 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000024s ======
Feb  2 04:51:32 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:51:32.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Feb  2 04:51:33 np0005604790 podman[172863]: 2026-02-02 09:51:33.395723926 +0000 UTC m=+0.109566839 container health_status e39122d9482da8df802204ab6c35fb7c982874580968140e6f06bdfc8eefae36 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb  2 04:51:33 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v353: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Feb  2 04:51:34 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[168350]: 02/02/2026 09:51:34 : epoch 6980738a : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26c80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:51:34 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[168350]: 02/02/2026 09:51:34 : epoch 6980738a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26c80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:51:34 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:09:51:34] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Feb  2 04:51:34 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:09:51:34] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Feb  2 04:51:34 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[168350]: 02/02/2026 09:51:34 : epoch 6980738a : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26e80023f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:51:34 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:51:34 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:51:34 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:51:34.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:51:34 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:51:34 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000024s ======
Feb  2 04:51:34 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:51:34.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Feb  2 04:51:35 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v354: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 682 B/s wr, 2 op/s
Feb  2 04:51:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[168350]: 02/02/2026 09:51:36 : epoch 6980738a : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26d0002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:51:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[168350]: 02/02/2026 09:51:36 : epoch 6980738a : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26c80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:51:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[168350]: 02/02/2026 09:51:36 : epoch 6980738a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26d8003340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:51:36 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:51:36 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:51:36 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:51:36 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:51:36.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:51:36 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:51:36 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000024s ======
Feb  2 04:51:36 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:51:36.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Feb  2 04:51:37 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:51:37.009Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:51:37 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v355: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 682 B/s wr, 2 op/s
Feb  2 04:51:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[168350]: 02/02/2026 09:51:38 : epoch 6980738a : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26d8003340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:51:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[168350]: 02/02/2026 09:51:38 : epoch 6980738a : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26d0002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:51:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[168350]: 02/02/2026 09:51:38 : epoch 6980738a : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26c80032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:51:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:51:38.911Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:51:38 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:51:38 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000024s ======
Feb  2 04:51:38 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:51:38.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Feb  2 04:51:38 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123befd5d0 =====
Feb  2 04:51:38 np0005604790 radosgw[89254]: ====== req done req=0x7f123befd5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:51:38 np0005604790 radosgw[89254]: beast: 0x7f123befd5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:51:38.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:51:39 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v356: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 682 B/s wr, 2 op/s
Feb  2 04:51:40 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[168350]: 02/02/2026 09:51:40 : epoch 6980738a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26d8003340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:51:40 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[168350]: 02/02/2026 09:51:40 : epoch 6980738a : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26d8003340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:51:40 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[168350]: 02/02/2026 09:51:40 : epoch 6980738a : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26d0002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:51:40 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:51:40 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123befd5d0 =====
Feb  2 04:51:40 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:51:40 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:51:40.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:51:40 np0005604790 radosgw[89254]: ====== req done req=0x7f123befd5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:51:40 np0005604790 radosgw[89254]: beast: 0x7f123befd5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:51:40.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:51:41 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v357: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Feb  2 04:51:41 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
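The monitor re-derives its cache split every few seconds and logs the result in raw bytes. The same values from the line above, converted to MiB for readability:

    # Byte counts copied from the _set_new_cache_sizes line above.
    sizes = {
        "cache_size": 1020054731,
        "inc_alloc":  348127232,
        "full_alloc": 348127232,
        "kv_alloc":   318767104,
    }
    for name, nbytes in sizes.items():
        print(f"{name}: {nbytes / 2**20:.0f} MiB")
    # cache_size: 973 MiB, inc_alloc/full_alloc: 332 MiB, kv_alloc: 304 MiB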
Feb  2 04:51:42 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[168350]: 02/02/2026 09:51:42 : epoch 6980738a : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26c80032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:51:42 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[168350]: 02/02/2026 09:51:42 : epoch 6980738a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f26d8003340 fd 38 proxy ignored for local
Feb  2 04:51:42 np0005604790 kernel: ganesha.nfsd[171861]: segfault at 50 ip 00007f277f18532e sp 00007f2708ff8210 error 4 in libntirpc.so.5.8[7f277f16a000+2c000] likely on CPU 5 (core 0, socket 5)
Feb  2 04:51:42 np0005604790 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
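The faulting instruction pointer falls inside the executable mapping the kernel prints for libntirpc.so.5.8, so the crash offset can be worked out from the two addresses alone. A worked check with the values copied from the line above; comparing it against the module-relative offset systemd-coredump reports a few lines below (0x2232e) involves an inference about segment layout, not something the log states:

    # Values copied from the kernel segfault line above.
    ip        = 0x7f277f18532e   # faulting instruction pointer
    text_base = 0x7f277f16a000   # base of the executable mapping
    text_len  = 0x2c000          # length of that mapping

    offset_in_text = ip - text_base
    assert offset_in_text < text_len
    print(hex(offset_in_text))            # 0x1b32e

    # systemd-coredump later reports libntirpc.so.5.8 + 0x2232e, measured
    # from the module's first mapping; the 0x7000 delta would match
    # non-executable segments mapped ahead of .text (an assumption).
    print(hex(0x2232e - offset_in_text))  # 0x7000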
Feb  2 04:51:42 np0005604790 systemd[1]: Started Process Core Dump (PID 173073/UID 0).
Feb  2 04:51:42 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:51:42 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:51:42 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:51:42.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:51:42 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:51:42 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.008000191s ======
Feb  2 04:51:42 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:51:42.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.008000191s
Feb  2 04:51:43 np0005604790 systemd-coredump[173074]: Process 168354 (ganesha.nfsd) of user 0 dumped core.
    Stack trace of thread 45:
    #0  0x00007f277f18532e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
    ELF object binary architecture: AMD x86-64
Feb  2 04:51:43 np0005604790 systemd[1]: systemd-coredump@4-173073-0.service: Deactivated successfully.
Feb  2 04:51:43 np0005604790 systemd[1]: systemd-coredump@4-173073-0.service: Consumed 1.001s CPU time.
Feb  2 04:51:43 np0005604790 podman[173104]: 2026-02-02 09:51:43.652127454 +0000 UTC m=+0.041823666 container died bb5eda3e4adb2f9047efc3c04d606d2e78a24b259829abafc2f6a552fbb4c783 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 04:51:43 np0005604790 systemd[1]: var-lib-containers-storage-overlay-3b3fa628eb6aebb33c1c94ded2b2ad48408a5fd75dad9df3acea8b84c9c59dfc-merged.mount: Deactivated successfully.
Feb  2 04:51:43 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v358: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Feb  2 04:51:43 np0005604790 podman[173104]: 2026-02-02 09:51:43.716653019 +0000 UTC m=+0.106349181 container remove bb5eda3e4adb2f9047efc3c04d606d2e78a24b259829abafc2f6a552fbb4c783 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid)
Feb  2 04:51:43 np0005604790 systemd[1]: ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1@nfs.cephfs.2.0.compute-0.fdwwab.service: Main process exited, code=exited, status=139/n/a
Feb  2 04:51:43 np0005604790 systemd[1]: ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1@nfs.cephfs.2.0.compute-0.fdwwab.service: Failed with result 'exit-code'.
Feb  2 04:51:43 np0005604790 systemd[1]: ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1@nfs.cephfs.2.0.compute-0.fdwwab.service: Consumed 1.148s CPU time.
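status=139 uses the shell convention of 128 plus the signal number, which ties this unit failure back to the segfault at 09:51:42. A two-line check:

    import signal

    # 139 from the "Main process exited, code=exited, status=139" line.
    status = 139
    print(status - 128, signal.Signals(status - 128).name)  # 11 SIGSEGV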
Feb  2 04:51:44 np0005604790 podman[173151]: 2026-02-02 09:51:44.352407248 +0000 UTC m=+0.066262718 container health_status 29bea7eb4976451e3dfdb1e9a3aba30b10e64cbce131bf8d09548b4a224f8ee8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent)
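The config_data blob embedded in the healthcheck record above is a Python literal (single quotes, bare True), not JSON, so json.loads() would reject it. A sketch that reads a trimmed copy of it back with ast.literal_eval:

    import ast

    # Trimmed copy of the config_data fields from the record above.
    config_data = ast.literal_eval(
        "{'cgroupns': 'host', 'net': 'host', 'pid': 'host', "
        "'privileged': True, 'restart': 'always', "
        "'healthcheck': {'mount': "
        "'/var/lib/openstack/healthchecks/ovn_metadata_agent', "
        "'test': '/openstack/healthcheck'}}"
    )
    print(config_data["healthcheck"]["test"])   # /openstack/healthcheck
    print(config_data["privileged"])            # True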
Feb  2 04:51:44 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:09:51:44] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Feb  2 04:51:44 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:09:51:44] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Feb  2 04:51:44 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:51:44 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:51:44 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:51:44.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:51:44 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:51:44 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:51:44 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:51:44.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:51:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:51:45.361 165364 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 04:51:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:51:45.362 165364 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 04:51:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:51:45.362 165364 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 04:51:45 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v359: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:51:46 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:51:46 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:51:46 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000024s ======
Feb  2 04:51:46 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:51:46.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Feb  2 04:51:46 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:51:46 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:51:46 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:51:46.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:51:47 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:51:47.010Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb  2 04:51:47 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:51:47.011Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb  2 04:51:47 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:51:47.011Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
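The dashboard receivers have moved from "context deadline exceeded" to hard dial timeouts, i.e. nothing is accepting connections on 8443 on compute-1 or compute-2. A minimal reachability probe against the two endpoints named in the log, assuming the hostnames resolve from wherever it runs:

    import socket

    # Endpoints taken from the alertmanager error lines above.
    RECEIVERS = [
        ("compute-1.ctlplane.example.com", 8443),
        ("compute-2.ctlplane.example.com", 8443),
    ]

    for host, port in RECEIVERS:
        try:
            with socket.create_connection((host, port), timeout=3):
                print(f"{host}:{port} reachable")
        except OSError as exc:
            print(f"{host}:{port} failed: {exc}")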
Feb  2 04:51:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 04:51:47 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 04:51:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:51:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:51:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:51:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:51:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:51:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:51:47 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v360: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:51:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 04:51:47 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 04:51:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 04:51:47 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb  2 04:51:47 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v361: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 304 B/s rd, 0 op/s
Feb  2 04:51:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 04:51:47 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:51:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Feb  2 04:51:47 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:51:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 04:51:47 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb  2 04:51:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 04:51:47 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb  2 04:51:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 04:51:47 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 04:51:47 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb  2 04:51:47 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:51:47 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:51:47 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
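This burst of audit entries is the active mgr (the cephadm module) running its housekeeping pass after the NFS daemon died: listing the OSD blocklist, regenerating a minimal conf, fetching keyrings, and rewriting its config-key state. For reference, a sketch that issues the same read-only command through the librados Python binding; it assumes python3-rados plus a local ceph.conf and admin keyring:

    import json
    import rados

    # Same read-only command the mgr dispatches above.
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    try:
        cmd = json.dumps({"prefix": "osd blocklist ls", "format": "json"})
        ret, outbuf, errs = cluster.mon_command(cmd, b"")
        print(ret, errs)
        print(json.loads(outbuf or b"[]"))
    finally:
        cluster.shutdown()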
Feb  2 04:51:48 np0005604790 podman[173354]: 2026-02-02 09:51:48.331162875 +0000 UTC m=+0.056033854 container create 36c8dd18b7d2445d417b9c003019a83aaf08a2bb3d40f38a7017a0642ec476ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_darwin, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb  2 04:51:48 np0005604790 systemd[1]: Started libpod-conmon-36c8dd18b7d2445d417b9c003019a83aaf08a2bb3d40f38a7017a0642ec476ed.scope.
Feb  2 04:51:48 np0005604790 podman[173354]: 2026-02-02 09:51:48.297818472 +0000 UTC m=+0.022689491 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:51:48 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:51:48 np0005604790 podman[173354]: 2026-02-02 09:51:48.423585304 +0000 UTC m=+0.148456303 container init 36c8dd18b7d2445d417b9c003019a83aaf08a2bb3d40f38a7017a0642ec476ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_darwin, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:51:48 np0005604790 podman[173354]: 2026-02-02 09:51:48.431042612 +0000 UTC m=+0.155913581 container start 36c8dd18b7d2445d417b9c003019a83aaf08a2bb3d40f38a7017a0642ec476ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_darwin, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:51:48 np0005604790 podman[173354]: 2026-02-02 09:51:48.437087186 +0000 UTC m=+0.161958275 container attach 36c8dd18b7d2445d417b9c003019a83aaf08a2bb3d40f38a7017a0642ec476ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_darwin, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Feb  2 04:51:48 np0005604790 beautiful_darwin[173371]: 167 167
Feb  2 04:51:48 np0005604790 systemd[1]: libpod-36c8dd18b7d2445d417b9c003019a83aaf08a2bb3d40f38a7017a0642ec476ed.scope: Deactivated successfully.
Feb  2 04:51:48 np0005604790 podman[173354]: 2026-02-02 09:51:48.440263391 +0000 UTC m=+0.165134380 container died 36c8dd18b7d2445d417b9c003019a83aaf08a2bb3d40f38a7017a0642ec476ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_darwin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb  2 04:51:48 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-nfs-cephfs-compute-0-ooxkuo[97796]: [WARNING] 032/095148 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Feb  2 04:51:48 np0005604790 systemd[1]: var-lib-containers-storage-overlay-3d6b0b32ef6493d3cb662d8ad4c85afb6003f3bdb1a2b6b74ec50f31cba744a8-merged.mount: Deactivated successfully.
Feb  2 04:51:48 np0005604790 podman[173354]: 2026-02-02 09:51:48.488350856 +0000 UTC m=+0.213221815 container remove 36c8dd18b7d2445d417b9c003019a83aaf08a2bb3d40f38a7017a0642ec476ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_darwin, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Feb  2 04:51:48 np0005604790 systemd[1]: libpod-conmon-36c8dd18b7d2445d417b9c003019a83aaf08a2bb3d40f38a7017a0642ec476ed.scope: Deactivated successfully.
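beautiful_darwin lives for about 110 ms: create, init, start, attach, died, remove. The bare "167 167" it prints appears to be a uid/gid probe cephadm runs inside the image before launching real daemons, though that reading is an inference. A sketch that groups the podman lifecycle events in a saved copy of this log by container ID ("messages" is again a placeholder):

    import re
    from collections import defaultdict

    # Match podman's "container <event> <64-hex-id>" records; keep the
    # first 12 hex digits as the short ID.
    EVENT = re.compile(
        r"container (?P<what>create|init|start|attach|died|remove) "
        r"(?P<cid>[0-9a-f]{12})"
    )

    timeline = defaultdict(list)
    with open("messages", errors="replace") as log:
        for line in log:
            m = EVENT.search(line)
            if m:
                timeline[m["cid"]].append(m["what"])

    for cid, events in timeline.items():
        print(cid, "->", " ".join(events))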
Feb  2 04:51:48 np0005604790 podman[173396]: 2026-02-02 09:51:48.653599548 +0000 UTC m=+0.066180236 container create 8716a5d43d6f7ee2590b329530e08376d4c628ffcd9712aa7013a8a5530d18a4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_hopper, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:51:48 np0005604790 podman[173396]: 2026-02-02 09:51:48.618561014 +0000 UTC m=+0.031141782 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:51:48 np0005604790 systemd[1]: Started libpod-conmon-8716a5d43d6f7ee2590b329530e08376d4c628ffcd9712aa7013a8a5530d18a4.scope.
Feb  2 04:51:48 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:51:48 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f03f3172e7c9b9d14580b91bf1811d1f3cacbc743ed6988ebbd5ea674e1b32d3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 04:51:48 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f03f3172e7c9b9d14580b91bf1811d1f3cacbc743ed6988ebbd5ea674e1b32d3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:51:48 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f03f3172e7c9b9d14580b91bf1811d1f3cacbc743ed6988ebbd5ea674e1b32d3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:51:48 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f03f3172e7c9b9d14580b91bf1811d1f3cacbc743ed6988ebbd5ea674e1b32d3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 04:51:48 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f03f3172e7c9b9d14580b91bf1811d1f3cacbc743ed6988ebbd5ea674e1b32d3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 04:51:48 np0005604790 podman[173396]: 2026-02-02 09:51:48.762848518 +0000 UTC m=+0.175429276 container init 8716a5d43d6f7ee2590b329530e08376d4c628ffcd9712aa7013a8a5530d18a4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_hopper, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb  2 04:51:48 np0005604790 podman[173396]: 2026-02-02 09:51:48.773701416 +0000 UTC m=+0.186282134 container start 8716a5d43d6f7ee2590b329530e08376d4c628ffcd9712aa7013a8a5530d18a4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_hopper, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Feb  2 04:51:48 np0005604790 podman[173396]: 2026-02-02 09:51:48.7826987 +0000 UTC m=+0.195279508 container attach 8716a5d43d6f7ee2590b329530e08376d4c628ffcd9712aa7013a8a5530d18a4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_hopper, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 04:51:48 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:51:48.913Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb  2 04:51:48 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:51:48.913Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb  2 04:51:48 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:51:48.914Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb  2 04:51:48 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:51:48 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000024s ======
Feb  2 04:51:48 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:51:48.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Feb  2 04:51:48 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:51:48 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:51:48 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:51:48.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:51:49 np0005604790 frosty_hopper[173412]: --> passed data devices: 0 physical, 1 LVM
Feb  2 04:51:49 np0005604790 frosty_hopper[173412]: --> All data devices are unavailable
Feb  2 04:51:49 np0005604790 systemd[1]: libpod-8716a5d43d6f7ee2590b329530e08376d4c628ffcd9712aa7013a8a5530d18a4.scope: Deactivated successfully.
Feb  2 04:51:49 np0005604790 podman[173396]: 2026-02-02 09:51:49.13105758 +0000 UTC m=+0.543638268 container died 8716a5d43d6f7ee2590b329530e08376d4c628ffcd9712aa7013a8a5530d18a4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 04:51:49 np0005604790 systemd[1]: var-lib-containers-storage-overlay-f03f3172e7c9b9d14580b91bf1811d1f3cacbc743ed6988ebbd5ea674e1b32d3-merged.mount: Deactivated successfully.
Feb  2 04:51:49 np0005604790 podman[173396]: 2026-02-02 09:51:49.217325572 +0000 UTC m=+0.629906290 container remove 8716a5d43d6f7ee2590b329530e08376d4c628ffcd9712aa7013a8a5530d18a4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_hopper, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:51:49 np0005604790 systemd[1]: libpod-conmon-8716a5d43d6f7ee2590b329530e08376d4c628ffcd9712aa7013a8a5530d18a4.scope: Deactivated successfully.
Feb  2 04:51:49 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v362: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 304 B/s rd, 0 op/s
Feb  2 04:51:49 np0005604790 podman[173530]: 2026-02-02 09:51:49.832262575 +0000 UTC m=+0.065807776 container create 677df08104876b0f3564c16a757ca9679afc57249b9a42c6a66c53b4778f878a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:51:49 np0005604790 systemd[1]: Started libpod-conmon-677df08104876b0f3564c16a757ca9679afc57249b9a42c6a66c53b4778f878a.scope.
Feb  2 04:51:49 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:51:49 np0005604790 podman[173530]: 2026-02-02 09:51:49.803164203 +0000 UTC m=+0.036709474 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:51:49 np0005604790 podman[173530]: 2026-02-02 09:51:49.903787627 +0000 UTC m=+0.137332918 container init 677df08104876b0f3564c16a757ca9679afc57249b9a42c6a66c53b4778f878a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_hofstadter, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Feb  2 04:51:49 np0005604790 podman[173530]: 2026-02-02 09:51:49.910573359 +0000 UTC m=+0.144118590 container start 677df08104876b0f3564c16a757ca9679afc57249b9a42c6a66c53b4778f878a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default)
Feb  2 04:51:49 np0005604790 podman[173530]: 2026-02-02 09:51:49.915018124 +0000 UTC m=+0.148563355 container attach 677df08104876b0f3564c16a757ca9679afc57249b9a42c6a66c53b4778f878a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb  2 04:51:49 np0005604790 heuristic_hofstadter[173548]: 167 167
Feb  2 04:51:49 np0005604790 systemd[1]: libpod-677df08104876b0f3564c16a757ca9679afc57249b9a42c6a66c53b4778f878a.scope: Deactivated successfully.
Feb  2 04:51:49 np0005604790 podman[173530]: 2026-02-02 09:51:49.917963854 +0000 UTC m=+0.151509085 container died 677df08104876b0f3564c16a757ca9679afc57249b9a42c6a66c53b4778f878a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_hofstadter, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb  2 04:51:49 np0005604790 systemd[1]: var-lib-containers-storage-overlay-b5c87d39592785b6a62c69f035d9fcc763ff1bb1df708066e874dc943569d7c1-merged.mount: Deactivated successfully.
Feb  2 04:51:49 np0005604790 podman[173530]: 2026-02-02 09:51:49.962171186 +0000 UTC m=+0.195716417 container remove 677df08104876b0f3564c16a757ca9679afc57249b9a42c6a66c53b4778f878a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_hofstadter, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Feb  2 04:51:49 np0005604790 systemd[1]: libpod-conmon-677df08104876b0f3564c16a757ca9679afc57249b9a42c6a66c53b4778f878a.scope: Deactivated successfully.
Feb  2 04:51:50 np0005604790 podman[173573]: 2026-02-02 09:51:50.138973143 +0000 UTC m=+0.060748327 container create f6d54fed0ae73a43bf83c2a528c882b1a209a82b324750b0b6a77a6af58b7127 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_lichterman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:51:50 np0005604790 systemd[1]: Started libpod-conmon-f6d54fed0ae73a43bf83c2a528c882b1a209a82b324750b0b6a77a6af58b7127.scope.
Feb  2 04:51:50 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:51:50 np0005604790 podman[173573]: 2026-02-02 09:51:50.113999009 +0000 UTC m=+0.035774273 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:51:50 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bd431bdc6f74e20a9c9f2d43857e1bf25a2cba2f2379bafd9bb1c14d7809256/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 04:51:50 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bd431bdc6f74e20a9c9f2d43857e1bf25a2cba2f2379bafd9bb1c14d7809256/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:51:50 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bd431bdc6f74e20a9c9f2d43857e1bf25a2cba2f2379bafd9bb1c14d7809256/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:51:50 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bd431bdc6f74e20a9c9f2d43857e1bf25a2cba2f2379bafd9bb1c14d7809256/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 04:51:50 np0005604790 podman[173573]: 2026-02-02 09:51:50.224333794 +0000 UTC m=+0.146108988 container init f6d54fed0ae73a43bf83c2a528c882b1a209a82b324750b0b6a77a6af58b7127 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_lichterman, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 04:51:50 np0005604790 podman[173573]: 2026-02-02 09:51:50.239171327 +0000 UTC m=+0.160946541 container start f6d54fed0ae73a43bf83c2a528c882b1a209a82b324750b0b6a77a6af58b7127 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_lichterman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb  2 04:51:50 np0005604790 podman[173573]: 2026-02-02 09:51:50.262689197 +0000 UTC m=+0.184464431 container attach f6d54fed0ae73a43bf83c2a528c882b1a209a82b324750b0b6a77a6af58b7127 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_lichterman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid)
Feb  2 04:51:50 np0005604790 upbeat_lichterman[173590]: {
Feb  2 04:51:50 np0005604790 upbeat_lichterman[173590]:    "1": [
Feb  2 04:51:50 np0005604790 upbeat_lichterman[173590]:        {
Feb  2 04:51:50 np0005604790 upbeat_lichterman[173590]:            "devices": [
Feb  2 04:51:50 np0005604790 upbeat_lichterman[173590]:                "/dev/loop3"
Feb  2 04:51:50 np0005604790 upbeat_lichterman[173590]:            ],
Feb  2 04:51:50 np0005604790 upbeat_lichterman[173590]:            "lv_name": "ceph_lv0",
Feb  2 04:51:50 np0005604790 upbeat_lichterman[173590]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 04:51:50 np0005604790 upbeat_lichterman[173590]:            "lv_size": "21470642176",
Feb  2 04:51:50 np0005604790 upbeat_lichterman[173590]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d241d473-9fcb-5f74-b163-f1ca4454e7f1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fabfc705-a3af-416c-81a4-3fd4d777fb5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 04:51:50 np0005604790 upbeat_lichterman[173590]:            "lv_uuid": "lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a",
Feb  2 04:51:50 np0005604790 upbeat_lichterman[173590]:            "name": "ceph_lv0",
Feb  2 04:51:50 np0005604790 upbeat_lichterman[173590]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 04:51:50 np0005604790 upbeat_lichterman[173590]:            "tags": {
Feb  2 04:51:50 np0005604790 upbeat_lichterman[173590]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 04:51:50 np0005604790 upbeat_lichterman[173590]:                "ceph.block_uuid": "lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a",
Feb  2 04:51:50 np0005604790 upbeat_lichterman[173590]:                "ceph.cephx_lockbox_secret": "",
Feb  2 04:51:50 np0005604790 upbeat_lichterman[173590]:                "ceph.cluster_fsid": "d241d473-9fcb-5f74-b163-f1ca4454e7f1",
Feb  2 04:51:50 np0005604790 upbeat_lichterman[173590]:                "ceph.cluster_name": "ceph",
Feb  2 04:51:50 np0005604790 upbeat_lichterman[173590]:                "ceph.crush_device_class": "",
Feb  2 04:51:50 np0005604790 upbeat_lichterman[173590]:                "ceph.encrypted": "0",
Feb  2 04:51:50 np0005604790 upbeat_lichterman[173590]:                "ceph.osd_fsid": "fabfc705-a3af-416c-81a4-3fd4d777fb5f",
Feb  2 04:51:50 np0005604790 upbeat_lichterman[173590]:                "ceph.osd_id": "1",
Feb  2 04:51:50 np0005604790 upbeat_lichterman[173590]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 04:51:50 np0005604790 upbeat_lichterman[173590]:                "ceph.type": "block",
Feb  2 04:51:50 np0005604790 upbeat_lichterman[173590]:                "ceph.vdo": "0",
Feb  2 04:51:50 np0005604790 upbeat_lichterman[173590]:                "ceph.with_tpm": "0"
Feb  2 04:51:50 np0005604790 upbeat_lichterman[173590]:            },
Feb  2 04:51:50 np0005604790 upbeat_lichterman[173590]:            "type": "block",
Feb  2 04:51:50 np0005604790 upbeat_lichterman[173590]:            "vg_name": "ceph_vg0"
Feb  2 04:51:50 np0005604790 upbeat_lichterman[173590]:        }
Feb  2 04:51:50 np0005604790 upbeat_lichterman[173590]:    ]
Feb  2 04:51:50 np0005604790 upbeat_lichterman[173590]: }
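The JSON the container prints above has the shape of ceph-volume lvm list --format json: a map of OSD id to its logical volumes. A parsing sketch over a condensed copy of the same payload, pulling out the fields cephadm reports:

    import json

    # Condensed copy of the payload printed by the container above.
    raw = """
    {
      "1": [
        {
          "devices": ["/dev/loop3"],
          "lv_path": "/dev/ceph_vg0/ceph_lv0",
          "lv_size": "21470642176",
          "tags": {"ceph.osd_fsid": "fabfc705-a3af-416c-81a4-3fd4d777fb5f",
                   "ceph.osd_id": "1", "ceph.type": "block"}
        }
      ]
    }
    """

    for osd_id, lvs in json.loads(raw).items():
        for lv in lvs:
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"on {','.join(lv['devices'])}, "
                  f"{int(lv['lv_size']) / 2**30:.1f} GiB, "
                  f"fsid {lv['tags']['ceph.osd_fsid']}")
    # osd.1: /dev/ceph_vg0/ceph_lv0 on /dev/loop3, 20.0 GiB, fsid fabfc705-...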
Feb  2 04:51:50 np0005604790 systemd[1]: libpod-f6d54fed0ae73a43bf83c2a528c882b1a209a82b324750b0b6a77a6af58b7127.scope: Deactivated successfully.
Feb  2 04:51:50 np0005604790 podman[173573]: 2026-02-02 09:51:50.554556132 +0000 UTC m=+0.476331386 container died f6d54fed0ae73a43bf83c2a528c882b1a209a82b324750b0b6a77a6af58b7127 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_lichterman, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True)
Feb  2 04:51:50 np0005604790 systemd[1]: var-lib-containers-storage-overlay-9bd431bdc6f74e20a9c9f2d43857e1bf25a2cba2f2379bafd9bb1c14d7809256-merged.mount: Deactivated successfully.
Feb  2 04:51:50 np0005604790 podman[173573]: 2026-02-02 09:51:50.621166507 +0000 UTC m=+0.542941731 container remove f6d54fed0ae73a43bf83c2a528c882b1a209a82b324750b0b6a77a6af58b7127 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_lichterman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 04:51:50 np0005604790 systemd[1]: libpod-conmon-f6d54fed0ae73a43bf83c2a528c882b1a209a82b324750b0b6a77a6af58b7127.scope: Deactivated successfully.
Feb  2 04:51:50 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:51:50 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000023s ======
Feb  2 04:51:50 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:51:50.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Feb  2 04:51:50 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:51:50 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:51:50 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:51:50.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:51:51 np0005604790 podman[173704]: 2026-02-02 09:51:51.238525528 +0000 UTC m=+0.051616159 container create 82941a3f10412e183fa87d0bbfe4abfd8966b7d71302d73bc1ae9e06a9ef7895 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_elion, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Feb  2 04:51:51 np0005604790 systemd[1]: Started libpod-conmon-82941a3f10412e183fa87d0bbfe4abfd8966b7d71302d73bc1ae9e06a9ef7895.scope.
Feb  2 04:51:51 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:51:51 np0005604790 podman[173704]: 2026-02-02 09:51:51.219591447 +0000 UTC m=+0.032682098 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:51:51 np0005604790 podman[173704]: 2026-02-02 09:51:51.319151866 +0000 UTC m=+0.132242517 container init 82941a3f10412e183fa87d0bbfe4abfd8966b7d71302d73bc1ae9e06a9ef7895 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_elion, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:51:51 np0005604790 podman[173704]: 2026-02-02 09:51:51.328089209 +0000 UTC m=+0.141179850 container start 82941a3f10412e183fa87d0bbfe4abfd8966b7d71302d73bc1ae9e06a9ef7895 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_elion, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:51:51 np0005604790 infallible_elion[173720]: 167 167
Feb  2 04:51:51 np0005604790 podman[173704]: 2026-02-02 09:51:51.332893433 +0000 UTC m=+0.145984084 container attach 82941a3f10412e183fa87d0bbfe4abfd8966b7d71302d73bc1ae9e06a9ef7895 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_elion, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb  2 04:51:51 np0005604790 systemd[1]: libpod-82941a3f10412e183fa87d0bbfe4abfd8966b7d71302d73bc1ae9e06a9ef7895.scope: Deactivated successfully.
Feb  2 04:51:51 np0005604790 podman[173704]: 2026-02-02 09:51:51.333675832 +0000 UTC m=+0.146766473 container died 82941a3f10412e183fa87d0bbfe4abfd8966b7d71302d73bc1ae9e06a9ef7895 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_elion, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 04:51:51 np0005604790 systemd[1]: var-lib-containers-storage-overlay-011656101b27f24063ffb337069e92579e5efc9be62d361af38125d95eac0504-merged.mount: Deactivated successfully.
Feb  2 04:51:51 np0005604790 podman[173704]: 2026-02-02 09:51:51.378913128 +0000 UTC m=+0.192003759 container remove 82941a3f10412e183fa87d0bbfe4abfd8966b7d71302d73bc1ae9e06a9ef7895 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_elion, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Feb  2 04:51:51 np0005604790 systemd[1]: libpod-conmon-82941a3f10412e183fa87d0bbfe4abfd8966b7d71302d73bc1ae9e06a9ef7895.scope: Deactivated successfully.
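
The create/init/start/attach/died/remove sequences above (containers upbeat_lichterman, infallible_elion, pedantic_hawking, all from the same ceph image) look like cephadm running short one-shot commands inside the image; the "167 167" printed by infallible_elion matches the uid/gid of the ceph user in these containers, which cephadm probes the image for, though that reading is an inference from the output, not confirmed here. The m=+N.NNN field in each podman event is the monotonic offset since that podman process started, so pairing events by container ID gives a lifetime within one invocation. A sketch, with the pattern fitted to this log:

    import re

    # Pair podman 'container create' / 'container remove' events by container
    # ID using the m=+<seconds> monotonic offset.
    EVENT = re.compile(
        r'm=\+(?P<t>[\d.]+) container (?P<ev>\w+) (?P<cid>[0-9a-f]{64})'
    )

    def lifetimes(lines):
        created = {}
        for line in lines:
            m = EVENT.search(line)
            if not m:
                continue
            if m['ev'] == 'create':
                created[m['cid']] = float(m['t'])
            elif m['ev'] == 'remove' and m['cid'] in created:
                yield m['cid'][:12], float(m['t']) - created.pop(m['cid'])

    # For infallible_elion above: create at m=+0.0516, remove at m=+0.1920,
    # so the container lived roughly 0.14 s within that podman invocation.
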
Feb  2 04:51:51 np0005604790 podman[173744]: 2026-02-02 09:51:51.56218586 +0000 UTC m=+0.055067352 container create 6048673c6dfcf570d17414937bae233f011013671f740996ffff9bc9216d5620 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_hawking, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:51:51 np0005604790 systemd[1]: Started libpod-conmon-6048673c6dfcf570d17414937bae233f011013671f740996ffff9bc9216d5620.scope.
Feb  2 04:51:51 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:51:51 np0005604790 podman[173744]: 2026-02-02 09:51:51.536920008 +0000 UTC m=+0.029801510 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:51:51 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cc3550d2b3f8f69d94ad5469000a3179826889867b2602f4e8690fa5d9e751a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 04:51:51 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cc3550d2b3f8f69d94ad5469000a3179826889867b2602f4e8690fa5d9e751a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:51:51 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cc3550d2b3f8f69d94ad5469000a3179826889867b2602f4e8690fa5d9e751a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:51:51 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cc3550d2b3f8f69d94ad5469000a3179826889867b2602f4e8690fa5d9e751a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
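
The four xfs "supports timestamps until 2038 (0x7fffffff)" lines are the kernel noting, once per bind mount handed to the new container, that these xfs filesystems use the older inode timestamp format capped at the 32-bit epoch limit; they are informational, not errors. The cutoff encoded by 0x7fffffff:

    from datetime import datetime, timezone

    # 0x7fffffff is the largest 32-bit signed time_t, i.e. the y2038 limit
    # the kernel is warning about for these xfs filesystems.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00
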
Feb  2 04:51:51 np0005604790 podman[173744]: 2026-02-02 09:51:51.66223013 +0000 UTC m=+0.155111622 container init 6048673c6dfcf570d17414937bae233f011013671f740996ffff9bc9216d5620 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_hawking, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 04:51:51 np0005604790 podman[173744]: 2026-02-02 09:51:51.670343773 +0000 UTC m=+0.163225255 container start 6048673c6dfcf570d17414937bae233f011013671f740996ffff9bc9216d5620 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_hawking, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb  2 04:51:51 np0005604790 podman[173744]: 2026-02-02 09:51:51.674566934 +0000 UTC m=+0.167448476 container attach 6048673c6dfcf570d17414937bae233f011013671f740996ffff9bc9216d5620 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_hawking, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Feb  2 04:51:51 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v363: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 305 B/s rd, 0 op/s
Feb  2 04:51:51 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
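
The recurring mon line "_set_new_cache_sizes cache_size:1020054731 ..." is the monitor's periodic cache autotuning report. The raw byte counts read more easily in MiB; this is a straight unit conversion, with no interpretation of the tuning policy implied:

    # Convert the _set_new_cache_sizes byte counts from the mon log into MiB.
    for name, value in [('cache_size', 1020054731),
                        ('inc_alloc', 348127232),
                        ('full_alloc', 348127232),
                        ('kv_alloc', 318767104)]:
        print(f'{name:>10}: {value / 2**20:8.1f} MiB')
    # cache_size ~972.8 MiB; inc_alloc/full_alloc 332.0 MiB; kv_alloc 304.0 MiB
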
Feb  2 04:51:52 np0005604790 lvm[173836]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 04:51:52 np0005604790 lvm[173836]: VG ceph_vg0 finished
Feb  2 04:51:52 np0005604790 pedantic_hawking[173761]: {}
Feb  2 04:51:52 np0005604790 systemd[1]: libpod-6048673c6dfcf570d17414937bae233f011013671f740996ffff9bc9216d5620.scope: Deactivated successfully.
Feb  2 04:51:52 np0005604790 podman[173744]: 2026-02-02 09:51:52.365343832 +0000 UTC m=+0.858225294 container died 6048673c6dfcf570d17414937bae233f011013671f740996ffff9bc9216d5620 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_hawking, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 04:51:52 np0005604790 systemd[1]: libpod-6048673c6dfcf570d17414937bae233f011013671f740996ffff9bc9216d5620.scope: Consumed 1.014s CPU time.
Feb  2 04:51:52 np0005604790 systemd[1]: var-lib-containers-storage-overlay-6cc3550d2b3f8f69d94ad5469000a3179826889867b2602f4e8690fa5d9e751a-merged.mount: Deactivated successfully.
Feb  2 04:51:52 np0005604790 podman[173744]: 2026-02-02 09:51:52.409115003 +0000 UTC m=+0.901996485 container remove 6048673c6dfcf570d17414937bae233f011013671f740996ffff9bc9216d5620 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_hawking, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Feb  2 04:51:52 np0005604790 systemd[1]: libpod-conmon-6048673c6dfcf570d17414937bae233f011013671f740996ffff9bc9216d5620.scope: Deactivated successfully.
Feb  2 04:51:52 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 04:51:52 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:51:52 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 04:51:52 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:51:52 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:51:52 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:51:52 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:51:52.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:51:52 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:51:52 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:51:52 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:51:52.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:51:52 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:51:52 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:51:53 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v364: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 101 B/s rd, 0 op/s
Feb  2 04:51:53 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-nfs-cephfs-compute-0-ooxkuo[97796]: [WARNING] 032/095153 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
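
The haproxy warning above marks backend server nfs.cephfs.1 DOWN after a failed Layer4 (TCP connect) check; matching UP transitions for nfs.cephfs.2 and then nfs.cephfs.1 appear further down once the restarted ganesha daemons accept connections again. A sketch that tracks the active-server count from these transition lines (regex fitted to this log's haproxy message format):

    import re

    STATE = re.compile(
        r'Server (?P<srv>\S+) is (?P<state>UP|DOWN).*?'
        r'(?P<active>\d+) active and (?P<backup>\d+) backup'
    )

    def transitions(lines):
        for line in lines:
            m = STATE.search(line)
            if m:
                yield m['srv'], m['state'], int(m['active'])

    # For this section: ('backend/nfs.cephfs.1', 'DOWN', 1), then
    # ('backend/nfs.cephfs.2', 'UP', 2) and ('backend/nfs.cephfs.1', 'UP', 3).
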
Feb  2 04:51:54 np0005604790 systemd[1]: ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1@nfs.cephfs.2.0.compute-0.fdwwab.service: Scheduled restart job, restart counter is at 5.
Feb  2 04:51:54 np0005604790 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.fdwwab for d241d473-9fcb-5f74-b163-f1ca4454e7f1.
Feb  2 04:51:54 np0005604790 systemd[1]: ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1@nfs.cephfs.2.0.compute-0.fdwwab.service: Consumed 1.148s CPU time.
Feb  2 04:51:54 np0005604790 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.fdwwab for d241d473-9fcb-5f74-b163-f1ca4454e7f1...
Feb  2 04:51:54 np0005604790 podman[173925]: 2026-02-02 09:51:54.254529206 +0000 UTC m=+0.049337695 container create 14fb2c69d1b5a9f2686b83e2173abcbf6b0493d0994e530d9085db8214991226 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 04:51:54 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb2c5e895eccd89268503e010fd4e6b64282c0da4734e24cf312b6ef560b1e21/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Feb  2 04:51:54 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb2c5e895eccd89268503e010fd4e6b64282c0da4734e24cf312b6ef560b1e21/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:51:54 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb2c5e895eccd89268503e010fd4e6b64282c0da4734e24cf312b6ef560b1e21/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 04:51:54 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb2c5e895eccd89268503e010fd4e6b64282c0da4734e24cf312b6ef560b1e21/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.fdwwab-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 04:51:54 np0005604790 podman[173925]: 2026-02-02 09:51:54.313005087 +0000 UTC m=+0.107813606 container init 14fb2c69d1b5a9f2686b83e2173abcbf6b0493d0994e530d9085db8214991226 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb  2 04:51:54 np0005604790 podman[173925]: 2026-02-02 09:51:54.23117047 +0000 UTC m=+0.025979029 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:51:54 np0005604790 podman[173925]: 2026-02-02 09:51:54.325770931 +0000 UTC m=+0.120579420 container start 14fb2c69d1b5a9f2686b83e2173abcbf6b0493d0994e530d9085db8214991226 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb  2 04:51:54 np0005604790 bash[173925]: 14fb2c69d1b5a9f2686b83e2173abcbf6b0493d0994e530d9085db8214991226
Feb  2 04:51:54 np0005604790 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.fdwwab for d241d473-9fcb-5f74-b163-f1ca4454e7f1.
Feb  2 04:51:54 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[173941]: 02/02/2026 09:51:54 : epoch 698073ba : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Feb  2 04:51:54 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[173941]: 02/02/2026 09:51:54 : epoch 698073ba : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Feb  2 04:51:54 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[173941]: 02/02/2026 09:51:54 : epoch 698073ba : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Feb  2 04:51:54 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[173941]: 02/02/2026 09:51:54 : epoch 698073ba : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Feb  2 04:51:54 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[173941]: 02/02/2026 09:51:54 : epoch 698073ba : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Feb  2 04:51:54 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[173941]: 02/02/2026 09:51:54 : epoch 698073ba : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Feb  2 04:51:54 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[173941]: 02/02/2026 09:51:54 : epoch 698073ba : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Feb  2 04:51:54 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[173941]: 02/02/2026 09:51:54 : epoch 698073ba : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
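
"NFS Server Now IN GRACE, duration 90" marks the start of the NFSv4 grace window in which the restarted ganesha accepts only reclaim operations; in this log it is lifted well before the nominal 90 s ("NFS Server Now NOT IN GRACE" further down) once the backend reports no clients with state to reclaim (clid count(0)). A rough timing extraction, with the pattern fitted to these ganesha lines:

    import re
    from datetime import datetime

    # Ganesha log timestamps look like '02/02/2026 09:51:54'; compute how long
    # each grace window actually lasted before it was lifted.
    TS = re.compile(r'(\d{2}/\d{2}/\d{4} \d{2}:\d{2}:\d{2}).*?'
                    r'(Now IN GRACE|Now NOT IN GRACE)')

    def grace_windows(lines):
        entered = None
        for line in lines:
            m = TS.search(line)
            if not m:
                continue
            t = datetime.strptime(m.group(1), '%m/%d/%Y %H:%M:%S')
            if m.group(2) == 'Now IN GRACE':
                entered = t
            elif entered is not None:
                yield (t - entered).total_seconds()
                entered = None

    # Here: entered 09:51:54, lifted 09:52:06 (12 s), then a second window
    # from 09:52:06 to 09:52:12 (6 s) started by the reaper thread.
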
Feb  2 04:51:54 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:09:51:54] "GET /metrics HTTP/1.1" 200 48341 "" "Prometheus/2.51.0"
Feb  2 04:51:54 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:09:51:54] "GET /metrics HTTP/1.1" 200 48341 "" "Prometheus/2.51.0"
Feb  2 04:51:54 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:51:54 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:51:54 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:51:54.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:51:54 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:51:54 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:51:54 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:51:54.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:51:55 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v365: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 101 B/s rd, 0 op/s
Feb  2 04:51:56 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:51:56 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:51:56 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000024s ======
Feb  2 04:51:56 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:51:56.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Feb  2 04:51:56 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:51:56 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:51:56 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:51:56.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:51:57 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:51:57.012Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
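
The alertmanager line above is Go logfmt: space-separated key=value pairs with quoted values; the error shows the ceph-dashboard webhook receivers on compute-1 and compute-2 timing out. A tiny splitter adequate for flat lines like these (shown on a shortened version of the line; it is not a general logfmt parser and does not handle the escaped quotes inside the full err value):

    import shlex

    line = ('ts=2026-02-02T09:51:57.012Z caller=dispatch.go:352 level=error '
            'component=dispatcher msg="Notify for alerts failed" num_alerts=1')

    # shlex honors the double quotes, so quoted values survive the split.
    fields = dict(tok.split('=', 1) for tok in shlex.split(line) if '=' in tok)
    print(fields['level'], fields['msg'])   # -> error Notify for alerts failed
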
Feb  2 04:51:57 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v366: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 101 B/s rd, 0 op/s
Feb  2 04:51:57 np0005604790 kernel: SELinux:  Converting 2785 SID table entries...
Feb  2 04:51:57 np0005604790 kernel: SELinux:  policy capability network_peer_controls=1
Feb  2 04:51:57 np0005604790 kernel: SELinux:  policy capability open_perms=1
Feb  2 04:51:57 np0005604790 kernel: SELinux:  policy capability extended_socket_class=1
Feb  2 04:51:57 np0005604790 kernel: SELinux:  policy capability always_check_network=0
Feb  2 04:51:57 np0005604790 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb  2 04:51:57 np0005604790 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb  2 04:51:57 np0005604790 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
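
The eight kernel lines above are the standard banner printed whenever a new SELinux policy is loaded: the SID table conversion plus the policy-capability summary. The identical block reappears ten seconds later, and each load also surfaces as a dbus-broker-launch "avc: op=load_policy" line, so something on this host is reloading policy modules around this time, plausibly container tooling, though the trigger is not identified in the log. Collecting the capability flags from such a banner:

    import re

    CAP = re.compile(r'SELinux:\s+policy capability (\w+)=(\d)')

    def capabilities(lines):
        # Collect the capability flags from a policy-load banner like the one above.
        return {m.group(1): bool(int(m.group(2)))
                for line in lines for m in [CAP.search(line)] if m}

    # For the banner above: network_peer_controls=True, open_perms=True, ...,
    # always_check_network=False.
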
Feb  2 04:51:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:51:58.915Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:51:58 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:51:58 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:51:58 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:51:58.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:51:58 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:51:58 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000024s ======
Feb  2 04:51:58 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:51:58.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Feb  2 04:51:59 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v367: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 426 B/s wr, 1 op/s
Feb  2 04:52:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[173941]: 02/02/2026 09:52:00 : epoch 698073ba : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 04:52:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[173941]: 02/02/2026 09:52:00 : epoch 698073ba : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 04:52:00 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:52:00 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000023s ======
Feb  2 04:52:00 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:52:00.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Feb  2 04:52:00 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:52:00 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:52:00 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:52:00.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:52:01 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v368: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 426 B/s wr, 1 op/s
Feb  2 04:52:01 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:52:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 04:52:02 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 04:52:02 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:52:02 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:52:02 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:52:02.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:52:02 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:52:02 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:52:02 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:52:02.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:52:03 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v369: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Feb  2 04:52:04 np0005604790 dbus-broker-launch[780]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Feb  2 04:52:04 np0005604790 podman[174026]: 2026-02-02 09:52:04.428547453 +0000 UTC m=+0.132511814 container health_status e39122d9482da8df802204ab6c35fb7c982874580968140e6f06bdfc8eefae36 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
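
The podman "container health_status" events come from the container's configured healthcheck; the config_data embedded in the line shows the test command '/openstack/healthcheck' bind-mounted from the host. health_status=healthy with health_failing_streak=0 means the periodic probe is passing. Extracting just those fields from the very long event line (field order as it appears in these samples):

    import re

    HEALTH = re.compile(
        r'container health_status .*?name=(?P<name>[^,]+), '
        r'health_status=(?P<status>[^,]+), '
        r'health_failing_streak=(?P<streak>\d+)'
    )

    def health_events(lines):
        for line in lines:
            m = HEALTH.search(line)
            if m:
                yield m['name'], m['status'], int(m['streak'])

    # Here: ('ovn_controller', 'healthy', 0) and, further down,
    # ('ovn_metadata_agent', 'healthy', 0).
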
Feb  2 04:52:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:09:52:04] "GET /metrics HTTP/1.1" 200 48348 "" "Prometheus/2.51.0"
Feb  2 04:52:04 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:09:52:04] "GET /metrics HTTP/1.1" 200 48348 "" "Prometheus/2.51.0"
Feb  2 04:52:04 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:52:04 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:52:04 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:52:04.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:52:04 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:52:04 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:52:04 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:52:04.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:52:05 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v370: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Feb  2 04:52:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[173941]: 02/02/2026 09:52:06 : epoch 698073ba : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Feb  2 04:52:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[173941]: 02/02/2026 09:52:06 : epoch 698073ba : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Feb  2 04:52:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[173941]: 02/02/2026 09:52:06 : epoch 698073ba : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Feb  2 04:52:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[173941]: 02/02/2026 09:52:06 : epoch 698073ba : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Feb  2 04:52:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[173941]: 02/02/2026 09:52:06 : epoch 698073ba : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Feb  2 04:52:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[173941]: 02/02/2026 09:52:06 : epoch 698073ba : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Feb  2 04:52:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[173941]: 02/02/2026 09:52:06 : epoch 698073ba : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Feb  2 04:52:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[173941]: 02/02/2026 09:52:06 : epoch 698073ba : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Feb  2 04:52:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[173941]: 02/02/2026 09:52:06 : epoch 698073ba : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Feb  2 04:52:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[173941]: 02/02/2026 09:52:06 : epoch 698073ba : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Feb  2 04:52:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[173941]: 02/02/2026 09:52:06 : epoch 698073ba : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Feb  2 04:52:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[173941]: 02/02/2026 09:52:06 : epoch 698073ba : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Feb  2 04:52:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[173941]: 02/02/2026 09:52:06 : epoch 698073ba : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Feb  2 04:52:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[173941]: 02/02/2026 09:52:06 : epoch 698073ba : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Feb  2 04:52:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[173941]: 02/02/2026 09:52:06 : epoch 698073ba : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Feb  2 04:52:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[173941]: 02/02/2026 09:52:06 : epoch 698073ba : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Feb  2 04:52:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[173941]: 02/02/2026 09:52:06 : epoch 698073ba : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Feb  2 04:52:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[173941]: 02/02/2026 09:52:06 : epoch 698073ba : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Feb  2 04:52:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[173941]: 02/02/2026 09:52:06 : epoch 698073ba : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Feb  2 04:52:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[173941]: 02/02/2026 09:52:06 : epoch 698073ba : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Feb  2 04:52:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[173941]: 02/02/2026 09:52:06 : epoch 698073ba : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Feb  2 04:52:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[173941]: 02/02/2026 09:52:06 : epoch 698073ba : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Feb  2 04:52:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[173941]: 02/02/2026 09:52:06 : epoch 698073ba : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Feb  2 04:52:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[173941]: 02/02/2026 09:52:06 : epoch 698073ba : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Feb  2 04:52:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[173941]: 02/02/2026 09:52:06 : epoch 698073ba : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Feb  2 04:52:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[173941]: 02/02/2026 09:52:06 : epoch 698073ba : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Feb  2 04:52:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[173941]: 02/02/2026 09:52:06 : epoch 698073ba : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Feb  2 04:52:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[173941]: 02/02/2026 09:52:06 : epoch 698073ba : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 04:52:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[173941]: 02/02/2026 09:52:06 : epoch 698073ba : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f54f4000df0 fd 42 proxy header rest len failed header rlen = % (will set dead)
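
The "svc_vc_recv ... proxy header rest len failed ... (will set dead)" events recur about every two seconds for the rest of the section. They are consistent with haproxy's Layer4 checks opening a TCP connection to ganesha's NFS port and closing it without sending the PROXY-protocol header the listener expects, so each probe connection is marked dead and dropped; the dangling "%" at the end of the message appears to be a broken format specifier in the daemon's own log call, reproduced verbatim above. Both readings are inferences from the cadence and wording, not confirmed from source. Tallying the noise per ganesha service thread:

    import re
    from collections import Counter

    THREAD = re.compile(r'ganesha\.nfsd-\d+\[(svc_\d+)\] rpc :TIRPC .*proxy header')

    def probe_noise(lines):
        # Count 'proxy header rest len failed' events by ganesha svc_ thread.
        return Counter(m.group(1) for line in lines
                       for m in [THREAD.search(line)] if m)
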
Feb  2 04:52:06 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:52:06 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:52:06 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:52:06 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:52:06.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:52:06 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:52:06 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000024s ======
Feb  2 04:52:06 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:52:06.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Feb  2 04:52:07 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:52:07.014Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb  2 04:52:07 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:52:07.014Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb  2 04:52:07 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:52:07.015Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb  2 04:52:07 np0005604790 kernel: SELinux:  Converting 2785 SID table entries...
Feb  2 04:52:07 np0005604790 kernel: SELinux:  policy capability network_peer_controls=1
Feb  2 04:52:07 np0005604790 kernel: SELinux:  policy capability open_perms=1
Feb  2 04:52:07 np0005604790 kernel: SELinux:  policy capability extended_socket_class=1
Feb  2 04:52:07 np0005604790 kernel: SELinux:  policy capability always_check_network=0
Feb  2 04:52:07 np0005604790 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb  2 04:52:07 np0005604790 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb  2 04:52:07 np0005604790 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb  2 04:52:07 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v371: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Feb  2 04:52:08 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[173941]: 02/02/2026 09:52:08 : epoch 698073ba : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f54ec001c00 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:52:08 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[173941]: 02/02/2026 09:52:08 : epoch 698073ba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f54dc000b60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:52:08 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[173941]: 02/02/2026 09:52:08 : epoch 698073ba : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f54d0000b60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:52:08 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:52:08.918Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:52:08 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:52:08 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000024s ======
Feb  2 04:52:08 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:52:08.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Feb  2 04:52:08 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:52:08 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:52:08 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:52:08.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:52:09 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[173941]: 02/02/2026 09:52:09 : epoch 698073ba : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 04:52:09 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[173941]: 02/02/2026 09:52:09 : epoch 698073ba : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 04:52:09 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v372: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 1.5 KiB/s wr, 4 op/s
Feb  2 04:52:10 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[173941]: 02/02/2026 09:52:10 : epoch 698073ba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f54d0000b60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:52:10 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-nfs-cephfs-compute-0-ooxkuo[97796]: [WARNING] 032/095210 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Feb  2 04:52:10 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[173941]: 02/02/2026 09:52:10 : epoch 698073ba : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f54d8000fa0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:52:10 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[173941]: 02/02/2026 09:52:10 : epoch 698073ba : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f54f4000df0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:52:10 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:52:10 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:52:10 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:52:10.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:52:10 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:52:10 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:52:10 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:52:10.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:52:11 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v373: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Feb  2 04:52:11 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:52:12 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[173941]: 02/02/2026 09:52:12 : epoch 698073ba : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f54ec001c00 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:52:12 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[173941]: 02/02/2026 09:52:12 : epoch 698073ba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f54d0001b40 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:52:12 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[173941]: 02/02/2026 09:52:12 : epoch 698073ba : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Feb  2 04:52:12 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[173941]: 02/02/2026 09:52:12 : epoch 698073ba : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f54d8001ac0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:52:12 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:52:12 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000024s ======
Feb  2 04:52:12 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:52:12.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Feb  2 04:52:12 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:52:12 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:52:12 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:52:12.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:52:13 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v374: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 1.3 KiB/s wr, 4 op/s
Feb  2 04:52:14 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[173941]: 02/02/2026 09:52:14 : epoch 698073ba : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f54f40021f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:52:14 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[173941]: 02/02/2026 09:52:14 : epoch 698073ba : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f54ec001c00 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:52:14 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:09:52:14] "GET /metrics HTTP/1.1" 200 48348 "" "Prometheus/2.51.0"
Feb  2 04:52:14 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:09:52:14] "GET /metrics HTTP/1.1" 200 48348 "" "Prometheus/2.51.0"
Feb  2 04:52:14 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[173941]: 02/02/2026 09:52:14 : epoch 698073ba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f54d0001b40 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:52:14 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:52:14 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:52:14 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:52:14.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:52:15 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:52:15 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:52:15 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:52:15.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:52:15 np0005604790 dbus-broker-launch[780]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Feb  2 04:52:15 np0005604790 podman[174109]: 2026-02-02 09:52:15.161451971 +0000 UTC m=+0.068057451 container health_status 29bea7eb4976451e3dfdb1e9a3aba30b10e64cbce131bf8d09548b4a224f8ee8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, tcib_managed=true)
Feb  2 04:52:15 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v375: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 852 B/s wr, 3 op/s
Feb  2 04:52:15 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-nfs-cephfs-compute-0-ooxkuo[97796]: [WARNING] 032/095215 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Feb  2 04:52:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[173941]: 02/02/2026 09:52:16 : epoch 698073ba : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f54d8001ac0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:52:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[173941]: 02/02/2026 09:52:16 : epoch 698073ba : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f54f40021f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:52:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[173941]: 02/02/2026 09:52:16 : epoch 698073ba : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f54ec001c00 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:52:16 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:52:16 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:52:16 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000024s ======
Feb  2 04:52:16 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:52:16.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Feb  2 04:52:17 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:52:17 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:52:17 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:52:17.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:52:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:52:17.016Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb  2 04:52:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:52:17.017Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:52:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] Optimize plan auto_2026-02-02_09:52:17
Feb  2 04:52:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 04:52:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] do_upmap
Feb  2 04:52:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.data', 'vms', 'volumes', 'backups', 'cephfs.cephfs.meta', '.mgr', '.nfs', 'default.rgw.log', 'images', '.rgw.root', 'default.rgw.meta']
Feb  2 04:52:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 04:52:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 04:52:17 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 04:52:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:52:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:52:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:52:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:52:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:52:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:52:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 04:52:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 04:52:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 04:52:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 04:52:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 04:52:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 04:52:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 04:52:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 04:52:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 04:52:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:52:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 04:52:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:52:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 04:52:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:52:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 04:52:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:52:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 04:52:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:52:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 04:52:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:52:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Feb  2 04:52:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:52:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 04:52:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:52:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 04:52:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:52:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Feb  2 04:52:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:52:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 04:52:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:52:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 04:52:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:52:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb  2 04:52:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 04:52:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 04:52:17 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v376: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 852 B/s wr, 3 op/s
Feb  2 04:52:18 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[173941]: 02/02/2026 09:52:18 : epoch 698073ba : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f54d0001b40 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:52:18 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[173941]: 02/02/2026 09:52:18 : epoch 698073ba : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f54d8001ac0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:52:18 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[173941]: 02/02/2026 09:52:18 : epoch 698073ba : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f54f40021f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:52:18 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:52:18.919Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:52:18 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:52:18 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000023s ======
Feb  2 04:52:18 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:52:18.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Feb  2 04:52:19 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:52:19 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:52:19 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:52:19.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:52:19 np0005604790 ceph-mgr[74785]: [devicehealth INFO root] Check health
Feb  2 04:52:19 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v377: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 852 B/s wr, 3 op/s
Feb  2 04:52:20 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[173941]: 02/02/2026 09:52:20 : epoch 698073ba : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f54ec001c00 fd 42 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:52:20 np0005604790 kernel: ganesha.nfsd[174058]: segfault at 50 ip 00007f557d42c32e sp 00007f54f3ffe210 error 4 in libntirpc.so.5.8[7f557d411000+2c000] likely on CPU 3 (core 0, socket 3)
Feb  2 04:52:20 np0005604790 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Feb  2 04:52:20 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[173941]: 02/02/2026 09:52:20 : epoch 698073ba : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f54ec001c00 fd 42 proxy ignored for local
Feb  2 04:52:20 np0005604790 systemd[1]: Started Process Core Dump (PID 175703/UID 0).
Feb  2 04:52:20 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:52:20 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:52:20 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:52:20.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:52:21 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:52:21 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:52:21 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:52:21.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:52:21 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v378: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Feb  2 04:52:21 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:52:22 np0005604790 systemd-coredump[175718]: Process 173945 (ganesha.nfsd) of user 0 dumped core.
Feb  2 04:52:22 np0005604790 systemd-coredump[175718]: Stack trace of thread 43:
Feb  2 04:52:22 np0005604790 systemd-coredump[175718]: #0  0x00007f557d42c32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
Feb  2 04:52:22 np0005604790 systemd-coredump[175718]: ELF object binary architecture: AMD x86-64
Feb  2 04:52:22 np0005604790 systemd[1]: systemd-coredump@5-175703-0.service: Deactivated successfully.
Feb  2 04:52:22 np0005604790 podman[177179]: 2026-02-02 09:52:22.79952749 +0000 UTC m=+0.033829838 container died 14fb2c69d1b5a9f2686b83e2173abcbf6b0493d0994e530d9085db8214991226 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb  2 04:52:22 np0005604790 systemd[1]: var-lib-containers-storage-overlay-bb2c5e895eccd89268503e010fd4e6b64282c0da4734e24cf312b6ef560b1e21-merged.mount: Deactivated successfully.
Feb  2 04:52:22 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:52:22 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:52:22 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:52:22.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:52:23 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:52:23 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:52:23 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:52:23.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:52:23 np0005604790 podman[177179]: 2026-02-02 09:52:23.180963687 +0000 UTC m=+0.415266035 container remove 14fb2c69d1b5a9f2686b83e2173abcbf6b0493d0994e530d9085db8214991226 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:52:23 np0005604790 systemd[1]: ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1@nfs.cephfs.2.0.compute-0.fdwwab.service: Main process exited, code=exited, status=139/n/a
Feb  2 04:52:23 np0005604790 systemd[1]: ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1@nfs.cephfs.2.0.compute-0.fdwwab.service: Failed with result 'exit-code'.
Feb  2 04:52:23 np0005604790 systemd[1]: ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1@nfs.cephfs.2.0.compute-0.fdwwab.service: Consumed 1.142s CPU time.
Feb  2 04:52:23 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v379: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 255 B/s wr, 1 op/s
Feb  2 04:52:24 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:09:52:24] "GET /metrics HTTP/1.1" 200 48349 "" "Prometheus/2.51.0"
Feb  2 04:52:24 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:09:52:24] "GET /metrics HTTP/1.1" 200 48349 "" "Prometheus/2.51.0"
Feb  2 04:52:24 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:52:24 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:52:24 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:52:24.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:52:25 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:52:25 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:52:25 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:52:25.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:52:25 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v380: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb  2 04:52:26 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:52:26 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:52:26 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:52:26 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:52:26.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:52:27 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:52:27.018Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:52:27 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:52:27 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:52:27 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:52:27.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:52:27 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v381: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb  2 04:52:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-nfs-cephfs-compute-0-ooxkuo[97796]: [WARNING] 032/095228 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Feb  2 04:52:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:52:28.920Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:52:28 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:52:28 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:52:28 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:52:28.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:52:29 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:52:29 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:52:29 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:52:29.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:52:29 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v382: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb  2 04:52:30 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:52:30 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:52:30 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:52:30.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:52:31 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:52:31 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:52:31 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:52:31.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:52:31 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v383: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb  2 04:52:31 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:52:32 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 04:52:32 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 04:52:32 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:52:32 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:52:32 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:52:32.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:52:33 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:52:33 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:52:33 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:52:33.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:52:33 np0005604790 systemd[1]: ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1@nfs.cephfs.2.0.compute-0.fdwwab.service: Scheduled restart job, restart counter is at 6.
Feb  2 04:52:33 np0005604790 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.fdwwab for d241d473-9fcb-5f74-b163-f1ca4454e7f1.
Feb  2 04:52:33 np0005604790 systemd[1]: ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1@nfs.cephfs.2.0.compute-0.fdwwab.service: Consumed 1.142s CPU time.
Feb  2 04:52:33 np0005604790 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.fdwwab for d241d473-9fcb-5f74-b163-f1ca4454e7f1...
Feb  2 04:52:33 np0005604790 podman[184871]: 2026-02-02 09:52:33.772382046 +0000 UTC m=+0.065941729 container create cc8d385bb31dce460e139764d02c23dfa99c7a4c5f0300ca3fe5ba342f5c2554 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:52:33 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v384: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Feb  2 04:52:33 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba5b69d8f050d445292e323c6dae9377f6008544ddbb523f2540dc13e3e23a40/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Feb  2 04:52:33 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba5b69d8f050d445292e323c6dae9377f6008544ddbb523f2540dc13e3e23a40/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:52:33 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba5b69d8f050d445292e323c6dae9377f6008544ddbb523f2540dc13e3e23a40/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 04:52:33 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba5b69d8f050d445292e323c6dae9377f6008544ddbb523f2540dc13e3e23a40/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.fdwwab-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 04:52:33 np0005604790 podman[184871]: 2026-02-02 09:52:33.831007558 +0000 UTC m=+0.124567221 container init cc8d385bb31dce460e139764d02c23dfa99c7a4c5f0300ca3fe5ba342f5c2554 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid)
Feb  2 04:52:33 np0005604790 podman[184871]: 2026-02-02 09:52:33.743579774 +0000 UTC m=+0.037139517 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:52:33 np0005604790 podman[184871]: 2026-02-02 09:52:33.83927825 +0000 UTC m=+0.132837913 container start cc8d385bb31dce460e139764d02c23dfa99c7a4c5f0300ca3fe5ba342f5c2554 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb  2 04:52:33 np0005604790 bash[184871]: cc8d385bb31dce460e139764d02c23dfa99c7a4c5f0300ca3fe5ba342f5c2554
Feb  2 04:52:33 np0005604790 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.fdwwab for d241d473-9fcb-5f74-b163-f1ca4454e7f1.
Feb  2 04:52:33 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:52:33 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Feb  2 04:52:33 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:52:33 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Feb  2 04:52:33 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:52:33 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Feb  2 04:52:33 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:52:33 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Feb  2 04:52:33 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:52:33 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Feb  2 04:52:33 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:52:33 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Feb  2 04:52:33 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:52:33 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Feb  2 04:52:33 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:52:33 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 04:52:34 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:09:52:34] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Feb  2 04:52:34 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:09:52:34] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Feb  2 04:52:35 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:52:35 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:52:35 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:52:34.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:52:35 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:52:35 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:52:35 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:52:35.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:52:35 np0005604790 podman[186019]: 2026-02-02 09:52:35.367357288 +0000 UTC m=+0.148374539 container health_status e39122d9482da8df802204ab6c35fb7c982874580968140e6f06bdfc8eefae36 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb  2 04:52:35 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v385: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb  2 04:52:36 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:52:37 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:52:37 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:52:37 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:52:37.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:52:37 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:52:37.019Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb  2 04:52:37 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:52:37.019Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:52:37 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:52:37 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:52:37 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:52:37.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:52:37 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v386: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb  2 04:52:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:52:38.921Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:52:39 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:52:39 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:52:39 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:52:39.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:52:39 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:52:39 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:52:39 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:52:39.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:52:39 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[104673]: logger=sqlstore.transactions t=2026-02-02T09:52:39.481956922Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
Feb  2 04:52:39 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[104673]: logger=sqlstore.transactions t=2026-02-02T09:52:39.511764621Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=1 code="database is locked"
Feb  2 04:52:39 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[104673]: logger=cleanup t=2026-02-02T09:52:39.524174794Z level=info msg="Completed cleanup jobs" duration=56.645869ms
Feb  2 04:52:39 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[104673]: logger=grafana.update.checker t=2026-02-02T09:52:39.592521056Z level=info msg="Update check succeeded" duration=50.524384ms
Feb  2 04:52:39 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[104673]: logger=plugins.update.checker t=2026-02-02T09:52:39.59826509Z level=info msg="Update check succeeded" duration=53.256018ms
Feb  2 04:52:39 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v387: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Feb  2 04:52:39 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:52:39 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 04:52:39 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:52:39 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 04:52:41 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:52:41 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:52:41 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:52:41.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:52:41 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:52:41 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:52:41 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:52:41.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:52:41 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v388: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Feb  2 04:52:41 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:52:43 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:52:43 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:52:43 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:52:43.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:52:43 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:52:43 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:52:43 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:52:43.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:52:43 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v389: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Feb  2 04:52:44 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:09:52:44] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Feb  2 04:52:44 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:09:52:44] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Feb  2 04:52:45 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:52:45 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:52:45 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:52:45.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:52:45 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:52:45 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:52:45 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:52:45.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:52:45 np0005604790 podman[191235]: 2026-02-02 09:52:45.34456016 +0000 UTC m=+0.064214013 container health_status 29bea7eb4976451e3dfdb1e9a3aba30b10e64cbce131bf8d09548b4a224f8ee8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:52:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:52:45.362 165364 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 04:52:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:52:45.363 165364 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 04:52:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:52:45.363 165364 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 04:52:45 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v390: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Feb  2 04:52:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:52:46 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Feb  2 04:52:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:52:46 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Feb  2 04:52:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:52:46 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Feb  2 04:52:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:52:46 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Feb  2 04:52:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:52:46 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Feb  2 04:52:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:52:46 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Feb  2 04:52:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:52:46 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Feb  2 04:52:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:52:46 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Feb  2 04:52:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:52:46 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Feb  2 04:52:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:52:46 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Feb  2 04:52:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:52:46 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Feb  2 04:52:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:52:46 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Feb  2 04:52:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:52:46 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Feb  2 04:52:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:52:46 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Feb  2 04:52:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:52:46 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Feb  2 04:52:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:52:46 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Feb  2 04:52:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:52:46 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Feb  2 04:52:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:52:46 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Feb  2 04:52:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:52:46 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Feb  2 04:52:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:52:46 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Feb  2 04:52:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:52:46 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Feb  2 04:52:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:52:46 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Feb  2 04:52:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:52:46 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Feb  2 04:52:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:52:46 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Feb  2 04:52:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:52:46 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Feb  2 04:52:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:52:46 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Feb  2 04:52:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:52:46 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Feb  2 04:52:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:52:46 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5ec000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:52:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:52:46 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d80016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:52:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:52:46 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c4000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:52:46 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:52:47 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:52:47 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:52:47 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:52:47.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:52:47 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:52:47.020Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb  2 04:52:47 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:52:47.021Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb  2 04:52:47 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:52:47.021Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:52:47 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:52:47 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:52:47 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:52:47.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:52:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 04:52:47 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 04:52:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:52:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:52:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:52:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:52:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:52:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:52:47 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v391: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Feb  2 04:52:48 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:52:48 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5ec000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:52:48 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-nfs-cephfs-compute-0-ooxkuo[97796]: [WARNING] 032/095248 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Feb  2 04:52:48 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:52:48 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5cc000d00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:52:48 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:52:48.923Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:52:48 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:52:48 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d80016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:52:49 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:52:49 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:52:49 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:52:49.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:52:49 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:52:49 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:52:49 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:52:49.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:52:49 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v392: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Feb  2 04:52:50 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:52:50 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:52:50 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:52:50 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5ec000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:52:50 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:52:50 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5cc001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:52:51 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:52:51 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:52:51 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:52:51.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:52:51 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:52:51 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:52:51 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:52:51.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:52:51 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v393: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Feb  2 04:52:51 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:52:52 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:52:52 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d80016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:52:52 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:52:52 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:52:52 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:52:52 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5ec000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:52:53 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:52:53 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:52:53 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:52:53.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 04:52:53 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:52:53 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:52:53 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:52:53.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:52:53 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Feb  2 04:52:53 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Feb  2 04:52:53 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v394: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Feb  2 04:52:53 np0005604790 kernel: SELinux:  Converting 2786 SID table entries...
Feb  2 04:52:53 np0005604790 kernel: SELinux:  policy capability network_peer_controls=1
Feb  2 04:52:53 np0005604790 kernel: SELinux:  policy capability open_perms=1
Feb  2 04:52:53 np0005604790 kernel: SELinux:  policy capability extended_socket_class=1
Feb  2 04:52:53 np0005604790 kernel: SELinux:  policy capability always_check_network=0
Feb  2 04:52:53 np0005604790 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb  2 04:52:53 np0005604790 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb  2 04:52:53 np0005604790 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb  2 04:52:53 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Feb  2 04:52:54 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:52:54 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5cc001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:52:54 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:52:54 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d80016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:52:54 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:09:52:54] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Feb  2 04:52:54 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:09:52:54] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Feb  2 04:52:54 np0005604790 dbus-broker-launch[772]: Noticed file-system modification, trigger reload.
Feb  2 04:52:54 np0005604790 dbus-broker-launch[780]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Feb  2 04:52:54 np0005604790 dbus-broker-launch[772]: Noticed file-system modification, trigger reload.
Feb  2 04:52:54 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:52:54 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:52:55 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:52:55 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:52:55 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:52:55.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:52:55 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:52:55 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:52:55 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:52:55.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:52:55 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Feb  2 04:52:55 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:52:55 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Feb  2 04:52:55 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:52:55 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Feb  2 04:52:55 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:52:55 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Feb  2 04:52:55 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:52:55 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Feb  2 04:52:55 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Feb  2 04:52:55 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v395: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Feb  2 04:52:56 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Feb  2 04:52:56 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Feb  2 04:52:56 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 04:52:56 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 04:52:56 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 04:52:56 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb  2 04:52:56 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v396: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 299 B/s rd, 0 B/s wr, 0 op/s
Feb  2 04:52:56 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 04:52:56 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:52:56 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Feb  2 04:52:56 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:52:56 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 04:52:56 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb  2 04:52:56 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 04:52:56 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb  2 04:52:56 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 04:52:56 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 04:52:56 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:52:56 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:52:56 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:52:56 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:52:56 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Feb  2 04:52:56 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Feb  2 04:52:56 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb  2 04:52:56 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:52:56 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:52:56 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb  2 04:52:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:52:56 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5ec0091b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:52:56 np0005604790 ceph-mon[74489]: log_channel(cluster) log [WRN] : Health check update: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Feb  2 04:52:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:52:56 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5cc001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:52:56 np0005604790 podman[191544]: 2026-02-02 09:52:56.606999 +0000 UTC m=+0.065441346 container create c546b6796ee8b780b9c0a36eecaebe0bee10e318b12053d6e76fae7566912c5a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Feb  2 04:52:56 np0005604790 systemd[1]: Started libpod-conmon-c546b6796ee8b780b9c0a36eecaebe0bee10e318b12053d6e76fae7566912c5a.scope.
Feb  2 04:52:56 np0005604790 podman[191544]: 2026-02-02 09:52:56.575411583 +0000 UTC m=+0.033853929 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:52:56 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:52:56 np0005604790 podman[191544]: 2026-02-02 09:52:56.713858515 +0000 UTC m=+0.172300831 container init c546b6796ee8b780b9c0a36eecaebe0bee10e318b12053d6e76fae7566912c5a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_cerf, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:52:56 np0005604790 podman[191544]: 2026-02-02 09:52:56.718710185 +0000 UTC m=+0.177152521 container start c546b6796ee8b780b9c0a36eecaebe0bee10e318b12053d6e76fae7566912c5a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_cerf, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 04:52:56 np0005604790 nervous_cerf[191567]: 167 167
Feb  2 04:52:56 np0005604790 systemd[1]: libpod-c546b6796ee8b780b9c0a36eecaebe0bee10e318b12053d6e76fae7566912c5a.scope: Deactivated successfully.
Feb  2 04:52:56 np0005604790 podman[191544]: 2026-02-02 09:52:56.753427936 +0000 UTC m=+0.211870242 container attach c546b6796ee8b780b9c0a36eecaebe0bee10e318b12053d6e76fae7566912c5a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_cerf, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb  2 04:52:56 np0005604790 podman[191544]: 2026-02-02 09:52:56.754505424 +0000 UTC m=+0.212947740 container died c546b6796ee8b780b9c0a36eecaebe0bee10e318b12053d6e76fae7566912c5a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 04:52:56 np0005604790 systemd[1]: var-lib-containers-storage-overlay-f573c41f773720139f9361e13b718c24e3a4f445eff42b834197f06f7758a444-merged.mount: Deactivated successfully.
Feb  2 04:52:56 np0005604790 podman[191544]: 2026-02-02 09:52:56.819086436 +0000 UTC m=+0.277528742 container remove c546b6796ee8b780b9c0a36eecaebe0bee10e318b12053d6e76fae7566912c5a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_cerf, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb  2 04:52:56 np0005604790 systemd[1]: libpod-conmon-c546b6796ee8b780b9c0a36eecaebe0bee10e318b12053d6e76fae7566912c5a.scope: Deactivated successfully.
Feb  2 04:52:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:52:56 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:52:56 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:52:56 np0005604790 podman[191594]: 2026-02-02 09:52:56.960138198 +0000 UTC m=+0.065694013 container create 3457d76ce0a9210ad70f716b1d3e8f31467abf34278d92f93ec8c6c676d7df05 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_golick, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 04:52:57 np0005604790 systemd[1]: Started libpod-conmon-3457d76ce0a9210ad70f716b1d3e8f31467abf34278d92f93ec8c6c676d7df05.scope.
Feb  2 04:52:57 np0005604790 podman[191594]: 2026-02-02 09:52:56.919697393 +0000 UTC m=+0.025253288 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:52:57 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:52:57 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:52:57 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:52:57.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:52:57 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:52:57.023Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:52:57 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:52:57 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0f96ccae2623509ad59da956d2b517555cf7539adfdb54ea80a636a85af6635/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 04:52:57 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0f96ccae2623509ad59da956d2b517555cf7539adfdb54ea80a636a85af6635/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:52:57 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0f96ccae2623509ad59da956d2b517555cf7539adfdb54ea80a636a85af6635/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:52:57 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0f96ccae2623509ad59da956d2b517555cf7539adfdb54ea80a636a85af6635/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 04:52:57 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0f96ccae2623509ad59da956d2b517555cf7539adfdb54ea80a636a85af6635/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 04:52:57 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:52:57 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:52:57 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:52:57.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:52:57 np0005604790 podman[191594]: 2026-02-02 09:52:57.118660378 +0000 UTC m=+0.224216193 container init 3457d76ce0a9210ad70f716b1d3e8f31467abf34278d92f93ec8c6c676d7df05 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_golick, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb  2 04:52:57 np0005604790 podman[191594]: 2026-02-02 09:52:57.128301526 +0000 UTC m=+0.233857351 container start 3457d76ce0a9210ad70f716b1d3e8f31467abf34278d92f93ec8c6c676d7df05 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_golick, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Feb  2 04:52:57 np0005604790 podman[191594]: 2026-02-02 09:52:57.149144885 +0000 UTC m=+0.254700730 container attach 3457d76ce0a9210ad70f716b1d3e8f31467abf34278d92f93ec8c6c676d7df05 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_golick, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 04:52:57 np0005604790 ceph-mon[74489]: Health check update: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Feb  2 04:52:57 np0005604790 peaceful_golick[191610]: --> passed data devices: 0 physical, 1 LVM
Feb  2 04:52:57 np0005604790 peaceful_golick[191610]: --> All data devices are unavailable
Feb  2 04:52:57 np0005604790 systemd[1]: libpod-3457d76ce0a9210ad70f716b1d3e8f31467abf34278d92f93ec8c6c676d7df05.scope: Deactivated successfully.
Feb  2 04:52:57 np0005604790 podman[191594]: 2026-02-02 09:52:57.473117061 +0000 UTC m=+0.578672846 container died 3457d76ce0a9210ad70f716b1d3e8f31467abf34278d92f93ec8c6c676d7df05 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:52:57 np0005604790 systemd[1]: var-lib-containers-storage-overlay-a0f96ccae2623509ad59da956d2b517555cf7539adfdb54ea80a636a85af6635-merged.mount: Deactivated successfully.
Feb  2 04:52:57 np0005604790 podman[191594]: 2026-02-02 09:52:57.555195302 +0000 UTC m=+0.660751097 container remove 3457d76ce0a9210ad70f716b1d3e8f31467abf34278d92f93ec8c6c676d7df05 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_golick, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Feb  2 04:52:57 np0005604790 systemd[1]: libpod-conmon-3457d76ce0a9210ad70f716b1d3e8f31467abf34278d92f93ec8c6c676d7df05.scope: Deactivated successfully.
Feb  2 04:52:58 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v397: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 299 B/s rd, 0 B/s wr, 0 op/s
Feb  2 04:52:58 np0005604790 podman[191744]: 2026-02-02 09:52:58.139595249 +0000 UTC m=+0.060736050 container create ed74b208d203c873d0ec8c99102447163fbc3c7bb9c438cee095654f6866ff25 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_wiles, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:52:58 np0005604790 systemd[1]: Started libpod-conmon-ed74b208d203c873d0ec8c99102447163fbc3c7bb9c438cee095654f6866ff25.scope.
Feb  2 04:52:58 np0005604790 podman[191744]: 2026-02-02 09:52:58.119952632 +0000 UTC m=+0.041093413 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:52:58 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:52:58 np0005604790 podman[191744]: 2026-02-02 09:52:58.23661449 +0000 UTC m=+0.157755331 container init ed74b208d203c873d0ec8c99102447163fbc3c7bb9c438cee095654f6866ff25 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_wiles, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 04:52:58 np0005604790 podman[191744]: 2026-02-02 09:52:58.248351805 +0000 UTC m=+0.169492606 container start ed74b208d203c873d0ec8c99102447163fbc3c7bb9c438cee095654f6866ff25 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_wiles, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:52:58 np0005604790 strange_wiles[191787]: 167 167
Feb  2 04:52:58 np0005604790 systemd[1]: libpod-ed74b208d203c873d0ec8c99102447163fbc3c7bb9c438cee095654f6866ff25.scope: Deactivated successfully.
Feb  2 04:52:58 np0005604790 podman[191744]: 2026-02-02 09:52:58.257829828 +0000 UTC m=+0.178970629 container attach ed74b208d203c873d0ec8c99102447163fbc3c7bb9c438cee095654f6866ff25 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_wiles, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Feb  2 04:52:58 np0005604790 podman[191744]: 2026-02-02 09:52:58.258582819 +0000 UTC m=+0.179723620 container died ed74b208d203c873d0ec8c99102447163fbc3c7bb9c438cee095654f6866ff25 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_wiles, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Feb  2 04:52:58 np0005604790 systemd[1]: var-lib-containers-storage-overlay-b60edc56472db92d003add1830067a43e927343b2b2da8eaf7ef07ab0905092c-merged.mount: Deactivated successfully.
Feb  2 04:52:58 np0005604790 podman[191744]: 2026-02-02 09:52:58.293824294 +0000 UTC m=+0.214965095 container remove ed74b208d203c873d0ec8c99102447163fbc3c7bb9c438cee095654f6866ff25 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_wiles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb  2 04:52:58 np0005604790 systemd[1]: libpod-conmon-ed74b208d203c873d0ec8c99102447163fbc3c7bb9c438cee095654f6866ff25.scope: Deactivated successfully.
Feb  2 04:52:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:52:58 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d8002bb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:52:58 np0005604790 podman[191841]: 2026-02-02 09:52:58.448811449 +0000 UTC m=+0.070645485 container create b03980d530defcf631b63b9f052e9621404d3fef01e1f40f890b1e8934d32faf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_feynman, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Feb  2 04:52:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:52:58 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5ec0091b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:52:58 np0005604790 systemd[1]: Started libpod-conmon-b03980d530defcf631b63b9f052e9621404d3fef01e1f40f890b1e8934d32faf.scope.
Feb  2 04:52:58 np0005604790 podman[191841]: 2026-02-02 09:52:58.410947054 +0000 UTC m=+0.032781140 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:52:58 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:52:58 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72841e3251f452e805dbe61983e5e431b6db2b0e876e059b5d5b22759b89b04a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 04:52:58 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72841e3251f452e805dbe61983e5e431b6db2b0e876e059b5d5b22759b89b04a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:52:58 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72841e3251f452e805dbe61983e5e431b6db2b0e876e059b5d5b22759b89b04a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:52:58 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72841e3251f452e805dbe61983e5e431b6db2b0e876e059b5d5b22759b89b04a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 04:52:58 np0005604790 podman[191841]: 2026-02-02 09:52:58.54211036 +0000 UTC m=+0.163944436 container init b03980d530defcf631b63b9f052e9621404d3fef01e1f40f890b1e8934d32faf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_feynman, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Feb  2 04:52:58 np0005604790 podman[191841]: 2026-02-02 09:52:58.55327111 +0000 UTC m=+0.175105146 container start b03980d530defcf631b63b9f052e9621404d3fef01e1f40f890b1e8934d32faf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_feynman, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Feb  2 04:52:58 np0005604790 podman[191841]: 2026-02-02 09:52:58.556499696 +0000 UTC m=+0.178333732 container attach b03980d530defcf631b63b9f052e9621404d3fef01e1f40f890b1e8934d32faf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_feynman, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid)
Feb  2 04:52:58 np0005604790 stupefied_feynman[191866]: {
Feb  2 04:52:58 np0005604790 stupefied_feynman[191866]:    "1": [
Feb  2 04:52:58 np0005604790 stupefied_feynman[191866]:        {
Feb  2 04:52:58 np0005604790 stupefied_feynman[191866]:            "devices": [
Feb  2 04:52:58 np0005604790 stupefied_feynman[191866]:                "/dev/loop3"
Feb  2 04:52:58 np0005604790 stupefied_feynman[191866]:            ],
Feb  2 04:52:58 np0005604790 stupefied_feynman[191866]:            "lv_name": "ceph_lv0",
Feb  2 04:52:58 np0005604790 stupefied_feynman[191866]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 04:52:58 np0005604790 stupefied_feynman[191866]:            "lv_size": "21470642176",
Feb  2 04:52:58 np0005604790 stupefied_feynman[191866]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d241d473-9fcb-5f74-b163-f1ca4454e7f1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fabfc705-a3af-416c-81a4-3fd4d777fb5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 04:52:58 np0005604790 stupefied_feynman[191866]:            "lv_uuid": "lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a",
Feb  2 04:52:58 np0005604790 stupefied_feynman[191866]:            "name": "ceph_lv0",
Feb  2 04:52:58 np0005604790 stupefied_feynman[191866]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 04:52:58 np0005604790 stupefied_feynman[191866]:            "tags": {
Feb  2 04:52:58 np0005604790 stupefied_feynman[191866]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 04:52:58 np0005604790 stupefied_feynman[191866]:                "ceph.block_uuid": "lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a",
Feb  2 04:52:58 np0005604790 stupefied_feynman[191866]:                "ceph.cephx_lockbox_secret": "",
Feb  2 04:52:58 np0005604790 stupefied_feynman[191866]:                "ceph.cluster_fsid": "d241d473-9fcb-5f74-b163-f1ca4454e7f1",
Feb  2 04:52:58 np0005604790 stupefied_feynman[191866]:                "ceph.cluster_name": "ceph",
Feb  2 04:52:58 np0005604790 stupefied_feynman[191866]:                "ceph.crush_device_class": "",
Feb  2 04:52:58 np0005604790 stupefied_feynman[191866]:                "ceph.encrypted": "0",
Feb  2 04:52:58 np0005604790 stupefied_feynman[191866]:                "ceph.osd_fsid": "fabfc705-a3af-416c-81a4-3fd4d777fb5f",
Feb  2 04:52:58 np0005604790 stupefied_feynman[191866]:                "ceph.osd_id": "1",
Feb  2 04:52:58 np0005604790 stupefied_feynman[191866]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 04:52:58 np0005604790 stupefied_feynman[191866]:                "ceph.type": "block",
Feb  2 04:52:58 np0005604790 stupefied_feynman[191866]:                "ceph.vdo": "0",
Feb  2 04:52:58 np0005604790 stupefied_feynman[191866]:                "ceph.with_tpm": "0"
Feb  2 04:52:58 np0005604790 stupefied_feynman[191866]:            },
Feb  2 04:52:58 np0005604790 stupefied_feynman[191866]:            "type": "block",
Feb  2 04:52:58 np0005604790 stupefied_feynman[191866]:            "vg_name": "ceph_vg0"
Feb  2 04:52:58 np0005604790 stupefied_feynman[191866]:        }
Feb  2 04:52:58 np0005604790 stupefied_feynman[191866]:    ]
Feb  2 04:52:58 np0005604790 stupefied_feynman[191866]: }
Feb  2 04:52:58 np0005604790 systemd[1]: libpod-b03980d530defcf631b63b9f052e9621404d3fef01e1f40f890b1e8934d32faf.scope: Deactivated successfully.
Feb  2 04:52:58 np0005604790 podman[191934]: 2026-02-02 09:52:58.90239197 +0000 UTC m=+0.038371630 container died b03980d530defcf631b63b9f052e9621404d3fef01e1f40f890b1e8934d32faf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_feynman, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Feb  2 04:52:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:52:58.923Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:52:58 np0005604790 systemd[1]: var-lib-containers-storage-overlay-72841e3251f452e805dbe61983e5e431b6db2b0e876e059b5d5b22759b89b04a-merged.mount: Deactivated successfully.
Feb  2 04:52:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:52:58 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5cc002cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:52:58 np0005604790 podman[191934]: 2026-02-02 09:52:58.948349622 +0000 UTC m=+0.084329242 container remove b03980d530defcf631b63b9f052e9621404d3fef01e1f40f890b1e8934d32faf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_feynman, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb  2 04:52:58 np0005604790 systemd[1]: libpod-conmon-b03980d530defcf631b63b9f052e9621404d3fef01e1f40f890b1e8934d32faf.scope: Deactivated successfully.
Feb  2 04:52:59 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:52:59 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:52:59 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:52:59.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:52:59 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:52:59 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:52:59 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:52:59.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:52:59 np0005604790 podman[192080]: 2026-02-02 09:52:59.634972951 +0000 UTC m=+0.060845503 container create fb6a9a5b5c82b7bec9c439cddf3c1c0f11ffd6b749fbae34a58ae38da874fec4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_bell, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 04:52:59 np0005604790 systemd[1]: Started libpod-conmon-fb6a9a5b5c82b7bec9c439cddf3c1c0f11ffd6b749fbae34a58ae38da874fec4.scope.
Feb  2 04:52:59 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:52:59 np0005604790 podman[192080]: 2026-02-02 09:52:59.595758529 +0000 UTC m=+0.021631131 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:52:59 np0005604790 podman[192080]: 2026-02-02 09:52:59.703261011 +0000 UTC m=+0.129133603 container init fb6a9a5b5c82b7bec9c439cddf3c1c0f11ffd6b749fbae34a58ae38da874fec4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_bell, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Feb  2 04:52:59 np0005604790 podman[192080]: 2026-02-02 09:52:59.712344625 +0000 UTC m=+0.138217187 container start fb6a9a5b5c82b7bec9c439cddf3c1c0f11ffd6b749fbae34a58ae38da874fec4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Feb  2 04:52:59 np0005604790 podman[192080]: 2026-02-02 09:52:59.71587267 +0000 UTC m=+0.141745302 container attach fb6a9a5b5c82b7bec9c439cddf3c1c0f11ffd6b749fbae34a58ae38da874fec4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_bell, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 04:52:59 np0005604790 flamboyant_bell[192097]: 167 167
Feb  2 04:52:59 np0005604790 systemd[1]: libpod-fb6a9a5b5c82b7bec9c439cddf3c1c0f11ffd6b749fbae34a58ae38da874fec4.scope: Deactivated successfully.
Feb  2 04:52:59 np0005604790 podman[192080]: 2026-02-02 09:52:59.719580389 +0000 UTC m=+0.145452981 container died fb6a9a5b5c82b7bec9c439cddf3c1c0f11ffd6b749fbae34a58ae38da874fec4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_bell, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:52:59 np0005604790 systemd[1]: var-lib-containers-storage-overlay-ab1988fdf62626bb502706f65dd7e4daf3e97b7c0a1f3fd3119ca8caa894b0db-merged.mount: Deactivated successfully.
Feb  2 04:52:59 np0005604790 podman[192080]: 2026-02-02 09:52:59.757527756 +0000 UTC m=+0.183400348 container remove fb6a9a5b5c82b7bec9c439cddf3c1c0f11ffd6b749fbae34a58ae38da874fec4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_bell, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 04:52:59 np0005604790 systemd[1]: libpod-conmon-fb6a9a5b5c82b7bec9c439cddf3c1c0f11ffd6b749fbae34a58ae38da874fec4.scope: Deactivated successfully.
Feb  2 04:52:59 np0005604790 podman[192122]: 2026-02-02 09:52:59.940666636 +0000 UTC m=+0.067051058 container create 221a13ab7488cda376853960af14765375da3e20dd78144fa3d6339fc7fc1468 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325)
Feb  2 04:52:59 np0005604790 systemd[1]: Started libpod-conmon-221a13ab7488cda376853960af14765375da3e20dd78144fa3d6339fc7fc1468.scope.
Feb  2 04:53:00 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:53:00 np0005604790 podman[192122]: 2026-02-02 09:52:59.912319356 +0000 UTC m=+0.038703828 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:53:00 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c8ca0484233eed1c5a4e0f843708d862c9e4b3f0fb3ed7fa1776a79ce62a538/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 04:53:00 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c8ca0484233eed1c5a4e0f843708d862c9e4b3f0fb3ed7fa1776a79ce62a538/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:53:00 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c8ca0484233eed1c5a4e0f843708d862c9e4b3f0fb3ed7fa1776a79ce62a538/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:53:00 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c8ca0484233eed1c5a4e0f843708d862c9e4b3f0fb3ed7fa1776a79ce62a538/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 04:53:00 np0005604790 podman[192122]: 2026-02-02 09:53:00.027365741 +0000 UTC m=+0.153750123 container init 221a13ab7488cda376853960af14765375da3e20dd78144fa3d6339fc7fc1468 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_lehmann, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Feb  2 04:53:00 np0005604790 podman[192122]: 2026-02-02 09:53:00.034752969 +0000 UTC m=+0.161137381 container start 221a13ab7488cda376853960af14765375da3e20dd78144fa3d6339fc7fc1468 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:53:00 np0005604790 podman[192122]: 2026-02-02 09:53:00.044551802 +0000 UTC m=+0.170936184 container attach 221a13ab7488cda376853960af14765375da3e20dd78144fa3d6339fc7fc1468 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_lehmann, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 04:53:00 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v398: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 299 B/s rd, 0 op/s
Feb  2 04:53:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:53:00 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c4002f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:53:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:53:00 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d8002bb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:53:00 np0005604790 lvm[192212]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 04:53:00 np0005604790 lvm[192212]: VG ceph_vg0 finished
Feb  2 04:53:00 np0005604790 ecstatic_lehmann[192138]: {}
Feb  2 04:53:00 np0005604790 lvm[192216]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 04:53:00 np0005604790 lvm[192216]: VG ceph_vg0 finished
Feb  2 04:53:00 np0005604790 systemd[1]: libpod-221a13ab7488cda376853960af14765375da3e20dd78144fa3d6339fc7fc1468.scope: Deactivated successfully.
Feb  2 04:53:00 np0005604790 systemd[1]: libpod-221a13ab7488cda376853960af14765375da3e20dd78144fa3d6339fc7fc1468.scope: Consumed 1.177s CPU time.
Feb  2 04:53:00 np0005604790 podman[192122]: 2026-02-02 09:53:00.793344537 +0000 UTC m=+0.919729029 container died 221a13ab7488cda376853960af14765375da3e20dd78144fa3d6339fc7fc1468 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_lehmann, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325)
Feb  2 04:53:00 np0005604790 systemd[1]: var-lib-containers-storage-overlay-7c8ca0484233eed1c5a4e0f843708d862c9e4b3f0fb3ed7fa1776a79ce62a538-merged.mount: Deactivated successfully.
Feb  2 04:53:00 np0005604790 podman[192122]: 2026-02-02 09:53:00.868182424 +0000 UTC m=+0.994566846 container remove 221a13ab7488cda376853960af14765375da3e20dd78144fa3d6339fc7fc1468 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_lehmann, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:53:00 np0005604790 systemd[1]: libpod-conmon-221a13ab7488cda376853960af14765375da3e20dd78144fa3d6339fc7fc1468.scope: Deactivated successfully.
Feb  2 04:53:00 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 04:53:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:53:00 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d8002bb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:53:01 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:53:01 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:53:01 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:53:01.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:53:01 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:53:01 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:53:01 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:53:01.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:53:01 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:53:01 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 04:53:01 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:53:01 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:53:01 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:53:01 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:53:02 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v399: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 299 B/s rd, 0 op/s
Feb  2 04:53:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 04:53:02 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 04:53:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:53:02 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5cc002cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:53:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:53:02 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c4002f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:53:02 np0005604790 systemd[1]: Stopping OpenSSH server daemon...
Feb  2 04:53:02 np0005604790 systemd[1]: sshd.service: Deactivated successfully.
Feb  2 04:53:02 np0005604790 systemd[1]: Stopped OpenSSH server daemon.
Feb  2 04:53:02 np0005604790 systemd[1]: sshd.service: Consumed 2.789s CPU time, read 32.0K from disk, written 0B to disk.
Feb  2 04:53:02 np0005604790 systemd[1]: Stopped target sshd-keygen.target.
Feb  2 04:53:02 np0005604790 systemd[1]: Stopping sshd-keygen.target...
Feb  2 04:53:02 np0005604790 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb  2 04:53:02 np0005604790 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb  2 04:53:02 np0005604790 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb  2 04:53:02 np0005604790 systemd[1]: Reached target sshd-keygen.target.
Feb  2 04:53:02 np0005604790 systemd[1]: Starting OpenSSH server daemon...
Feb  2 04:53:02 np0005604790 systemd[1]: Started OpenSSH server daemon.
Feb  2 04:53:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:53:02 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d8002bb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:53:03 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:53:03 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:53:03 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:53:03.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:53:03 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:53:03 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:53:03 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:53:03.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:53:04 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v400: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 299 B/s rd, 0 op/s
Feb  2 04:53:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:53:04 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5ec009ec0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:53:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:53:04 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5cc002cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:53:04 np0005604790 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb  2 04:53:04 np0005604790 systemd[1]: Starting man-db-cache-update.service...
Feb  2 04:53:04 np0005604790 systemd[1]: Reloading.
Feb  2 04:53:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:09:53:04] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Feb  2 04:53:04 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:09:53:04] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Feb  2 04:53:04 np0005604790 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 04:53:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:53:04 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c4002f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:53:04 np0005604790 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 04:53:05 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:53:05 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:53:05 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:53:05.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:53:05 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:53:05 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:53:05 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:53:05.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:53:05 np0005604790 systemd[1]: Queuing reload/restart jobs for marked units…
Feb  2 04:53:06 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v401: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 299 B/s rd, 0 op/s
Feb  2 04:53:06 np0005604790 podman[194688]: 2026-02-02 09:53:06.432534326 +0000 UTC m=+0.144821294 container health_status e39122d9482da8df802204ab6c35fb7c982874580968140e6f06bdfc8eefae36 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Feb  2 04:53:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:53:06 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d8002bb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:53:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:53:06 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5ec009ec0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:53:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:53:06 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5cc003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:53:06 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:53:07 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:53:07.024Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:53:07 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:53:07 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:53:07 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:53:07.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:53:07 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:53:07 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:53:07 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:53:07.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:53:08 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v402: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:53:08 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:53:08 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c4002f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:53:08 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:53:08 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d8002bb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:53:08 np0005604790 python3.9[197227]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb  2 04:53:08 np0005604790 systemd[1]: Reloading.
Feb  2 04:53:08 np0005604790 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 04:53:08 np0005604790 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 04:53:08 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:53:08.924Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb  2 04:53:08 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:53:08.924Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb  2 04:53:08 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:53:08.925Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:53:08 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:53:08 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5ec009ec0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:53:09 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:53:09 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:53:09 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:53:09.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:53:09 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:53:09 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:53:09 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:53:09.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:53:09 np0005604790 python3.9[198738]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb  2 04:53:09 np0005604790 systemd[1]: Reloading.
Feb  2 04:53:09 np0005604790 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 04:53:09 np0005604790 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 04:53:10 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v403: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Feb  2 04:53:10 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:53:10 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5cc003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:53:10 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:53:10 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:53:10 np0005604790 python3.9[200032]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb  2 04:53:10 np0005604790 systemd[1]: Reloading.
Feb  2 04:53:10 np0005604790 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 04:53:10 np0005604790 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 04:53:10 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:53:10 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5d8002bb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:53:11 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:53:11 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:53:11 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:53:11.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:53:11 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:53:11 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:53:11 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:53:11.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:53:11 np0005604790 python3.9[201597]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb  2 04:53:11 np0005604790 systemd[1]: Reloading.
Feb  2 04:53:11 np0005604790 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 04:53:11 np0005604790 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 04:53:11 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:53:12 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v404: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:53:13 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:53:13 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:53:13 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:53:13.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:53:13 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:53:13 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5ec009ec0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:53:13 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:53:13 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5ec00a7e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:53:13 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:53:13 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5ec00ad10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:53:13 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:53:13 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:53:13 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:53:13.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:53:13 np0005604790 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb  2 04:53:13 np0005604790 systemd[1]: Finished man-db-cache-update.service.
Feb  2 04:53:13 np0005604790 systemd[1]: man-db-cache-update.service: Consumed 9.403s CPU time.
Feb  2 04:53:13 np0005604790 systemd[1]: run-rb0a4c504fbc048b48962c94d4d4c85ed.service: Deactivated successfully.
Feb  2 04:53:13 np0005604790 python3.9[202538]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb  2 04:53:13 np0005604790 systemd[1]: Reloading.
Feb  2 04:53:13 np0005604790 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 04:53:13 np0005604790 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 04:53:14 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v405: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Feb  2 04:53:14 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:53:14 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5ec009ec0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:53:14 np0005604790 python3.9[202730]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb  2 04:53:14 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:09:53:14] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Feb  2 04:53:14 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:09:53:14] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Feb  2 04:53:14 np0005604790 systemd[1]: Reloading.
Feb  2 04:53:15 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:53:15 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:53:15 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:53:15.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:53:15 np0005604790 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 04:53:15 np0005604790 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 04:53:15 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:53:15 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c8000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:53:15 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:53:15 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5bc000b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:53:15 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:53:15 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:53:15 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:53:15.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:53:15 np0005604790 podman[202837]: 2026-02-02 09:53:15.509704938 +0000 UTC m=+0.083571802 container health_status 29bea7eb4976451e3dfdb1e9a3aba30b10e64cbce131bf8d09548b4a224f8ee8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127)
Feb  2 04:53:16 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v406: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:53:16 np0005604790 python3.9[202965]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb  2 04:53:16 np0005604790 systemd[1]: Reloading.
Feb  2 04:53:16 np0005604790 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 04:53:16 np0005604790 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 04:53:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:53:16 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5bc000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:53:16 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:53:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:53:17.026Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:53:17 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:53:17 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:53:17 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:53:17.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:53:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] Optimize plan auto_2026-02-02_09:53:17
Feb  2 04:53:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 04:53:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] do_upmap
Feb  2 04:53:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] pools ['.nfs', 'default.rgw.meta', 'default.rgw.control', 'default.rgw.log', 'volumes', 'cephfs.cephfs.data', 'backups', 'vms', 'cephfs.cephfs.meta', '.rgw.root', '.mgr', 'images']
Feb  2 04:53:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 04:53:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:53:17 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5cc003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:53:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:53:17 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c8000fa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:53:17 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:53:17 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:53:17 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:53:17.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:53:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 04:53:17 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 04:53:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:53:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:53:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:53:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:53:17 np0005604790 python3.9[203157]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb  2 04:53:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:53:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:53:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 04:53:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 04:53:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 04:53:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 04:53:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 04:53:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 04:53:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 04:53:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 04:53:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:53:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 04:53:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:53:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 04:53:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:53:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 04:53:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:53:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 04:53:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:53:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 04:53:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:53:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Feb  2 04:53:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:53:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 04:53:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:53:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 04:53:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:53:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Feb  2 04:53:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:53:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 04:53:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:53:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 04:53:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:53:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb  2 04:53:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 04:53:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 04:53:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 04:53:18 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v407: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:53:18 np0005604790 python3.9[203314]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb  2 04:53:18 np0005604790 systemd[1]: Reloading.
Feb  2 04:53:18 np0005604790 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 04:53:18 np0005604790 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 04:53:18 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:53:18 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5ec00a7e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:53:18 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:53:18.926Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:53:19 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:53:19 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:53:19 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:53:19.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:53:19 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:53:19 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:53:19 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:53:19 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5cc003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:53:19 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:53:19 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:53:19 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:53:19.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:53:19 np0005604790 python3.9[203504]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb  2 04:53:20 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v408: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Feb  2 04:53:20 np0005604790 systemd[1]: Reloading.
Feb  2 04:53:20 np0005604790 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 04:53:20 np0005604790 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 04:53:20 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:53:20 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5ec00a7e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:53:20 np0005604790 systemd[1]: Listening on libvirt proxy daemon socket.
Feb  2 04:53:20 np0005604790 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Feb  2 04:53:21 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:53:21 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:53:21 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:53:21.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:53:21 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:53:21 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5ec00a7e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:53:21 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:53:21 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:53:21 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:53:21 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:53:21 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:53:21.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:53:21 np0005604790 python3.9[203700]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb  2 04:53:21 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:53:22 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v409: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:53:22 np0005604790 python3.9[203857]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb  2 04:53:22 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:53:22 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5cc003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:53:23 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:53:23 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:53:23 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:53:23.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:53:23 np0005604790 python3.9[204012]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb  2 04:53:23 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:53:23 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5ec00a7e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:53:23 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:53:23 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5ec00a7e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:53:23 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:53:23 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:53:23 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:53:23.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:53:23 np0005604790 python3.9[204168]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb  2 04:53:23 np0005604790 auditd[701]: Audit daemon rotating log files
Feb  2 04:53:24 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v410: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Feb  2 04:53:24 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:53:24 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:53:24 np0005604790 python3.9[204324]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb  2 04:53:24 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:09:53:24] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Feb  2 04:53:24 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:09:53:24] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Feb  2 04:53:25 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:53:25 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:53:25 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:53:25.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:53:25 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:53:25 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5cc003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:53:25 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:53:25 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5ec00a7e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:53:25 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:53:25 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:53:25 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:53:25.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:53:25 np0005604790 python3.9[204479]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb  2 04:53:26 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v411: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:53:26 np0005604790 python3.9[204636]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb  2 04:53:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:53:26 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5ec00a7e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:53:26 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:53:27 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:53:27.028Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:53:27 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:53:27 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:53:27 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:53:27.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:53:27 np0005604790 python3.9[204791]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb  2 04:53:27 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:53:27 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:53:27 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:53:27 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5cc003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:53:27 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:53:27 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:53:27 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:53:27.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:53:27 np0005604790 python3.9[204946]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb  2 04:53:28 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v412: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:53:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:53:28 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5ec00a7e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:53:28 np0005604790 python3.9[205103]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb  2 04:53:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:53:28.927Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:53:29 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:53:29 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:53:29 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:53:29.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:53:29 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:53:29 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5ec00a7e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:53:29 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:53:29 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:53:29 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:53:29 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:53:29 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:53:29.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:53:29 np0005604790 python3.9[205258]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb  2 04:53:30 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v413: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Feb  2 04:53:30 np0005604790 python3.9[205415]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb  2 04:53:30 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:53:30 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5cc003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:53:31 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:53:31 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:53:31 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:53:31.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:53:31 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:53:31 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5c80024a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:53:31 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:53:31 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5ec00a7e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:53:31 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:53:31 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:53:31 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:53:31.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:53:31 np0005604790 python3.9[205570]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb  2 04:53:31 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:53:32 np0005604790 python3.9[205726]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb  2 04:53:32 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v414: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:53:32 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 04:53:32 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 04:53:32 np0005604790 kernel: ganesha.nfsd[191260]: segfault at 50 ip 00007fb66ee3b32e sp 00007fb5f6ffc210 error 4 in libntirpc.so.5.8[7fb66ee20000+2c000] likely on CPU 7 (core 0, socket 7)
Feb  2 04:53:32 np0005604790 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Feb  2 04:53:32 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[184959]: 02/02/2026 09:53:32 : epoch 698073e1 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb5ec00a7e0 fd 38 proxy ignored for local
Feb  2 04:53:32 np0005604790 systemd[1]: Started Process Core Dump (PID 205755/UID 0).
Feb  2 04:53:33 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:53:33 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:53:33 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:53:33.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:53:33 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:53:33 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:53:33 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:53:33.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:53:33 np0005604790 python3.9[205884]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb  2 04:53:33 np0005604790 systemd-coredump[205756]: Process 184987 (ganesha.nfsd) of user 0 dumped core.
                                                       Stack trace of thread 41:
                                                       #0  0x00007fb66ee3b32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                       ELF object binary architecture: AMD x86-64
Feb  2 04:53:33 np0005604790 systemd[1]: systemd-coredump@6-205755-0.service: Deactivated successfully.
Feb  2 04:53:33 np0005604790 systemd[1]: systemd-coredump@6-205755-0.service: Consumed 1.020s CPU time.
Feb  2 04:53:33 np0005604790 podman[205958]: 2026-02-02 09:53:33.708846544 +0000 UTC m=+0.038580923 container died cc8d385bb31dce460e139764d02c23dfa99c7a4c5f0300ca3fe5ba342f5c2554 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb  2 04:53:33 np0005604790 systemd[1]: var-lib-containers-storage-overlay-ba5b69d8f050d445292e323c6dae9377f6008544ddbb523f2540dc13e3e23a40-merged.mount: Deactivated successfully.
Feb  2 04:53:33 np0005604790 podman[205958]: 2026-02-02 09:53:33.749367668 +0000 UTC m=+0.079102017 container remove cc8d385bb31dce460e139764d02c23dfa99c7a4c5f0300ca3fe5ba342f5c2554 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Feb  2 04:53:33 np0005604790 systemd[1]: ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1@nfs.cephfs.2.0.compute-0.fdwwab.service: Main process exited, code=exited, status=139/n/a
Feb  2 04:53:33 np0005604790 systemd[1]: ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1@nfs.cephfs.2.0.compute-0.fdwwab.service: Failed with result 'exit-code'.
Feb  2 04:53:33 np0005604790 systemd[1]: ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1@nfs.cephfs.2.0.compute-0.fdwwab.service: Consumed 1.385s CPU time.
Feb  2 04:53:34 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v415: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Feb  2 04:53:34 np0005604790 python3.9[206084]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb  2 04:53:34 np0005604790 python3.9[206237]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 04:53:34 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:09:53:34] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Feb  2 04:53:34 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:09:53:34] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Feb  2 04:53:35 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:53:35 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:53:35 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:53:35.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:53:35 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:53:35 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:53:35 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:53:35.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:53:35 np0005604790 python3.9[206389]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 04:53:35 np0005604790 python3.9[206543]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 04:53:36 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v416: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb  2 04:53:36 np0005604790 python3.9[206720]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb  2 04:53:36 np0005604790 podman[206721]: 2026-02-02 09:53:36.598189932 +0000 UTC m=+0.079156370 container health_status e39122d9482da8df802204ab6c35fb7c982874580968140e6f06bdfc8eefae36 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:53:36 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:53:37 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:53:37.029Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:53:37 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:53:37 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:53:37 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:53:37.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:53:37 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:53:37 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:53:37 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:53:37.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:53:37 np0005604790 python3.9[206898]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 04:53:37 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-nfs-cephfs-compute-0-ooxkuo[97796]: [WARNING] 032/095337 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Feb  2 04:53:38 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v417: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb  2 04:53:38 np0005604790 python3.9[207052]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:53:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-nfs-cephfs-compute-0-ooxkuo[97796]: [WARNING] 032/095338 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Feb  2 04:53:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:53:38.928Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:53:39 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:53:39 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:53:39 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:53:39.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:53:39 np0005604790 python3.9[207177]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1770026017.6315856-1641-210526704727246/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:53:39 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:53:39 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:53:39 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:53:39.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:53:39 np0005604790 python3.9[207329]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:53:40 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v418: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb  2 04:53:40 np0005604790 python3.9[207456]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1770026019.2680032-1641-125732368104670/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:53:40 np0005604790 python3.9[207608]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:53:41 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:53:41 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:53:41 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:53:41.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:53:41 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:53:41 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:53:41 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:53:41.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:53:41 np0005604790 python3.9[207733]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1770026020.5045497-1641-16910013969349/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:53:41 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:53:42 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v419: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Feb  2 04:53:42 np0005604790 python3.9[207887]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:53:42 np0005604790 python3.9[208012]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1770026021.6150384-1641-56324182646951/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:53:43 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:53:43 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:53:43 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:53:43.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:53:43 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:53:43 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:53:43 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:53:43.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:53:43 np0005604790 python3.9[208164]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:53:43 np0005604790 systemd[1]: ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1@nfs.cephfs.2.0.compute-0.fdwwab.service: Scheduled restart job, restart counter is at 7.
Feb  2 04:53:43 np0005604790 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.fdwwab for d241d473-9fcb-5f74-b163-f1ca4454e7f1.
Feb  2 04:53:43 np0005604790 systemd[1]: ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1@nfs.cephfs.2.0.compute-0.fdwwab.service: Consumed 1.385s CPU time.
Feb  2 04:53:43 np0005604790 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.fdwwab for d241d473-9fcb-5f74-b163-f1ca4454e7f1...
Feb  2 04:53:44 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v420: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb  2 04:53:44 np0005604790 python3.9[208292]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1770026022.8435647-1641-138885781964585/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:53:44 np0005604790 podman[208345]: 2026-02-02 09:53:44.189546277 +0000 UTC m=+0.045350008 container create a25b00163c945590c94dad2117ef48d185c3990f69c139dcb34cbc78e54e37bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Feb  2 04:53:44 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28c2bc195061a6557dff7bd63b33981cd1e551c6df4f18df3768f381080444b6/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Feb  2 04:53:44 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28c2bc195061a6557dff7bd63b33981cd1e551c6df4f18df3768f381080444b6/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 04:53:44 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28c2bc195061a6557dff7bd63b33981cd1e551c6df4f18df3768f381080444b6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:53:44 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28c2bc195061a6557dff7bd63b33981cd1e551c6df4f18df3768f381080444b6/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.fdwwab-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 04:53:44 np0005604790 podman[208345]: 2026-02-02 09:53:44.247795135 +0000 UTC m=+0.103598926 container init a25b00163c945590c94dad2117ef48d185c3990f69c139dcb34cbc78e54e37bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:53:44 np0005604790 podman[208345]: 2026-02-02 09:53:44.255715481 +0000 UTC m=+0.111519212 container start a25b00163c945590c94dad2117ef48d185c3990f69c139dcb34cbc78e54e37bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 04:53:44 np0005604790 bash[208345]: a25b00163c945590c94dad2117ef48d185c3990f69c139dcb34cbc78e54e37bb
Feb  2 04:53:44 np0005604790 podman[208345]: 2026-02-02 09:53:44.163339192 +0000 UTC m=+0.019142943 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:53:44 np0005604790 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.fdwwab for d241d473-9fcb-5f74-b163-f1ca4454e7f1.
Feb  2 04:53:44 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[208384]: 02/02/2026 09:53:44 : epoch 69807428 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Feb  2 04:53:44 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[208384]: 02/02/2026 09:53:44 : epoch 69807428 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Feb  2 04:53:44 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[208384]: 02/02/2026 09:53:44 : epoch 69807428 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Feb  2 04:53:44 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[208384]: 02/02/2026 09:53:44 : epoch 69807428 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Feb  2 04:53:44 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[208384]: 02/02/2026 09:53:44 : epoch 69807428 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Feb  2 04:53:44 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[208384]: 02/02/2026 09:53:44 : epoch 69807428 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Feb  2 04:53:44 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[208384]: 02/02/2026 09:53:44 : epoch 69807428 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Feb  2 04:53:44 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[208384]: 02/02/2026 09:53:44 : epoch 69807428 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 04:53:44 np0005604790 python3.9[208553]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:53:44 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:09:53:44] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Feb  2 04:53:44 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:09:53:44] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Feb  2 04:53:45 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:53:45 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:53:45 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:53:45.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:53:45 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:53:45 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:53:45 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:53:45.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:53:45 np0005604790 python3.9[208678]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1770026024.2848525-1641-213366619147186/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:53:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:53:45.363 165364 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 04:53:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:53:45.364 165364 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 04:53:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:53:45.364 165364 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 04:53:45 np0005604790 podman[208803]: 2026-02-02 09:53:45.81061198 +0000 UTC m=+0.097472469 container health_status 29bea7eb4976451e3dfdb1e9a3aba30b10e64cbce131bf8d09548b4a224f8ee8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Feb  2 04:53:45 np0005604790 python3.9[208852]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:53:46 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v421: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Feb  2 04:53:46 np0005604790 python3.9[208976]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1770026025.439699-1641-85398176762975/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:53:46 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:53:47 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:53:47.030Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:53:47 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:53:47 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:53:47 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:53:47.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:53:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 04:53:47 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 04:53:47 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:53:47 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:53:47 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:53:47.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:53:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:53:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:53:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:53:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:53:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:53:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:53:47 np0005604790 python3.9[209128]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:53:47 np0005604790 python3.9[209255]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1770026026.9201875-1641-219241271045121/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:53:48 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v422: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Feb  2 04:53:48 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:53:48.929Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:53:49 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:53:49 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:53:49 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:53:49.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:53:49 np0005604790 python3.9[209407]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Feb  2 04:53:49 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:53:49 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:53:49 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:53:49.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:53:49 np0005604790 python3.9[209561]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:53:50 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v423: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 597 B/s wr, 2 op/s
Feb  2 04:53:50 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[208384]: 02/02/2026 09:53:50 : epoch 69807428 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 04:53:50 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[208384]: 02/02/2026 09:53:50 : epoch 69807428 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 04:53:50 np0005604790 python3.9[209714]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:53:51 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:53:51 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:53:51 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:53:51.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:53:51 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:53:51 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:53:51 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:53:51.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:53:51 np0005604790 python3.9[209866]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:53:51 np0005604790 python3.9[210019]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:53:51 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:53:52 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v424: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 597 B/s wr, 2 op/s
Feb  2 04:53:52 np0005604790 python3.9[210172]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:53:53 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:53:53 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:53:53 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:53:53.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:53:53 np0005604790 python3.9[210324]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:53:53 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:53:53 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:53:53 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:53:53.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:53:53 np0005604790 python3.9[210477]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:53:54 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v425: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 1023 B/s wr, 4 op/s
Feb  2 04:53:54 np0005604790 python3.9[210630]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:53:54 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:09:53:54] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Feb  2 04:53:54 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:09:53:54] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Feb  2 04:53:55 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:53:55 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:53:55 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:53:55.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:53:55 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:53:55 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000028s ======
Feb  2 04:53:55 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:53:55.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb  2 04:53:55 np0005604790 python3.9[210782]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:53:56 np0005604790 python3.9[210935]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:53:56 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v426: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 1023 B/s wr, 3 op/s
Feb  2 04:53:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[208384]: 02/02/2026 09:53:56 : epoch 69807428 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Feb  2 04:53:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[208384]: 02/02/2026 09:53:56 : epoch 69807428 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Feb  2 04:53:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[208384]: 02/02/2026 09:53:56 : epoch 69807428 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Feb  2 04:53:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[208384]: 02/02/2026 09:53:56 : epoch 69807428 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Feb  2 04:53:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[208384]: 02/02/2026 09:53:56 : epoch 69807428 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Feb  2 04:53:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[208384]: 02/02/2026 09:53:56 : epoch 69807428 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Feb  2 04:53:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[208384]: 02/02/2026 09:53:56 : epoch 69807428 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Feb  2 04:53:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[208384]: 02/02/2026 09:53:56 : epoch 69807428 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Feb  2 04:53:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[208384]: 02/02/2026 09:53:56 : epoch 69807428 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Feb  2 04:53:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[208384]: 02/02/2026 09:53:56 : epoch 69807428 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Feb  2 04:53:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[208384]: 02/02/2026 09:53:56 : epoch 69807428 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Feb  2 04:53:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[208384]: 02/02/2026 09:53:56 : epoch 69807428 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Feb  2 04:53:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[208384]: 02/02/2026 09:53:56 : epoch 69807428 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Feb  2 04:53:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[208384]: 02/02/2026 09:53:56 : epoch 69807428 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Feb  2 04:53:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[208384]: 02/02/2026 09:53:56 : epoch 69807428 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Feb  2 04:53:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[208384]: 02/02/2026 09:53:56 : epoch 69807428 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Feb  2 04:53:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[208384]: 02/02/2026 09:53:56 : epoch 69807428 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Feb  2 04:53:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[208384]: 02/02/2026 09:53:56 : epoch 69807428 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Feb  2 04:53:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[208384]: 02/02/2026 09:53:56 : epoch 69807428 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Feb  2 04:53:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[208384]: 02/02/2026 09:53:56 : epoch 69807428 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Feb  2 04:53:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[208384]: 02/02/2026 09:53:56 : epoch 69807428 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Feb  2 04:53:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[208384]: 02/02/2026 09:53:56 : epoch 69807428 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Feb  2 04:53:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[208384]: 02/02/2026 09:53:56 : epoch 69807428 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Feb  2 04:53:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[208384]: 02/02/2026 09:53:56 : epoch 69807428 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Feb  2 04:53:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[208384]: 02/02/2026 09:53:56 : epoch 69807428 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Feb  2 04:53:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[208384]: 02/02/2026 09:53:56 : epoch 69807428 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Feb  2 04:53:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[208384]: 02/02/2026 09:53:56 : epoch 69807428 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Feb  2 04:53:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[208384]: 02/02/2026 09:53:56 : epoch 69807428 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f78cc000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:53:56 np0005604790 python3.9[211127]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:53:56 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:53:57 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:53:57.030Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:53:57 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:53:57 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:53:57 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:53:57.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:53:57 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[208384]: 02/02/2026 09:53:57 : epoch 69807428 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f78d0001ee0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:53:57 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[208384]: 02/02/2026 09:53:57 : epoch 69807428 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f78a8000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:53:57 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:53:57 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:53:57 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:53:57.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:53:57 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-nfs-cephfs-compute-0-ooxkuo[97796]: [WARNING] 032/095357 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Feb  2 04:53:57 np0005604790 python3.9[211279]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:53:58 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v427: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 1023 B/s wr, 3 op/s
Feb  2 04:53:58 np0005604790 python3.9[211433]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:53:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-nfs-cephfs-compute-0-ooxkuo[97796]: [WARNING] 032/095358 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Feb  2 04:53:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[208384]: 02/02/2026 09:53:58 : epoch 69807428 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f78cc000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:53:58 np0005604790 python3.9[211585]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:53:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:53:58.929Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:53:59 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:53:59 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:53:59 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:53:59.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:53:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[208384]: 02/02/2026 09:53:59 : epoch 69807428 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f78c4001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:53:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[208384]: 02/02/2026 09:53:59 : epoch 69807428 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f78d00029e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:53:59 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:53:59 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:53:59 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:53:59.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:53:59 np0005604790 python3.9[211737]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:54:00 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v428: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Feb  2 04:54:00 np0005604790 python3.9[211862]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770026039.1471334-2304-163398768475841/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:54:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[208384]: 02/02/2026 09:54:00 : epoch 69807428 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f78d00029e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:54:00 np0005604790 python3.9[212014]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:54:01 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:54:01 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:54:01 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:54:01.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:54:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[208384]: 02/02/2026 09:54:01 : epoch 69807428 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f78cc001f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:54:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[208384]: 02/02/2026 09:54:01 : epoch 69807428 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f78cc001f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:54:01 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:54:01 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:54:01 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:54:01.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:54:01 np0005604790 python3.9[212137]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770026040.4346938-2304-55332586272931/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:54:01 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:54:02 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v429: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 511 B/s wr, 2 op/s
Feb  2 04:54:02 np0005604790 python3.9[212360]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:54:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 04:54:02 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 04:54:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 04:54:02 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb  2 04:54:02 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v430: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 609 B/s wr, 3 op/s
Feb  2 04:54:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 04:54:02 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:54:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Feb  2 04:54:02 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:54:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 04:54:02 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 04:54:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 04:54:02 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb  2 04:54:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 04:54:02 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb  2 04:54:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 04:54:02 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 04:54:02 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb  2 04:54:02 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:54:02 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:54:02 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb  2 04:54:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[208384]: 02/02/2026 09:54:02 : epoch 69807428 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f78d00029e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:54:02 np0005604790 python3.9[212545]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770026041.684251-2304-143183406158414/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:54:02 np0005604790 podman[212587]: 2026-02-02 09:54:02.701256011 +0000 UTC m=+0.050042595 container create 7c0088db3cd4652fbee04e3b8a8fb18362a835c5ef203bcdf3dea9251d898090 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_hopper, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 04:54:02 np0005604790 systemd[1]: Started libpod-conmon-7c0088db3cd4652fbee04e3b8a8fb18362a835c5ef203bcdf3dea9251d898090.scope.
Feb  2 04:54:02 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:54:02 np0005604790 podman[212587]: 2026-02-02 09:54:02.673576856 +0000 UTC m=+0.022363530 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:54:02 np0005604790 podman[212587]: 2026-02-02 09:54:02.784864951 +0000 UTC m=+0.133651575 container init 7c0088db3cd4652fbee04e3b8a8fb18362a835c5ef203bcdf3dea9251d898090 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_hopper, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 04:54:02 np0005604790 podman[212587]: 2026-02-02 09:54:02.792409527 +0000 UTC m=+0.141196121 container start 7c0088db3cd4652fbee04e3b8a8fb18362a835c5ef203bcdf3dea9251d898090 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_hopper, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb  2 04:54:02 np0005604790 podman[212587]: 2026-02-02 09:54:02.797419453 +0000 UTC m=+0.146206127 container attach 7c0088db3cd4652fbee04e3b8a8fb18362a835c5ef203bcdf3dea9251d898090 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_hopper, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 04:54:02 np0005604790 keen_hopper[212629]: 167 167
Feb  2 04:54:02 np0005604790 systemd[1]: libpod-7c0088db3cd4652fbee04e3b8a8fb18362a835c5ef203bcdf3dea9251d898090.scope: Deactivated successfully.
Feb  2 04:54:02 np0005604790 conmon[212629]: conmon 7c0088db3cd4652fbee0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7c0088db3cd4652fbee04e3b8a8fb18362a835c5ef203bcdf3dea9251d898090.scope/container/memory.events
Feb  2 04:54:02 np0005604790 podman[212587]: 2026-02-02 09:54:02.802825191 +0000 UTC m=+0.151611795 container died 7c0088db3cd4652fbee04e3b8a8fb18362a835c5ef203bcdf3dea9251d898090 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_hopper, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 04:54:02 np0005604790 systemd[1]: var-lib-containers-storage-overlay-a386e628c4700099e66799aded56a8321f81c2f92bd4f2ec473c6c7755a924c8-merged.mount: Deactivated successfully.
Feb  2 04:54:02 np0005604790 podman[212587]: 2026-02-02 09:54:02.851913409 +0000 UTC m=+0.200700013 container remove 7c0088db3cd4652fbee04e3b8a8fb18362a835c5ef203bcdf3dea9251d898090 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb  2 04:54:02 np0005604790 systemd[1]: libpod-conmon-7c0088db3cd4652fbee04e3b8a8fb18362a835c5ef203bcdf3dea9251d898090.scope: Deactivated successfully.
Feb  2 04:54:03 np0005604790 podman[212727]: 2026-02-02 09:54:03.01877702 +0000 UTC m=+0.058130077 container create b3d140da5eef9827fbdae8787a2597f387a066a4cd3dbc9d883a3afc19d5d835 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 04:54:03 np0005604790 systemd[1]: Started libpod-conmon-b3d140da5eef9827fbdae8787a2597f387a066a4cd3dbc9d883a3afc19d5d835.scope.
Feb  2 04:54:03 np0005604790 podman[212727]: 2026-02-02 09:54:02.995261618 +0000 UTC m=+0.034614715 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:54:03 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:54:03 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:54:03 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:54:03.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:54:03 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:54:03 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fb95c28b9db4f586ccc1182af29208ccc618b438b68ef9997af6c05133291af/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 04:54:03 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fb95c28b9db4f586ccc1182af29208ccc618b438b68ef9997af6c05133291af/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:54:03 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fb95c28b9db4f586ccc1182af29208ccc618b438b68ef9997af6c05133291af/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:54:03 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fb95c28b9db4f586ccc1182af29208ccc618b438b68ef9997af6c05133291af/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 04:54:03 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fb95c28b9db4f586ccc1182af29208ccc618b438b68ef9997af6c05133291af/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 04:54:03 np0005604790 podman[212727]: 2026-02-02 09:54:03.124462852 +0000 UTC m=+0.163815959 container init b3d140da5eef9827fbdae8787a2597f387a066a4cd3dbc9d883a3afc19d5d835 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_jemison, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Feb  2 04:54:03 np0005604790 podman[212727]: 2026-02-02 09:54:03.138237437 +0000 UTC m=+0.177590494 container start b3d140da5eef9827fbdae8787a2597f387a066a4cd3dbc9d883a3afc19d5d835 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_jemison, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Feb  2 04:54:03 np0005604790 podman[212727]: 2026-02-02 09:54:03.142824342 +0000 UTC m=+0.182177359 container attach b3d140da5eef9827fbdae8787a2597f387a066a4cd3dbc9d883a3afc19d5d835 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_jemison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb  2 04:54:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[208384]: 02/02/2026 09:54:03 : epoch 69807428 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f78d00029e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:54:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[208384]: 02/02/2026 09:54:03 : epoch 69807428 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f78cc001f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:54:03 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:54:03 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:54:03 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:54:03.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:54:03 np0005604790 python3.9[212799]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:54:03 np0005604790 competent_jemison[212795]: --> passed data devices: 0 physical, 1 LVM
Feb  2 04:54:03 np0005604790 competent_jemison[212795]: --> All data devices are unavailable
Feb  2 04:54:03 np0005604790 systemd[1]: libpod-b3d140da5eef9827fbdae8787a2597f387a066a4cd3dbc9d883a3afc19d5d835.scope: Deactivated successfully.
Feb  2 04:54:03 np0005604790 podman[212727]: 2026-02-02 09:54:03.514632931 +0000 UTC m=+0.553985958 container died b3d140da5eef9827fbdae8787a2597f387a066a4cd3dbc9d883a3afc19d5d835 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_jemison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb  2 04:54:03 np0005604790 systemd[1]: var-lib-containers-storage-overlay-1fb95c28b9db4f586ccc1182af29208ccc618b438b68ef9997af6c05133291af-merged.mount: Deactivated successfully.
Feb  2 04:54:03 np0005604790 podman[212727]: 2026-02-02 09:54:03.562795794 +0000 UTC m=+0.602148851 container remove b3d140da5eef9827fbdae8787a2597f387a066a4cd3dbc9d883a3afc19d5d835 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325)
Feb  2 04:54:03 np0005604790 systemd[1]: libpod-conmon-b3d140da5eef9827fbdae8787a2597f387a066a4cd3dbc9d883a3afc19d5d835.scope: Deactivated successfully.
Feb  2 04:54:03 np0005604790 python3.9[212991]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770026042.7803648-2304-230867454981866/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:54:04 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v431: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 406 B/s rd, 101 B/s wr, 0 op/s
Feb  2 04:54:04 np0005604790 podman[213105]: 2026-02-02 09:54:04.159801354 +0000 UTC m=+0.058087075 container create d0053843162b642cf667a9b4408daa76f01f534f6ecb6bc35699e49dead784bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_borg, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:54:04 np0005604790 systemd[1]: Started libpod-conmon-d0053843162b642cf667a9b4408daa76f01f534f6ecb6bc35699e49dead784bd.scope.
Feb  2 04:54:04 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:54:04 np0005604790 podman[213105]: 2026-02-02 09:54:04.134300498 +0000 UTC m=+0.032586309 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:54:04 np0005604790 podman[213105]: 2026-02-02 09:54:04.239563749 +0000 UTC m=+0.137849470 container init d0053843162b642cf667a9b4408daa76f01f534f6ecb6bc35699e49dead784bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_borg, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:54:04 np0005604790 podman[213105]: 2026-02-02 09:54:04.248138833 +0000 UTC m=+0.146424554 container start d0053843162b642cf667a9b4408daa76f01f534f6ecb6bc35699e49dead784bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_borg, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 04:54:04 np0005604790 podman[213105]: 2026-02-02 09:54:04.252466851 +0000 UTC m=+0.150752652 container attach d0053843162b642cf667a9b4408daa76f01f534f6ecb6bc35699e49dead784bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_borg, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb  2 04:54:04 np0005604790 upbeat_borg[213152]: 167 167
Feb  2 04:54:04 np0005604790 systemd[1]: libpod-d0053843162b642cf667a9b4408daa76f01f534f6ecb6bc35699e49dead784bd.scope: Deactivated successfully.
Feb  2 04:54:04 np0005604790 podman[213105]: 2026-02-02 09:54:04.2549987 +0000 UTC m=+0.153284501 container died d0053843162b642cf667a9b4408daa76f01f534f6ecb6bc35699e49dead784bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_borg, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 04:54:04 np0005604790 systemd[1]: var-lib-containers-storage-overlay-3f8e8ca53de3fd143fd3c264bec95abd89ef8370097d52ba505b7f5ab45e1ca5-merged.mount: Deactivated successfully.
Feb  2 04:54:04 np0005604790 podman[213105]: 2026-02-02 09:54:04.32173238 +0000 UTC m=+0.220018121 container remove d0053843162b642cf667a9b4408daa76f01f534f6ecb6bc35699e49dead784bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_borg, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 04:54:04 np0005604790 systemd[1]: libpod-conmon-d0053843162b642cf667a9b4408daa76f01f534f6ecb6bc35699e49dead784bd.scope: Deactivated successfully.
Feb  2 04:54:04 np0005604790 podman[213229]: 2026-02-02 09:54:04.508341258 +0000 UTC m=+0.058223239 container create 2b118b544bbb4b103b76516dd15c97aa6fcb75bfe751ff82e3f6a5b49b7b3893 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_franklin, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True)
Feb  2 04:54:04 np0005604790 kernel: ganesha.nfsd[211075]: segfault at 50 ip 00007f795683032e sp 00007f78e27fb210 error 4 in libntirpc.so.5.8[7f7956815000+2c000] likely on CPU 2 (core 0, socket 2)
Feb  2 04:54:04 np0005604790 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Feb  2 04:54:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[208384]: 02/02/2026 09:54:04 : epoch 69807428 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f78cc001f50 fd 39 proxy ignored for local
Feb  2 04:54:04 np0005604790 systemd[1]: Started Process Core Dump (PID 213243/UID 0).
Feb  2 04:54:04 np0005604790 systemd[1]: Started libpod-conmon-2b118b544bbb4b103b76516dd15c97aa6fcb75bfe751ff82e3f6a5b49b7b3893.scope.
Feb  2 04:54:04 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:54:04 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaba5cb3b48d5840b641fb37971b2eb611ad2ea1ef0eeda5d2138f9c640f8715/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 04:54:04 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaba5cb3b48d5840b641fb37971b2eb611ad2ea1ef0eeda5d2138f9c640f8715/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:54:04 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaba5cb3b48d5840b641fb37971b2eb611ad2ea1ef0eeda5d2138f9c640f8715/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:54:04 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaba5cb3b48d5840b641fb37971b2eb611ad2ea1ef0eeda5d2138f9c640f8715/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 04:54:04 np0005604790 podman[213229]: 2026-02-02 09:54:04.485442594 +0000 UTC m=+0.035324585 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:54:04 np0005604790 podman[213229]: 2026-02-02 09:54:04.591686141 +0000 UTC m=+0.141568082 container init 2b118b544bbb4b103b76516dd15c97aa6fcb75bfe751ff82e3f6a5b49b7b3893 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_franklin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Feb  2 04:54:04 np0005604790 python3.9[213223]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:54:04 np0005604790 podman[213229]: 2026-02-02 09:54:04.597869179 +0000 UTC m=+0.147751140 container start 2b118b544bbb4b103b76516dd15c97aa6fcb75bfe751ff82e3f6a5b49b7b3893 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_franklin, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Feb  2 04:54:04 np0005604790 podman[213229]: 2026-02-02 09:54:04.600884752 +0000 UTC m=+0.150766783 container attach 2b118b544bbb4b103b76516dd15c97aa6fcb75bfe751ff82e3f6a5b49b7b3893 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_franklin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb  2 04:54:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:09:54:04] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Feb  2 04:54:04 np0005604790 dreamy_franklin[213247]: {
Feb  2 04:54:04 np0005604790 dreamy_franklin[213247]:    "1": [
Feb  2 04:54:04 np0005604790 dreamy_franklin[213247]:        {
Feb  2 04:54:04 np0005604790 dreamy_franklin[213247]:            "devices": [
Feb  2 04:54:04 np0005604790 dreamy_franklin[213247]:                "/dev/loop3"
Feb  2 04:54:04 np0005604790 dreamy_franklin[213247]:            ],
Feb  2 04:54:04 np0005604790 dreamy_franklin[213247]:            "lv_name": "ceph_lv0",
Feb  2 04:54:04 np0005604790 dreamy_franklin[213247]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 04:54:04 np0005604790 dreamy_franklin[213247]:            "lv_size": "21470642176",
Feb  2 04:54:04 np0005604790 dreamy_franklin[213247]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d241d473-9fcb-5f74-b163-f1ca4454e7f1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fabfc705-a3af-416c-81a4-3fd4d777fb5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 04:54:04 np0005604790 dreamy_franklin[213247]:            "lv_uuid": "lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a",
Feb  2 04:54:04 np0005604790 dreamy_franklin[213247]:            "name": "ceph_lv0",
Feb  2 04:54:04 np0005604790 dreamy_franklin[213247]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 04:54:04 np0005604790 dreamy_franklin[213247]:            "tags": {
Feb  2 04:54:04 np0005604790 dreamy_franklin[213247]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 04:54:04 np0005604790 dreamy_franklin[213247]:                "ceph.block_uuid": "lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a",
Feb  2 04:54:04 np0005604790 dreamy_franklin[213247]:                "ceph.cephx_lockbox_secret": "",
Feb  2 04:54:04 np0005604790 dreamy_franklin[213247]:                "ceph.cluster_fsid": "d241d473-9fcb-5f74-b163-f1ca4454e7f1",
Feb  2 04:54:04 np0005604790 dreamy_franklin[213247]:                "ceph.cluster_name": "ceph",
Feb  2 04:54:04 np0005604790 dreamy_franklin[213247]:                "ceph.crush_device_class": "",
Feb  2 04:54:04 np0005604790 dreamy_franklin[213247]:                "ceph.encrypted": "0",
Feb  2 04:54:04 np0005604790 dreamy_franklin[213247]:                "ceph.osd_fsid": "fabfc705-a3af-416c-81a4-3fd4d777fb5f",
Feb  2 04:54:04 np0005604790 dreamy_franklin[213247]:                "ceph.osd_id": "1",
Feb  2 04:54:04 np0005604790 dreamy_franklin[213247]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 04:54:04 np0005604790 dreamy_franklin[213247]:                "ceph.type": "block",
Feb  2 04:54:04 np0005604790 dreamy_franklin[213247]:                "ceph.vdo": "0",
Feb  2 04:54:04 np0005604790 dreamy_franklin[213247]:                "ceph.with_tpm": "0"
Feb  2 04:54:04 np0005604790 dreamy_franklin[213247]:            },
Feb  2 04:54:04 np0005604790 dreamy_franklin[213247]:            "type": "block",
Feb  2 04:54:04 np0005604790 dreamy_franklin[213247]:            "vg_name": "ceph_vg0"
Feb  2 04:54:04 np0005604790 dreamy_franklin[213247]:        }
Feb  2 04:54:04 np0005604790 dreamy_franklin[213247]:    ]
Feb  2 04:54:04 np0005604790 dreamy_franklin[213247]: }
Feb  2 04:54:04 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:09:54:04] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Feb  2 04:54:04 np0005604790 systemd[1]: libpod-2b118b544bbb4b103b76516dd15c97aa6fcb75bfe751ff82e3f6a5b49b7b3893.scope: Deactivated successfully.
Feb  2 04:54:04 np0005604790 podman[213229]: 2026-02-02 09:54:04.929128602 +0000 UTC m=+0.479010583 container died 2b118b544bbb4b103b76516dd15c97aa6fcb75bfe751ff82e3f6a5b49b7b3893 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_franklin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:54:04 np0005604790 systemd[1]: var-lib-containers-storage-overlay-aaba5cb3b48d5840b641fb37971b2eb611ad2ea1ef0eeda5d2138f9c640f8715-merged.mount: Deactivated successfully.
Feb  2 04:54:04 np0005604790 podman[213229]: 2026-02-02 09:54:04.987627748 +0000 UTC m=+0.537509729 container remove 2b118b544bbb4b103b76516dd15c97aa6fcb75bfe751ff82e3f6a5b49b7b3893 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_franklin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:54:05 np0005604790 systemd[1]: libpod-conmon-2b118b544bbb4b103b76516dd15c97aa6fcb75bfe751ff82e3f6a5b49b7b3893.scope: Deactivated successfully.
Feb  2 04:54:05 np0005604790 python3.9[213378]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770026044.0335095-2304-257447832494594/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:54:05 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:54:05 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:54:05 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:54:05.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:54:05 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:54:05 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:54:05 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:54:05.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:54:05 np0005604790 systemd-coredump[213244]: Process 208392 (ganesha.nfsd) of user 0 dumped core.
                                                       
                                                       Stack trace of thread 41:
                                                       #0  0x00007f795683032e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                       ELF object binary architecture: AMD x86-64
Feb  2 04:54:05 np0005604790 systemd[1]: systemd-coredump@7-213243-0.service: Deactivated successfully.
Feb  2 04:54:05 np0005604790 podman[213588]: 2026-02-02 09:54:05.484589139 +0000 UTC m=+0.038450029 container died a25b00163c945590c94dad2117ef48d185c3990f69c139dcb34cbc78e54e37bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Feb  2 04:54:05 np0005604790 systemd[1]: var-lib-containers-storage-overlay-28c2bc195061a6557dff7bd63b33981cd1e551c6df4f18df3768f381080444b6-merged.mount: Deactivated successfully.
Feb  2 04:54:05 np0005604790 podman[213624]: 2026-02-02 09:54:05.56311162 +0000 UTC m=+0.070270667 container remove a25b00163c945590c94dad2117ef48d185c3990f69c139dcb34cbc78e54e37bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:54:05 np0005604790 systemd[1]: ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1@nfs.cephfs.2.0.compute-0.fdwwab.service: Main process exited, code=exited, status=139/n/a
Feb  2 04:54:05 np0005604790 podman[213647]: 2026-02-02 09:54:05.593136819 +0000 UTC m=+0.064726246 container create 8273e88e162b93aec3bce7607578c29ef7ec43602b4263c19742ff0f81f0445c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_euler, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb  2 04:54:05 np0005604790 systemd[1]: Started libpod-conmon-8273e88e162b93aec3bce7607578c29ef7ec43602b4263c19742ff0f81f0445c.scope.
Feb  2 04:54:05 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:54:05 np0005604790 podman[213647]: 2026-02-02 09:54:05.573698069 +0000 UTC m=+0.045287536 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:54:05 np0005604790 podman[213647]: 2026-02-02 09:54:05.68448638 +0000 UTC m=+0.156075827 container init 8273e88e162b93aec3bce7607578c29ef7ec43602b4263c19742ff0f81f0445c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_euler, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb  2 04:54:05 np0005604790 podman[213647]: 2026-02-02 09:54:05.690452043 +0000 UTC m=+0.162041470 container start 8273e88e162b93aec3bce7607578c29ef7ec43602b4263c19742ff0f81f0445c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Feb  2 04:54:05 np0005604790 podman[213647]: 2026-02-02 09:54:05.693420644 +0000 UTC m=+0.165010071 container attach 8273e88e162b93aec3bce7607578c29ef7ec43602b4263c19742ff0f81f0445c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_euler, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Feb  2 04:54:05 np0005604790 tender_euler[213680]: 167 167
Feb  2 04:54:05 np0005604790 systemd[1]: libpod-8273e88e162b93aec3bce7607578c29ef7ec43602b4263c19742ff0f81f0445c.scope: Deactivated successfully.
Feb  2 04:54:05 np0005604790 podman[213647]: 2026-02-02 09:54:05.697640019 +0000 UTC m=+0.169229466 container died 8273e88e162b93aec3bce7607578c29ef7ec43602b4263c19742ff0f81f0445c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_euler, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb  2 04:54:05 np0005604790 python3.9[213657]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:54:05 np0005604790 systemd[1]: var-lib-containers-storage-overlay-2a7fffd7e5d405d5c82d25de1e0a1086af719554557af72892a9ec63e7274938-merged.mount: Deactivated successfully.
Feb  2 04:54:05 np0005604790 podman[213647]: 2026-02-02 09:54:05.746761898 +0000 UTC m=+0.218351325 container remove 8273e88e162b93aec3bce7607578c29ef7ec43602b4263c19742ff0f81f0445c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_euler, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb  2 04:54:05 np0005604790 systemd[1]: libpod-conmon-8273e88e162b93aec3bce7607578c29ef7ec43602b4263c19742ff0f81f0445c.scope: Deactivated successfully.
Feb  2 04:54:05 np0005604790 systemd[1]: ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1@nfs.cephfs.2.0.compute-0.fdwwab.service: Failed with result 'exit-code'.
Feb  2 04:54:05 np0005604790 systemd[1]: ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1@nfs.cephfs.2.0.compute-0.fdwwab.service: Consumed 1.146s CPU time.
Feb  2 04:54:05 np0005604790 podman[213738]: 2026-02-02 09:54:05.8994045 +0000 UTC m=+0.060571692 container create cab3a8587abbfed5e8118d62ad0b2b19b8da9bb749ba2137f6902a3403919a05 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_chatelet, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 04:54:05 np0005604790 systemd[1]: Started libpod-conmon-cab3a8587abbfed5e8118d62ad0b2b19b8da9bb749ba2137f6902a3403919a05.scope.
Feb  2 04:54:05 np0005604790 podman[213738]: 2026-02-02 09:54:05.877900734 +0000 UTC m=+0.039068006 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:54:05 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:54:05 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12b4064960b0c2f68e91905c9ea637c6d3aeed12ace04e3f218b5693f9765bf7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 04:54:05 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12b4064960b0c2f68e91905c9ea637c6d3aeed12ace04e3f218b5693f9765bf7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:54:05 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12b4064960b0c2f68e91905c9ea637c6d3aeed12ace04e3f218b5693f9765bf7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:54:05 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12b4064960b0c2f68e91905c9ea637c6d3aeed12ace04e3f218b5693f9765bf7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 04:54:06 np0005604790 podman[213738]: 2026-02-02 09:54:06.009519772 +0000 UTC m=+0.170686954 container init cab3a8587abbfed5e8118d62ad0b2b19b8da9bb749ba2137f6902a3403919a05 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_chatelet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Feb  2 04:54:06 np0005604790 podman[213738]: 2026-02-02 09:54:06.021744816 +0000 UTC m=+0.182911998 container start cab3a8587abbfed5e8118d62ad0b2b19b8da9bb749ba2137f6902a3403919a05 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_chatelet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 04:54:06 np0005604790 podman[213738]: 2026-02-02 09:54:06.027338568 +0000 UTC m=+0.188505770 container attach cab3a8587abbfed5e8118d62ad0b2b19b8da9bb749ba2137f6902a3403919a05 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_chatelet, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:54:06 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v432: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 406 B/s rd, 101 B/s wr, 0 op/s
Feb  2 04:54:06 np0005604790 python3.9[213859]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770026045.2444546-2304-220241989289264/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:54:06 np0005604790 lvm[214064]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 04:54:06 np0005604790 lvm[214064]: VG ceph_vg0 finished
Feb  2 04:54:06 np0005604790 determined_chatelet[213802]: {}
Feb  2 04:54:06 np0005604790 systemd[1]: libpod-cab3a8587abbfed5e8118d62ad0b2b19b8da9bb749ba2137f6902a3403919a05.scope: Deactivated successfully.
Feb  2 04:54:06 np0005604790 systemd[1]: libpod-cab3a8587abbfed5e8118d62ad0b2b19b8da9bb749ba2137f6902a3403919a05.scope: Consumed 1.105s CPU time.
Feb  2 04:54:06 np0005604790 podman[213738]: 2026-02-02 09:54:06.756707117 +0000 UTC m=+0.917874299 container died cab3a8587abbfed5e8118d62ad0b2b19b8da9bb749ba2137f6902a3403919a05 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_chatelet, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Feb  2 04:54:06 np0005604790 podman[214051]: 2026-02-02 09:54:06.777781722 +0000 UTC m=+0.107720049 container health_status e39122d9482da8df802204ab6c35fb7c982874580968140e6f06bdfc8eefae36 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb  2 04:54:06 np0005604790 systemd[1]: var-lib-containers-storage-overlay-12b4064960b0c2f68e91905c9ea637c6d3aeed12ace04e3f218b5693f9765bf7-merged.mount: Deactivated successfully.
Feb  2 04:54:06 np0005604790 podman[213738]: 2026-02-02 09:54:06.81402458 +0000 UTC m=+0.975191762 container remove cab3a8587abbfed5e8118d62ad0b2b19b8da9bb749ba2137f6902a3403919a05 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_chatelet, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 04:54:06 np0005604790 systemd[1]: libpod-conmon-cab3a8587abbfed5e8118d62ad0b2b19b8da9bb749ba2137f6902a3403919a05.scope: Deactivated successfully.
Feb  2 04:54:06 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 04:54:06 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:54:06 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 04:54:06 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:54:06 np0005604790 python3.9[214100]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:54:06 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:54:07 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:54:07.031Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:54:07 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:54:07 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000028s ======
Feb  2 04:54:07 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:54:07.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb  2 04:54:07 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:54:07 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000028s ======
Feb  2 04:54:07 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:54:07.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Feb  2 04:54:07 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:54:07 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:54:07 np0005604790 python3.9[214267]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770026046.447351-2304-206696029175212/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:54:08 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v433: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 406 B/s rd, 101 B/s wr, 0 op/s
Feb  2 04:54:08 np0005604790 python3.9[214421]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:54:08 np0005604790 python3.9[214544]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770026047.6200597-2304-198673027332274/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:54:08 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:54:08.930Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:54:09 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:54:09 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:54:09 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:54:09.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:54:09 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:54:09 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:54:09 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:54:09.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:54:09 np0005604790 python3.9[214696]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:54:10 np0005604790 python3.9[214821]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770026048.8897152-2304-209419266453799/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:54:10 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-nfs-cephfs-compute-0-ooxkuo[97796]: [WARNING] 032/095410 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Feb  2 04:54:10 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v434: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 203 B/s rd, 0 op/s
Feb  2 04:54:10 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-nfs-cephfs-compute-0-ooxkuo[97796]: [WARNING] 032/095410 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Feb  2 04:54:10 np0005604790 python3.9[214973]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:54:11 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:54:11 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:54:11 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:54:11.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:54:11 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:54:11 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:54:11 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:54:11.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:54:11 np0005604790 python3.9[215096]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770026050.1977928-2304-112249622416844/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:54:11 np0005604790 python3.9[215249]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:54:11 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:54:12 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v435: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 203 B/s rd, 0 op/s
Feb  2 04:54:12 np0005604790 python3.9[215373]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770026051.4308703-2304-12835287893752/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:54:13 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:54:13 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:54:13 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:54:13.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:54:13 np0005604790 python3.9[215525]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:54:13 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:54:13 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:54:13 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:54:13.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:54:13 np0005604790 python3.9[215648]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770026052.6392384-2304-233306838669064/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:54:14 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v436: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb  2 04:54:14 np0005604790 python3.9[215802]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:54:14 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:09:54:14] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Feb  2 04:54:14 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:09:54:14] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Feb  2 04:54:15 np0005604790 python3.9[215925]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770026053.9275599-2304-90538606625590/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:54:15 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:54:15 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:54:15 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:54:15.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:54:15 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:54:15 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:54:15 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:54:15.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:54:15 np0005604790 python3.9[216077]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:54:15 np0005604790 systemd[1]: ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1@nfs.cephfs.2.0.compute-0.fdwwab.service: Scheduled restart job, restart counter is at 8.
Feb  2 04:54:15 np0005604790 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.fdwwab for d241d473-9fcb-5f74-b163-f1ca4454e7f1.
Feb  2 04:54:15 np0005604790 systemd[1]: ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1@nfs.cephfs.2.0.compute-0.fdwwab.service: Consumed 1.146s CPU time.
Feb  2 04:54:15 np0005604790 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.fdwwab for d241d473-9fcb-5f74-b163-f1ca4454e7f1...
Feb  2 04:54:16 np0005604790 podman[216174]: 2026-02-02 09:54:16.083150365 +0000 UTC m=+0.078674646 container health_status 29bea7eb4976451e3dfdb1e9a3aba30b10e64cbce131bf8d09548b4a224f8ee8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
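The health_status=healthy event above comes from podman running the container's configured healthcheck ('test': '/openstack/healthcheck'). A sketch reading the same status back; the key under .State is "Health" on recent podman releases and "Healthcheck" on older ones, so the sketch tolerates both.

# Sketch: read back the container health podman logs above.
import json
import subprocess

out = subprocess.run(
    ["podman", "inspect", "ovn_metadata_agent"],
    check=True, capture_output=True, text=True,
).stdout
state = json.loads(out)[0]["State"]
health = state.get("Health") or state.get("Healthcheck") or {}
print(health.get("Status"))   # expect "healthy"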
Feb  2 04:54:16 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v437: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb  2 04:54:16 np0005604790 podman[216288]: 2026-02-02 09:54:16.192922509 +0000 UTC m=+0.037585676 container create a24acc6342d6bc9693b214763120368cdc4d2e420b5ed711c3c58144b8e370f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Feb  2 04:54:16 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb047f23eb9914ed5ff81475fe4db45e835fe5544f8f613533e418c3cccacec4/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Feb  2 04:54:16 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb047f23eb9914ed5ff81475fe4db45e835fe5544f8f613533e418c3cccacec4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:54:16 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb047f23eb9914ed5ff81475fe4db45e835fe5544f8f613533e418c3cccacec4/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 04:54:16 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb047f23eb9914ed5ff81475fe4db45e835fe5544f8f613533e418c3cccacec4/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.fdwwab-rgw/keyring supports timestamps until 2038 (0x7fffffff)
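The kernel's "supports timestamps until 2038 (0x7fffffff)" warnings refer to the 32-bit time_t horizon: 0x7fffffff seconds after the Unix epoch. A quick check of that arithmetic:

# 0x7fffffff is the classic signed 32-bit time_t limit.
from datetime import datetime, timezone

print(hex(2**31 - 1))                                    # 0x7fffffff
print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
# 2038-01-19 03:14:07+00:00 -- hence "timestamps until 2038"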
Feb  2 04:54:16 np0005604790 podman[216288]: 2026-02-02 09:54:16.258644251 +0000 UTC m=+0.103307438 container init a24acc6342d6bc9693b214763120368cdc4d2e420b5ed711c3c58144b8e370f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb  2 04:54:16 np0005604790 podman[216288]: 2026-02-02 09:54:16.262975909 +0000 UTC m=+0.107639076 container start a24acc6342d6bc9693b214763120368cdc4d2e420b5ed711c3c58144b8e370f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Feb  2 04:54:16 np0005604790 bash[216288]: a24acc6342d6bc9693b214763120368cdc4d2e420b5ed711c3c58144b8e370f1
Feb  2 04:54:16 np0005604790 podman[216288]: 2026-02-02 09:54:16.175276198 +0000 UTC m=+0.019939465 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:54:16 np0005604790 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.fdwwab for d241d473-9fcb-5f74-b163-f1ca4454e7f1.
Feb  2 04:54:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:54:16 : epoch 69807448 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Feb  2 04:54:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:54:16 : epoch 69807448 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Feb  2 04:54:16 np0005604790 python3.9[216276]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770026055.2088025-2304-33755242676434/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:54:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:54:16 : epoch 69807448 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Feb  2 04:54:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:54:16 : epoch 69807448 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Feb  2 04:54:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:54:16 : epoch 69807448 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Feb  2 04:54:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:54:16 : epoch 69807448 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Feb  2 04:54:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:54:16 : epoch 69807448 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Feb  2 04:54:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:54:16 : epoch 69807448 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 04:54:16 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:54:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:54:17.033Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:54:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] Optimize plan auto_2026-02-02_09:54:17
Feb  2 04:54:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 04:54:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] do_upmap
Feb  2 04:54:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.data', 'default.rgw.log', '.rgw.root', 'images', '.nfs', 'vms', 'backups', 'default.rgw.control', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.meta']
Feb  2 04:54:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 04:54:17 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:54:17 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:54:17 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:54:17.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:54:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 04:54:17 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
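The audit line shows the mgr dispatching {"prefix": "osd blocklist ls", "format": "json"} as a mon command. The same JSON can be sent from the rados Python binding; a sketch assuming a readable /etc/ceph/ceph.conf and admin keyring on the node.

# Sketch: issue the same mon command the mgr dispatches above.
import json
import rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
cmd = json.dumps({"prefix": "osd blocklist ls", "format": "json"})
ret, outbuf, outs = cluster.mon_command(cmd, b"")
print(ret, outbuf.decode() or outs)
cluster.shutdown()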
Feb  2 04:54:17 np0005604790 python3.9[216494]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:54:17 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:54:17 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:54:17 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:54:17.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:54:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:54:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:54:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:54:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:54:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:54:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:54:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 04:54:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 04:54:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 04:54:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 04:54:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 04:54:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 04:54:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:54:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 04:54:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:54:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 04:54:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:54:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 04:54:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:54:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 04:54:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:54:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 04:54:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:54:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Feb  2 04:54:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:54:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 04:54:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:54:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 04:54:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:54:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Feb  2 04:54:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:54:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 04:54:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:54:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 04:54:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:54:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
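Each autoscaler line multiplies the pool's capacity ratio by its bias and the cluster's PG budget. A budget of ~300 PGs reproduces every "pg target" logged above (plausibly 3 OSDs at mon_target_pg_per_osd=100; that split is an assumption, the product is not):

# Reproduce the autoscaler's "pg target" from its own logged inputs:
# usage_ratio * bias * pg_budget, with pg_budget ~= 300 (assumed split:
# 3 OSDs * mon_target_pg_per_osd=100).
PG_BUDGET = 300

for pool, ratio, bias in [
    (".mgr", 7.185749983720779e-06, 1.0),
    ("cephfs.cephfs.meta", 5.087256625643029e-07, 4.0),
    ("default.rgw.log", 2.1620840658982875e-06, 1.0),
]:
    print(pool, ratio * bias * PG_BUDGET)
# .mgr               -> 0.0021557249951162337  (matches the log)
# cephfs.cephfs.meta -> 0.0006104707950771635  (matches the log)
# default.rgw.log    -> 0.0006486252197694863  (matches the log)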
Feb  2 04:54:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 04:54:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 04:54:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 04:54:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 04:54:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 04:54:18 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v438: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb  2 04:54:18 np0005604790 python3.9[216651]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Feb  2 04:54:18 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:54:18.931Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb  2 04:54:18 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:54:18.933Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
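Alertmanager keeps failing to deliver to the dashboard webhook receivers on compute-1/compute-2. A sketch probing one receiver URL (taken verbatim from the error) with a short timeout, to distinguish a refused connection from the i/o timeout seen above; the empty payload is a placeholder, not the real alert body.

# Sketch: probe the webhook endpoint alertmanager cannot reach above.
import json
import urllib.error
import urllib.request

url = "http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver"
req = urllib.request.Request(
    url, data=json.dumps({"alerts": []}).encode(),
    headers={"Content-Type": "application/json"},
)
try:
    with urllib.request.urlopen(req, timeout=5) as resp:
        print(resp.status)
except urllib.error.URLError as exc:   # refused vs. timeout shows in .reason
    print("unreachable:", exc.reason)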
Feb  2 04:54:19 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:54:19 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:54:19 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:54:19.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:54:19 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:54:19 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:54:19 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:54:19.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:54:20 np0005604790 dbus-broker-launch[780]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Feb  2 04:54:20 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v439: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
Feb  2 04:54:20 np0005604790 python3.9[216809]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:54:21 np0005604790 python3.9[216961]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:54:21 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:54:21 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:54:21 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:54:21.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:54:21 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:54:21 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:54:21 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:54:21.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:54:21 np0005604790 python3.9[217113]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:54:21 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:54:22 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v440: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 511 B/s wr, 1 op/s
Feb  2 04:54:22 np0005604790 python3.9[217267]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:54:22 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:54:22 : epoch 69807448 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 04:54:22 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:54:22 : epoch 69807448 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 04:54:23 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:54:23 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:54:23 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:54:23.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:54:23 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:54:23 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:54:23 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:54:23.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:54:23 np0005604790 python3.9[217419]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:54:24 np0005604790 python3.9[217573]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:54:24 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v441: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 938 B/s wr, 3 op/s
Feb  2 04:54:24 np0005604790 python3.9[217725]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:54:24 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:09:54:24] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Feb  2 04:54:24 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:09:54:24] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Feb  2 04:54:25 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:54:25 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:54:25 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:54:25.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:54:25 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:54:25 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:54:25 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:54:25.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:54:25 np0005604790 python3.9[217877]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:54:26 np0005604790 python3.9[218031]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:54:26 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v442: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 938 B/s wr, 3 op/s
Feb  2 04:54:26 np0005604790 python3.9[218183]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
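The copies above install server/client certs and keys under /etc/pki/libvirt and /etc/pki/qemu, plus the CA at /etc/pki/CA/cacert.pem. A sketch checking that each installed cert chains to that CA; all paths come from the log, the verification step itself is an added suggestion.

# Sketch: verify the freshly copied libvirt/qemu certs against the CA the
# play installed at /etc/pki/CA/cacert.pem (paths from the log above).
import subprocess

for cert in (
    "/etc/pki/libvirt/servercert.pem",
    "/etc/pki/libvirt/clientcert.pem",
    "/etc/pki/qemu/server-cert.pem",
    "/etc/pki/qemu/client-cert.pem",
):
    subprocess.run(
        ["openssl", "verify", "-CAfile", "/etc/pki/CA/cacert.pem", cert],
        check=True,
    )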
Feb  2 04:54:26 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:54:27 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:54:27.035Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:54:27 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:54:27 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:54:27 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:54:27.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:54:27 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:54:27 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:54:27 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:54:27.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:54:27 np0005604790 python3.9[218335]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb  2 04:54:27 np0005604790 systemd[1]: Reloading.
Feb  2 04:54:27 np0005604790 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 04:54:27 np0005604790 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 04:54:28 np0005604790 systemd[1]: Starting libvirt logging daemon socket...
Feb  2 04:54:28 np0005604790 systemd[1]: Listening on libvirt logging daemon socket.
Feb  2 04:54:28 np0005604790 systemd[1]: Starting libvirt logging daemon admin socket...
Feb  2 04:54:28 np0005604790 systemd[1]: Listening on libvirt logging daemon admin socket.
Feb  2 04:54:28 np0005604790 systemd[1]: Starting libvirt logging daemon...
Feb  2 04:54:28 np0005604790 systemd[1]: Started libvirt logging daemon.
Feb  2 04:54:28 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v443: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 938 B/s wr, 3 op/s
Feb  2 04:54:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:54:28 : epoch 69807448 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Feb  2 04:54:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:54:28 : epoch 69807448 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Feb  2 04:54:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:54:28 : epoch 69807448 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Feb  2 04:54:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:54:28 : epoch 69807448 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Feb  2 04:54:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:54:28 : epoch 69807448 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Feb  2 04:54:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:54:28 : epoch 69807448 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Feb  2 04:54:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:54:28 : epoch 69807448 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Feb  2 04:54:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:54:28 : epoch 69807448 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Feb  2 04:54:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:54:28 : epoch 69807448 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Feb  2 04:54:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:54:28 : epoch 69807448 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Feb  2 04:54:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:54:28 : epoch 69807448 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Feb  2 04:54:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:54:28 : epoch 69807448 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Feb  2 04:54:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:54:28 : epoch 69807448 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Feb  2 04:54:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:54:28 : epoch 69807448 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Feb  2 04:54:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:54:28 : epoch 69807448 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Feb  2 04:54:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:54:28 : epoch 69807448 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Feb  2 04:54:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:54:28 : epoch 69807448 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Feb  2 04:54:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:54:28 : epoch 69807448 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Feb  2 04:54:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:54:28 : epoch 69807448 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Feb  2 04:54:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:54:28 : epoch 69807448 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Feb  2 04:54:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:54:28 : epoch 69807448 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Feb  2 04:54:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:54:28 : epoch 69807448 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Feb  2 04:54:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:54:28 : epoch 69807448 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Feb  2 04:54:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:54:28 : epoch 69807448 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Feb  2 04:54:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:54:28 : epoch 69807448 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Feb  2 04:54:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:54:28 : epoch 69807448 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Feb  2 04:54:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:54:28 : epoch 69807448 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Feb  2 04:54:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:54:28 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc328000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:54:28 np0005604790 python3.9[218546]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb  2 04:54:28 np0005604790 systemd[1]: Reloading.
Feb  2 04:54:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:54:28.934Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:54:28 np0005604790 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 04:54:28 np0005604790 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 04:54:29 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:54:29 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:54:29 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:54:29.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:54:29 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:54:29 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc3140016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:54:29 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:54:29 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc304000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:54:29 np0005604790 systemd[1]: Starting libvirt nodedev daemon socket...
Feb  2 04:54:29 np0005604790 systemd[1]: Listening on libvirt nodedev daemon socket.
Feb  2 04:54:29 np0005604790 systemd[1]: Starting libvirt nodedev daemon admin socket...
Feb  2 04:54:29 np0005604790 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Feb  2 04:54:29 np0005604790 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Feb  2 04:54:29 np0005604790 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Feb  2 04:54:29 np0005604790 systemd[1]: Starting libvirt nodedev daemon...
Feb  2 04:54:29 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:54:29 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:54:29 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:54:29.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:54:29 np0005604790 systemd[1]: Started libvirt nodedev daemon.
Feb  2 04:54:30 np0005604790 python3.9[218763]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb  2 04:54:30 np0005604790 systemd[1]: Reloading.
Feb  2 04:54:30 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-nfs-cephfs-compute-0-ooxkuo[97796]: [WARNING] 032/095430 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Feb  2 04:54:30 np0005604790 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 04:54:30 np0005604790 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 04:54:30 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v444: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Feb  2 04:54:30 np0005604790 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Feb  2 04:54:30 np0005604790 systemd[1]: Starting libvirt proxy daemon admin socket...
Feb  2 04:54:30 np0005604790 systemd[1]: Starting libvirt proxy daemon read-only socket...
Feb  2 04:54:30 np0005604790 systemd[1]: Listening on libvirt proxy daemon admin socket.
Feb  2 04:54:30 np0005604790 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Feb  2 04:54:30 np0005604790 systemd[1]: Starting libvirt proxy daemon...
Feb  2 04:54:30 np0005604790 systemd[1]: Started libvirt proxy daemon.
Feb  2 04:54:30 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-nfs-cephfs-compute-0-ooxkuo[97796]: [WARNING] 032/095430 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Feb  2 04:54:30 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:54:30 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc2fc000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:54:30 np0005604790 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Feb  2 04:54:30 np0005604790 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Feb  2 04:54:30 np0005604790 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Feb  2 04:54:31 np0005604790 python3.9[218982]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb  2 04:54:31 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:54:31 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:54:31 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:54:31.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:54:31 np0005604790 systemd[1]: Reloading.
Feb  2 04:54:31 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:54:31 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc308000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:54:31 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:54:31 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc3140016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:54:31 np0005604790 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 04:54:31 np0005604790 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 04:54:31 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:54:31 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:54:31 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:54:31.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:54:31 np0005604790 systemd[1]: Listening on libvirt locking daemon socket.
Feb  2 04:54:31 np0005604790 systemd[1]: Starting libvirt QEMU daemon socket...
Feb  2 04:54:31 np0005604790 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb  2 04:54:31 np0005604790 systemd[1]: Starting Virtual Machine and Container Registration Service...
Feb  2 04:54:31 np0005604790 systemd[1]: Listening on libvirt QEMU daemon socket.
Feb  2 04:54:31 np0005604790 systemd[1]: Starting libvirt QEMU daemon admin socket...
Feb  2 04:54:31 np0005604790 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Feb  2 04:54:31 np0005604790 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Feb  2 04:54:31 np0005604790 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Feb  2 04:54:31 np0005604790 systemd[1]: Started Virtual Machine and Container Registration Service.
Feb  2 04:54:31 np0005604790 systemd[1]: Starting libvirt QEMU daemon...
Feb  2 04:54:31 np0005604790 systemd[1]: Started libvirt QEMU daemon.
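With the QEMU daemon sockets listening and virtqemud started, a client connection exercises the socket-activated path just set up. A minimal sketch with the libvirt Python binding (assumes the libvirt-python package is installed on the node):

# Sketch: confirm the socket-activated virtqemud answers.
import libvirt

conn = libvirt.open("qemu:///system")   # connects via the unix socket above
print(conn.getHostname(), conn.getLibVersion())
conn.close()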
Feb  2 04:54:31 np0005604790 setroubleshoot[218801]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 12db1193-7d75-4e25-8514-67b463eddc2a
Feb  2 04:54:31 np0005604790 setroubleshoot[218801]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
    *****  Plugin dac_override (91.4 confidence) suggests   **********************
    If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
    Then turn on full auditing to get path information about the offending file and generate the error again.
    Do
    Turn on full auditing
    # auditctl -w /etc/shadow -p w
    Try to recreate AVC. Then execute
    # ausearch -m avc -ts recent
    If you see PATH record check ownership/permissions on file, and fix it,
    otherwise report as a bugzilla.
    *****  Plugin catchall (9.59 confidence) suggests   **************************
    If you believe that virtlogd should have the dac_read_search capability by default.
    Then you should report this as a bug.
    You can generate a local policy module to allow this access.
    Do
    allow this access for now by executing:
    # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
    # semodule -X 300 -i my-virtlogd.pp
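setroubleshoot's catchall suggestion above generates a local policy module from the recorded AVCs. A sketch running exactly that remediation pipeline; the two commands are verbatim from the log, and it needs root plus the policycoreutils tool set.

# Sketch: run the remediation setroubleshoot prints above
# (ausearch | audit2allow, then semodule). Commands verbatim from the log.
import subprocess

subprocess.run(
    "ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd",
    shell=True, check=True,
)
subprocess.run(["semodule", "-X", "300", "-i", "my-virtlogd.pp"], check=True)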
Feb  2 04:54:31 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:54:32 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v445: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 597 B/s wr, 2 op/s
Feb  2 04:54:32 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 04:54:32 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 04:54:32 np0005604790 python3.9[219202]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
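[Editor's note: the ansible systemd task above (daemon_reload=True, state=restarted) is equivalent to the following two commands; the "Reloading." entry that follows is the daemon-reload half:

    systemctl daemon-reload
    systemctl restart virtsecretd.service
]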
Feb  2 04:54:32 np0005604790 systemd[1]: Reloading.
Feb  2 04:54:32 np0005604790 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 04:54:32 np0005604790 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 04:54:32 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:54:32 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc308000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:54:32 np0005604790 systemd[1]: Starting libvirt secret daemon socket...
Feb  2 04:54:32 np0005604790 systemd[1]: Listening on libvirt secret daemon socket.
Feb  2 04:54:32 np0005604790 systemd[1]: Starting libvirt secret daemon admin socket...
Feb  2 04:54:32 np0005604790 systemd[1]: Starting libvirt secret daemon read-only socket...
Feb  2 04:54:32 np0005604790 systemd[1]: Listening on libvirt secret daemon admin socket.
Feb  2 04:54:32 np0005604790 systemd[1]: Listening on libvirt secret daemon read-only socket.
Feb  2 04:54:32 np0005604790 systemd[1]: Starting libvirt secret daemon...
Feb  2 04:54:32 np0005604790 systemd[1]: Started libvirt secret daemon.
Feb  2 04:54:33 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:54:33 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:54:33 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:54:33.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:54:33 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:54:33 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc3040016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:54:33 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:54:33 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc2fc0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:54:33 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:54:33 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:54:33 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:54:33.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:54:33 np0005604790 python3.9[219414]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:54:34 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v446: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 597 B/s wr, 2 op/s
Feb  2 04:54:34 np0005604790 python3.9[219568]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Feb  2 04:54:34 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:54:34 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc3140016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:54:34 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:09:54:34] "GET /metrics HTTP/1.1" 200 48340 "" "Prometheus/2.51.0"
Feb  2 04:54:34 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:09:54:34] "GET /metrics HTTP/1.1" 200 48340 "" "Prometheus/2.51.0"
Feb  2 04:54:35 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:54:35 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:54:35 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:54:35.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:54:35 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:54:35 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc2fc0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:54:35 np0005604790 python3.9[219720]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;#012echo ceph#012awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
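[Editor's note: the command task above recovers the cluster name and fsid from the freshly written ceph.conf. Unfolding the #012-escaped _raw_params gives this small pipeline (xargs with no arguments simply trims surrounding whitespace):

    set -o pipefail
    echo ceph   # cluster name, hard-coded
    # Print the value of the fsid= line from ceph.conf, whitespace-stripped.
    awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs
]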
Feb  2 04:54:35 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:54:35 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc308001f60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:54:35 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:54:35 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:54:35 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:54:35.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:54:35 np0005604790 python3.9[219875]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Feb  2 04:54:36 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v447: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 170 B/s wr, 1 op/s
Feb  2 04:54:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:54:36 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc304001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:54:36 np0005604790 python3.9[220051]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:54:36 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:54:37 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:54:37.036Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:54:37 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:54:37 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:54:37 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:54:37.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:54:37 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:54:37 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc3140016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:54:37 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:54:37 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc2fc0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:54:37 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:54:37 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:54:37 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:54:37.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:54:37 np0005604790 podman[220146]: 2026-02-02 09:54:37.432441324 +0000 UTC m=+0.140677871 container health_status e39122d9482da8df802204ab6c35fb7c982874580968140e6f06bdfc8eefae36 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb  2 04:54:37 np0005604790 python3.9[220184]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1770026076.4066563-3378-131614965622694/.source.xml follow=False _original_basename=secret.xml.j2 checksum=19e72152fe151d80bf9ff9b6a78f27bac75d38a6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:54:38 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v448: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 170 B/s wr, 1 op/s
Feb  2 04:54:38 np0005604790 python3.9[220352]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine d241d473-9fcb-5f74-b163-f1ca4454e7f1#012virsh secret-define --file /tmp/secret.xml#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
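[Editor's note: this task rotates the libvirt Ceph secret: it drops any existing definition keyed by the cluster fsid and re-defines it from the /tmp/secret.xml templated at 04:54:37. Unfolded, the shell step is:

    # Remove the stale secret (UUID == the Ceph cluster fsid), then
    # re-register it from the freshly templated XML.
    virsh secret-undefine d241d473-9fcb-5f74-b163-f1ca4454e7f1
    virsh secret-define --file /tmp/secret.xml

The contents of secret.xml are not logged (content=NOT_LOGGING_PARAMETER); for a Ceph client secret it would typically carry that UUID plus a usage block of type 'ceph', but that is an assumption, not something recorded here. The temp file is removed again at 04:54:39.]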
Feb  2 04:54:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:54:38 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc308001f60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:54:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:54:38.936Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:54:39 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:54:39 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:54:39 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:54:39.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:54:39 np0005604790 python3.9[220514]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:54:39 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:54:39 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc304001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:54:39 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:54:39 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc3140016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:54:39 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:54:39 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:54:39 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:54:39.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:54:40 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v449: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 170 B/s wr, 1 op/s
Feb  2 04:54:40 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:54:40 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc2fc002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:54:41 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:54:41 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:54:41 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:54:41.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:54:41 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:54:41 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc308001f60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:54:41 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:54:41 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc304001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:54:41 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:54:41 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:54:41 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:54:41.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 04:54:41 np0005604790 python3.9[220979]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:54:41 np0005604790 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Feb  2 04:54:41 np0005604790 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Consumed 1.026s CPU time.
Feb  2 04:54:41 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:54:42 np0005604790 systemd[1]: setroubleshootd.service: Deactivated successfully.
Feb  2 04:54:42 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v450: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Feb  2 04:54:42 np0005604790 python3.9[221133]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:54:42 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:54:42 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc3140016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:54:42 np0005604790 python3.9[221256]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1770026081.856479-3543-266070258353150/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:54:43 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:54:43 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:54:43 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:54:43.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 04:54:43 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:54:43 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc2fc002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:54:43 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:54:43 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc308003340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:54:43 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:54:43 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:54:43 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:54:43.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:54:43 np0005604790 python3.9[221409]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:54:44 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v451: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Feb  2 04:54:44 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:54:44 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc3040032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:54:44 np0005604790 python3.9[221562]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:54:44 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:09:54:44] "GET /metrics HTTP/1.1" 200 48340 "" "Prometheus/2.51.0"
Feb  2 04:54:44 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:09:54:44] "GET /metrics HTTP/1.1" 200 48340 "" "Prometheus/2.51.0"
Feb  2 04:54:45 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:54:45 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:54:45 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:54:45.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 04:54:45 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:54:45 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc3140016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:54:45 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:54:45 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc2fc002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:54:45 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:54:45 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:54:45 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:54:45.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 04:54:45 np0005604790 python3.9[221640]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:54:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:54:45.365 165364 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 04:54:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:54:45.366 165364 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 04:54:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:54:45.367 165364 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 04:54:45 np0005604790 python3.9[221793]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:54:46 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v452: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:54:46 np0005604790 podman[221844]: 2026-02-02 09:54:46.363822462 +0000 UTC m=+0.083505217 container health_status 29bea7eb4976451e3dfdb1e9a3aba30b10e64cbce131bf8d09548b4a224f8ee8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb  2 04:54:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:54:46 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc308003340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:54:46 np0005604790 python3.9[221889]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.tz_zihu9 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:54:46 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:54:47 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:54:47.037Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb  2 04:54:47 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:54:47.037Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb  2 04:54:47 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:54:47.038Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:54:47 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:54:47 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:54:47 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:54:47.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:54:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 04:54:47 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 04:54:47 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:54:47 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc3040032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:54:47 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:54:47 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc3140016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:54:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:54:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:54:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:54:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:54:47 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:54:47 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:54:47 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:54:47.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:54:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:54:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:54:47 np0005604790 python3.9[222043]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:54:47 np0005604790 python3.9[222122]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:54:48 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v453: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:54:48 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:54:48 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc2fc003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:54:48 np0005604790 python3.9[222275]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:54:48 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:54:48.937Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:54:49 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:54:49 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:54:49 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:54:49.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:54:49 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:54:49 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc308004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:54:49 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:54:49 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc304004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:54:49 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:54:49 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:54:49 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:54:49.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:54:49 np0005604790 python3[222428]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Feb  2 04:54:50 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v454: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Feb  2 04:54:50 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:54:50 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc3140016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:54:50 np0005604790 python3.9[222582]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:54:51 np0005604790 python3.9[222660]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:54:51 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:54:51 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:54:51 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:54:51.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:54:51 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:54:51 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc2fc003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:54:51 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:54:51 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc308004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:54:51 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:54:51 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:54:51 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:54:51.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 04:54:51 np0005604790 python3.9[222813]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:54:51 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:54:52 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v455: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:54:52 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:54:52 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc304004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:54:52 np0005604790 python3.9[222939]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770026091.3050826-3810-247750226783261/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:54:53 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:54:53 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:54:53 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:54:53.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 04:54:53 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:54:53 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc3140016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:54:53 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:54:53 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc2fc003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:54:53 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:54:53 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:54:53 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:54:53.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:54:53 np0005604790 python3.9[223091]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:54:53 np0005604790 python3.9[223170]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:54:54 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-nfs-cephfs-compute-0-ooxkuo[97796]: [WARNING] 032/095454 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Feb  2 04:54:54 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v456: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Feb  2 04:54:54 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:54:54 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc308004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:54:54 np0005604790 python3.9[223323]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:54:54 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:09:54:54] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Feb  2 04:54:54 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:09:54:54] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Feb  2 04:54:55 np0005604790 python3.9[223401]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:54:55 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:54:55 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:54:55 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:54:55.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 04:54:55 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:54:55 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc304004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:54:55 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:54:55 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc314003780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:54:55 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:54:55 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:54:55 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:54:55.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 04:54:55 np0005604790 python3.9[223554]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:54:56 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v457: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:54:56 np0005604790 python3.9[223705]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1770026095.3037217-3927-169163496145700/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:54:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:54:56 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc328000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:54:56 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:54:57 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:54:57.038Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:54:57 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:54:57 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:54:57 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:54:57.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:54:57 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:54:57 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc308004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:54:57 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:54:57 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc304004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:54:57 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:54:57 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:54:57 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:54:57.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:54:57 np0005604790 python3.9[223857]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:54:58 np0005604790 python3.9[224011]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:54:58 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v458: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:54:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:54:58 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc328000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:54:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:54:58.939Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:54:59 np0005604790 python3.9[224166]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
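[Editor's note: from the blockinfile arguments logged above (marker "# {mark} ANSIBLE MANAGED BLOCK", marker_begin=BEGIN, marker_end=END, validated via "nft -c -f %s"), the block written into /etc/sysconfig/nftables.conf should read:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK

This makes the EDPM ruleset persistent across reboots via the stock nftables.service.]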
Feb  2 04:54:59 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:54:59 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:54:59 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:54:59.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:54:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:54:59 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc314003780 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:54:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:54:59 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc308004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:54:59 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:54:59 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:54:59 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:54:59.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 04:54:59 np0005604790 python3.9[224319]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:55:00 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v459: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Feb  2 04:55:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:55:00 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc304004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:55:00 np0005604790 python3.9[224474]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 04:55:01 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:55:01 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.041001081s ======
Feb  2 04:55:01 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:55:01.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.041001081s
Feb  2 04:55:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:55:01 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc3280089d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:55:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:55:01 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc2f4000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:55:01 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:55:01 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:55:01 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:55:01.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:55:01 np0005604790 python3.9[224628]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
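Together with the earlier tasks (the blockinfile that adds the four EDPM includes to /etc/sysconfig/nftables.conf, and the `nft -f /etc/nftables/edpm-chains.nft` command), the pipeline above is the EDPM firewall reload: chains are defined first, then the flush/rules/jump files are concatenated and fed to a single `nft -f -`, which nft applies as one transaction so the ruleset never sits half-updated. A sketch of the same sequence in Python; paths are the ones in the log, the extra `-c` dry-run pass is my addition mirroring the validate step of the blockinfile task, and running this requires root and the nft binary.

    # Sketch of the nftables reload performed by the two command tasks above.
    import subprocess

    subprocess.run(["nft", "-f", "/etc/nftables/edpm-chains.nft"], check=True)

    ruleset = b""
    for path in ("/etc/nftables/edpm-flushes.nft",
                 "/etc/nftables/edpm-rules.nft",
                 "/etc/nftables/edpm-update-jumps.nft"):
        with open(path, "rb") as f:
            ruleset += f.read()

    # -c validates without committing; the second run applies atomically.
    subprocess.run(["nft", "-c", "-f", "-"], input=ruleset, check=True)
    subprocess.run(["nft", "-f", "-"], input=ruleset, check=True)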
Feb  2 04:55:01 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:55:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:55:02 : epoch 69807448 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 04:55:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 04:55:02 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 04:55:02 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v460: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb  2 04:55:02 np0005604790 python3.9[224785]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:55:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:55:02 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc314003780 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:55:03 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:55:03 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:55:03 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:55:03.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 04:55:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:55:03 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc304004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:55:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:55:03 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc3280089d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:55:03 np0005604790 python3.9[224937]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:55:03 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:55:03 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:55:03 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:55:03.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:55:03 np0005604790 python3.9[225061]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770026102.7121208-4143-186899840386018/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:55:04 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v461: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 597 B/s wr, 2 op/s
Feb  2 04:55:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:55:04 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc2f40016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:55:04 np0005604790 python3.9[225214]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:55:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:09:55:04] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Feb  2 04:55:04 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:09:55:04] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
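The two lines above record the same event twice: the mgr container's stdout and the mgr's own cherrypy access log both show Prometheus scraping the prometheus-module /metrics endpoint. A sketch of fetching the same endpoint by hand; port 9283 is the mgr prometheus module's default and is an assumption here, since the log does not show the port.

    # Sketch: fetch the /metrics endpoint Prometheus is scraping above.
    # Port 9283 (mgr prometheus module default) is an assumption.
    import urllib.request

    with urllib.request.urlopen("http://192.168.122.100:9283/metrics",
                                timeout=5) as resp:
        body = resp.read().decode()
    print(body.splitlines()[0])  # first metric family, e.g. "# HELP ..."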
Feb  2 04:55:05 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:55:05 : epoch 69807448 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 04:55:05 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:55:05 : epoch 69807448 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 04:55:05 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:55:05 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:55:05 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:55:05.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 04:55:05 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:55:05 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc314003780 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:55:05 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:55:05 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc304004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:55:05 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:55:05 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:55:05 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:55:05.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 04:55:05 np0005604790 python3.9[225337]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770026104.0605094-4188-65792271377494/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:55:06 np0005604790 python3.9[225491]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:55:06 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v462: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 596 B/s wr, 2 op/s
Feb  2 04:55:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:55:06 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc3280096e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:55:06 np0005604790 python3.9[225614]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770026105.5692391-4233-274708970324710/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:55:06 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:55:07 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:55:07.039Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:55:07 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:55:07 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:55:07 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:55:07.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 04:55:07 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:55:07 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc2f40016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:55:07 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:55:07 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc314003780 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:55:07 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:55:07 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:55:07 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:55:07.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 04:55:07 np0005604790 python3.9[225799]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
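The systemd module invocation above (daemon_reload=True, enabled=True, state=restarted) is what produces the "Reloading." and "Reached target edpm_libvirt.target." lines that follow. Expressed as the systemctl calls the module wraps, for manual reproduction (requires root):

    # Sketch: the imperative equivalent of the ansible.builtin.systemd
    # task above for edpm_libvirt.target.
    import subprocess

    for cmd in (["systemctl", "daemon-reload"],
                ["systemctl", "enable", "edpm_libvirt.target"],
                ["systemctl", "restart", "edpm_libvirt.target"]):
        subprocess.run(cmd, check=True)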
Feb  2 04:55:07 np0005604790 systemd[1]: Reloading.
Feb  2 04:55:07 np0005604790 podman[225846]: 2026-02-02 09:55:07.731422182 +0000 UTC m=+0.107870758 container health_status e39122d9482da8df802204ab6c35fb7c982874580968140e6f06bdfc8eefae36 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Feb  2 04:55:07 np0005604790 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 04:55:07 np0005604790 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 04:55:08 np0005604790 systemd[1]: Reached target edpm_libvirt.target.
Feb  2 04:55:08 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:55:08 : epoch 69807448 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Feb  2 04:55:08 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v463: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 597 B/s wr, 2 op/s
Feb  2 04:55:08 np0005604790 podman[225960]: 2026-02-02 09:55:08.220844092 +0000 UTC m=+0.096111129 container exec 79ef7165b184aa21ab9e464efe33891b6304e1ba848414549f299d1b301d6783 (image=quay.io/ceph/ceph:v19, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Feb  2 04:55:08 np0005604790 podman[225960]: 2026-02-02 09:55:08.35806997 +0000 UTC m=+0.233336987 container exec_died 79ef7165b184aa21ab9e464efe33891b6304e1ba848414549f299d1b301d6783 (image=quay.io/ceph/ceph:v19, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mon-compute-0, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 04:55:08 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:55:08 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc304004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:55:08 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:55:08.940Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:55:09 np0005604790 python3.9[226194]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Feb  2 04:55:09 np0005604790 systemd[1]: Reloading.
Feb  2 04:55:09 np0005604790 podman[226242]: 2026-02-02 09:55:09.01824478 +0000 UTC m=+0.072112377 container exec 19feecaa7fcd517b2bfb973fc8fcf623ad4b0956f16c53292d24fe12f4780190 (image=quay.io/ceph/haproxy:2.3, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-rgw-default-compute-0-avekxu)
Feb  2 04:55:09 np0005604790 podman[226242]: 2026-02-02 09:55:09.054802122 +0000 UTC m=+0.108669689 container exec_died 19feecaa7fcd517b2bfb973fc8fcf623ad4b0956f16c53292d24fe12f4780190 (image=quay.io/ceph/haproxy:2.3, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-rgw-default-compute-0-avekxu)
Feb  2 04:55:09 np0005604790 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 04:55:09 np0005604790 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 04:55:09 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:55:09 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:55:09 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:55:09.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 04:55:09 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:55:09 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc3280096e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:55:09 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:55:09 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc2f40016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:55:09 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:55:09 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:55:09 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:55:09.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:55:09 np0005604790 systemd[1]: Reloading.
Feb  2 04:55:09 np0005604790 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 04:55:09 np0005604790 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 04:55:09 np0005604790 podman[226345]: 2026-02-02 09:55:09.601891778 +0000 UTC m=+0.079067040 container exec 690ade5beb0aa03a99cf3b4b1da5291c57988034f4a9506f786da5a6fb824998 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 04:55:09 np0005604790 podman[226345]: 2026-02-02 09:55:09.615763503 +0000 UTC m=+0.092938725 container exec_died 690ade5beb0aa03a99cf3b4b1da5291c57988034f4a9506f786da5a6fb824998 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 04:55:09 np0005604790 podman[226479]: 2026-02-02 09:55:09.974053905 +0000 UTC m=+0.065593726 container exec a24acc6342d6bc9693b214763120368cdc4d2e420b5ed711c3c58144b8e370f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:55:10 np0005604790 podman[226479]: 2026-02-02 09:55:10.010337349 +0000 UTC m=+0.101877170 container exec_died a24acc6342d6bc9693b214763120368cdc4d2e420b5ed711c3c58144b8e370f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:55:10 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v464: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Feb  2 04:55:10 np0005604790 systemd[1]: session-53.scope: Deactivated successfully.
Feb  2 04:55:10 np0005604790 systemd[1]: session-53.scope: Consumed 3min 18.152s CPU time.
Feb  2 04:55:10 np0005604790 systemd-logind[793]: Session 53 logged out. Waiting for processes to exit.
Feb  2 04:55:10 np0005604790 systemd-logind[793]: Removed session 53.
Feb  2 04:55:10 np0005604790 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb  2 04:55:10 np0005604790 podman[226564]: 2026-02-02 09:55:10.399787229 +0000 UTC m=+0.081392491 container exec 5f2bbf7994e4a92479e9d1b19dfd01c3876abee624a9d2019b778a31c38bb173 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-keepalived-nfs-cephfs-compute-0-pqolko, vcs-type=git, description=keepalived for Ceph, com.redhat.component=keepalived-container, io.openshift.expose-services=, name=keepalived, io.buildah.version=1.28.2, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., release=1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived, version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, architecture=x86_64)
Feb  2 04:55:10 np0005604790 podman[226564]: 2026-02-02 09:55:10.41694526 +0000 UTC m=+0.098550472 container exec_died 5f2bbf7994e4a92479e9d1b19dfd01c3876abee624a9d2019b778a31c38bb173 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-keepalived-nfs-cephfs-compute-0-pqolko, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, vcs-type=git, build-date=2023-02-22T09:23:20, distribution-scope=public, io.openshift.tags=Ceph keepalived, io.buildah.version=1.28.2, release=1793, version=2.2.4, architecture=x86_64, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Feb  2 04:55:10 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:55:10 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc314003780 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:55:10 np0005604790 podman[226631]: 2026-02-02 09:55:10.701252586 +0000 UTC m=+0.078406923 container exec 63a3970896ab11bd3cbece1b971a05452b0bd8a5b643f0eac52d5b3639ab19c4 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 04:55:10 np0005604790 podman[226631]: 2026-02-02 09:55:10.743899947 +0000 UTC m=+0.121054284 container exec_died 63a3970896ab11bd3cbece1b971a05452b0bd8a5b643f0eac52d5b3639ab19c4 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 04:55:10 np0005604790 podman[226704]: 2026-02-02 09:55:10.993814939 +0000 UTC m=+0.070519765 container exec 207575e5b32cec4e058e0ca4f48bad94c6e447fc9aa765a0aa8117f4604125b2 (image=quay.io/ceph/grafana:10.4.0, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Feb  2 04:55:11 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:55:11 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:55:11 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:55:11.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 04:55:11 np0005604790 podman[226704]: 2026-02-02 09:55:11.215860078 +0000 UTC m=+0.292564854 container exec_died 207575e5b32cec4e058e0ca4f48bad94c6e447fc9aa765a0aa8117f4604125b2 (image=quay.io/ceph/grafana:10.4.0, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Feb  2 04:55:11 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:55:11 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc304004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:55:11 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:55:11 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc3280096e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:55:11 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:55:11 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:55:11 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:55:11.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:55:11 np0005604790 podman[226797]: 2026-02-02 09:55:11.554802381 +0000 UTC m=+0.068365599 container exec 214561532da1ee185d9ab0bf03f5b2d46320266c11cddad43265e8842ed9d667 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 04:55:11 np0005604790 podman[226797]: 2026-02-02 09:55:11.610024633 +0000 UTC m=+0.123587881 container exec_died 214561532da1ee185d9ab0bf03f5b2d46320266c11cddad43265e8842ed9d667 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 04:55:11 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 04:55:11 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:55:11 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 04:55:11 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:55:11 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:55:12 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v465: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Feb  2 04:55:12 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 04:55:12 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 04:55:12 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 04:55:12 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb  2 04:55:12 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v466: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 1.2 KiB/s wr, 4 op/s
Feb  2 04:55:12 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 04:55:12 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:55:12 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Feb  2 04:55:12 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:55:12 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 04:55:12 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb  2 04:55:12 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 04:55:12 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb  2 04:55:12 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 04:55:12 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 04:55:12 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:55:12 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc3280096e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:55:12 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:55:12 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:55:12 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb  2 04:55:12 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:55:12 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:55:12 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb  2 04:55:12 np0005604790 ceph-mon[74489]: log_channel(cluster) log [INF] : Health check cleared: CEPHADM_FAILED_DAEMON (was: 1 failed cephadm daemon(s))
Feb  2 04:55:12 np0005604790 ceph-mon[74489]: log_channel(cluster) log [INF] : Cluster is now healthy
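The two [INF] lines above mark the CEPHADM_FAILED_DAEMON health check clearing once the previously failed daemon reports back, returning the cluster to HEALTH_OK. A quick way to confirm the same state from the CLI, assuming the ceph client and an admin keyring are available on this host:

    # Sketch: query the health state the mon just logged above.
    import json
    import subprocess

    out = subprocess.run(["ceph", "health", "detail", "--format", "json"],
                         capture_output=True, text=True, check=True).stdout
    health = json.loads(out)
    print(health["status"])            # expect HEALTH_OK once checks clear
    for name, check in health.get("checks", {}).items():
        print(name, check["severity"])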
Feb  2 04:55:12 np0005604790 podman[227018]: 2026-02-02 09:55:12.782292609 +0000 UTC m=+0.052847711 container create f2683e67d986c7ef57bbf4a70c12db0bd920149b272591a6ff48cda1225136bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_mahavira, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325)
Feb  2 04:55:12 np0005604790 systemd[1]: Started libpod-conmon-f2683e67d986c7ef57bbf4a70c12db0bd920149b272591a6ff48cda1225136bc.scope.
Feb  2 04:55:12 np0005604790 podman[227018]: 2026-02-02 09:55:12.755754011 +0000 UTC m=+0.026309133 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:55:12 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:55:12 np0005604790 podman[227018]: 2026-02-02 09:55:12.877504413 +0000 UTC m=+0.148059515 container init f2683e67d986c7ef57bbf4a70c12db0bd920149b272591a6ff48cda1225136bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_mahavira, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:55:12 np0005604790 podman[227018]: 2026-02-02 09:55:12.887805454 +0000 UTC m=+0.158360526 container start f2683e67d986c7ef57bbf4a70c12db0bd920149b272591a6ff48cda1225136bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_mahavira, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Feb  2 04:55:12 np0005604790 podman[227018]: 2026-02-02 09:55:12.891070979 +0000 UTC m=+0.161626071 container attach f2683e67d986c7ef57bbf4a70c12db0bd920149b272591a6ff48cda1225136bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_mahavira, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Feb  2 04:55:12 np0005604790 busy_mahavira[227034]: 167 167
Feb  2 04:55:12 np0005604790 systemd[1]: libpod-f2683e67d986c7ef57bbf4a70c12db0bd920149b272591a6ff48cda1225136bc.scope: Deactivated successfully.
Feb  2 04:55:12 np0005604790 conmon[227034]: conmon f2683e67d986c7ef57bb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f2683e67d986c7ef57bbf4a70c12db0bd920149b272591a6ff48cda1225136bc.scope/container/memory.events
Feb  2 04:55:12 np0005604790 podman[227039]: 2026-02-02 09:55:12.95269923 +0000 UTC m=+0.039881180 container died f2683e67d986c7ef57bbf4a70c12db0bd920149b272591a6ff48cda1225136bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Feb  2 04:55:12 np0005604790 systemd[1]: var-lib-containers-storage-overlay-33768f443c838d88ed6e13a9781ff2ed3c500f039ca5eb08624618f526598b7d-merged.mount: Deactivated successfully.
Feb  2 04:55:12 np0005604790 podman[227039]: 2026-02-02 09:55:12.993713779 +0000 UTC m=+0.080895769 container remove f2683e67d986c7ef57bbf4a70c12db0bd920149b272591a6ff48cda1225136bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_mahavira, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Feb  2 04:55:12 np0005604790 systemd[1]: libpod-conmon-f2683e67d986c7ef57bbf4a70c12db0bd920149b272591a6ff48cda1225136bc.scope: Deactivated successfully.
Feb  2 04:55:13 np0005604790 podman[227061]: 2026-02-02 09:55:13.180632804 +0000 UTC m=+0.059480145 container create d22be1ee7df81a975c961b08d24b86b18334406eafdbf091a6c313b9ac2ec267 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb  2 04:55:13 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:55:13 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:55:13 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:55:13.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:55:13 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:55:13 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc314003780 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:55:13 np0005604790 systemd[1]: Started libpod-conmon-d22be1ee7df81a975c961b08d24b86b18334406eafdbf091a6c313b9ac2ec267.scope.
Feb  2 04:55:13 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:55:13 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc304004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:55:13 np0005604790 podman[227061]: 2026-02-02 09:55:13.15768583 +0000 UTC m=+0.036533181 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:55:13 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:55:13 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18a38a84a74f31b359a593f92a7652628d10abaccb797509e3edb4453c790507/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 04:55:13 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18a38a84a74f31b359a593f92a7652628d10abaccb797509e3edb4453c790507/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:55:13 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18a38a84a74f31b359a593f92a7652628d10abaccb797509e3edb4453c790507/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:55:13 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18a38a84a74f31b359a593f92a7652628d10abaccb797509e3edb4453c790507/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 04:55:13 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18a38a84a74f31b359a593f92a7652628d10abaccb797509e3edb4453c790507/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 04:55:13 np0005604790 podman[227061]: 2026-02-02 09:55:13.304135351 +0000 UTC m=+0.182982682 container init d22be1ee7df81a975c961b08d24b86b18334406eafdbf091a6c313b9ac2ec267 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Feb  2 04:55:13 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:55:13 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:55:13 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:55:13.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:55:13 np0005604790 podman[227061]: 2026-02-02 09:55:13.314279648 +0000 UTC m=+0.193126999 container start d22be1ee7df81a975c961b08d24b86b18334406eafdbf091a6c313b9ac2ec267 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_mccarthy, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Feb  2 04:55:13 np0005604790 podman[227061]: 2026-02-02 09:55:13.317831582 +0000 UTC m=+0.196678963 container attach d22be1ee7df81a975c961b08d24b86b18334406eafdbf091a6c313b9ac2ec267 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_mccarthy, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb  2 04:55:13 np0005604790 intelligent_mccarthy[227077]: --> passed data devices: 0 physical, 1 LVM
Feb  2 04:55:13 np0005604790 intelligent_mccarthy[227077]: --> All data devices are unavailable
Feb  2 04:55:13 np0005604790 systemd[1]: libpod-d22be1ee7df81a975c961b08d24b86b18334406eafdbf091a6c313b9ac2ec267.scope: Deactivated successfully.
Feb  2 04:55:13 np0005604790 podman[227061]: 2026-02-02 09:55:13.662933086 +0000 UTC m=+0.541780397 container died d22be1ee7df81a975c961b08d24b86b18334406eafdbf091a6c313b9ac2ec267 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 04:55:13 np0005604790 systemd[1]: var-lib-containers-storage-overlay-18a38a84a74f31b359a593f92a7652628d10abaccb797509e3edb4453c790507-merged.mount: Deactivated successfully.
Feb  2 04:55:13 np0005604790 podman[227061]: 2026-02-02 09:55:13.701993923 +0000 UTC m=+0.580841274 container remove d22be1ee7df81a975c961b08d24b86b18334406eafdbf091a6c313b9ac2ec267 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_mccarthy, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb  2 04:55:13 np0005604790 systemd[1]: libpod-conmon-d22be1ee7df81a975c961b08d24b86b18334406eafdbf091a6c313b9ac2ec267.scope: Deactivated successfully.
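The short-lived quay.io/ceph/ceph containers above (busy_mahavira, intelligent_mccarthy) appear to be cephadm-driven ceph-volume probes; the "passed data devices: 0 physical, 1 LVM" / "All data devices are unavailable" output says the one LVM-backed candidate cannot be used for a new OSD. A sketch of pulling the same device inventory directly, assuming cephadm is installed on the host; the JSON fields shown are standard ceph-volume inventory output.

    # Sketch: reproduce the device probe the throwaway ceph containers ran.
    import json
    import subprocess

    out = subprocess.run(
        ["cephadm", "ceph-volume", "--", "inventory", "--format", "json"],
        capture_output=True, text=True, check=True).stdout
    for dev in json.loads(out):
        print(dev["path"],
              "available" if dev["available"]
              else "; ".join(dev["rejected_reasons"]))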
Feb  2 04:55:13 np0005604790 ceph-mon[74489]: Health check cleared: CEPHADM_FAILED_DAEMON (was: 1 failed cephadm daemon(s))
Feb  2 04:55:13 np0005604790 ceph-mon[74489]: Cluster is now healthy
Feb  2 04:55:14 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-nfs-cephfs-compute-0-ooxkuo[97796]: [WARNING] 032/095514 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Feb  2 04:55:14 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v467: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 506 B/s wr, 2 op/s
Feb  2 04:55:14 np0005604790 podman[227197]: 2026-02-02 09:55:14.306735935 +0000 UTC m=+0.058725405 container create 0e090a73cc7d16b4ea9d75369907b255c0c69b9d44a22ac3ffb2f10b994063fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_feistel, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb  2 04:55:14 np0005604790 systemd[1]: Started libpod-conmon-0e090a73cc7d16b4ea9d75369907b255c0c69b9d44a22ac3ffb2f10b994063fd.scope.
Feb  2 04:55:14 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:55:14 np0005604790 podman[227197]: 2026-02-02 09:55:14.283904104 +0000 UTC m=+0.035893614 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:55:14 np0005604790 podman[227197]: 2026-02-02 09:55:14.381386168 +0000 UTC m=+0.133375638 container init 0e090a73cc7d16b4ea9d75369907b255c0c69b9d44a22ac3ffb2f10b994063fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_feistel, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb  2 04:55:14 np0005604790 podman[227197]: 2026-02-02 09:55:14.386157583 +0000 UTC m=+0.138147053 container start 0e090a73cc7d16b4ea9d75369907b255c0c69b9d44a22ac3ffb2f10b994063fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_feistel, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 04:55:14 np0005604790 podman[227197]: 2026-02-02 09:55:14.389512332 +0000 UTC m=+0.141501792 container attach 0e090a73cc7d16b4ea9d75369907b255c0c69b9d44a22ac3ffb2f10b994063fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_feistel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Feb  2 04:55:14 np0005604790 blissful_feistel[227214]: 167 167
Feb  2 04:55:14 np0005604790 systemd[1]: libpod-0e090a73cc7d16b4ea9d75369907b255c0c69b9d44a22ac3ffb2f10b994063fd.scope: Deactivated successfully.
Feb  2 04:55:14 np0005604790 podman[227197]: 2026-02-02 09:55:14.391619217 +0000 UTC m=+0.143608697 container died 0e090a73cc7d16b4ea9d75369907b255c0c69b9d44a22ac3ffb2f10b994063fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_feistel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 04:55:14 np0005604790 systemd[1]: var-lib-containers-storage-overlay-57722bd9f9f43337b07c791169ed18241c1c858fe692ea24f3526db97f35fc7f-merged.mount: Deactivated successfully.
Feb  2 04:55:14 np0005604790 podman[227197]: 2026-02-02 09:55:14.434950956 +0000 UTC m=+0.186940426 container remove 0e090a73cc7d16b4ea9d75369907b255c0c69b9d44a22ac3ffb2f10b994063fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_feistel, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb  2 04:55:14 np0005604790 systemd[1]: libpod-conmon-0e090a73cc7d16b4ea9d75369907b255c0c69b9d44a22ac3ffb2f10b994063fd.scope: Deactivated successfully.
Feb  2 04:55:14 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:55:14 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc3280096e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:55:14 np0005604790 podman[227238]: 2026-02-02 09:55:14.624398448 +0000 UTC m=+0.063603053 container create 5486ff30822249705b01c7c29a58e3dc7652b14d075867f78c24bc0b1732e3d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:55:14 np0005604790 systemd[1]: Started libpod-conmon-5486ff30822249705b01c7c29a58e3dc7652b14d075867f78c24bc0b1732e3d4.scope.
Feb  2 04:55:14 np0005604790 podman[227238]: 2026-02-02 09:55:14.599400221 +0000 UTC m=+0.038604876 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:55:14 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:55:14 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b310b09d0f48ae1269d0fdb29020cf67a27d13e43a70b73168ba0379ad41004c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 04:55:14 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b310b09d0f48ae1269d0fdb29020cf67a27d13e43a70b73168ba0379ad41004c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:55:14 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b310b09d0f48ae1269d0fdb29020cf67a27d13e43a70b73168ba0379ad41004c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:55:14 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b310b09d0f48ae1269d0fdb29020cf67a27d13e43a70b73168ba0379ad41004c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 04:55:14 np0005604790 podman[227238]: 2026-02-02 09:55:14.7210556 +0000 UTC m=+0.160260215 container init 5486ff30822249705b01c7c29a58e3dc7652b14d075867f78c24bc0b1732e3d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 04:55:14 np0005604790 podman[227238]: 2026-02-02 09:55:14.727860269 +0000 UTC m=+0.167064844 container start 5486ff30822249705b01c7c29a58e3dc7652b14d075867f78c24bc0b1732e3d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_jones, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid)
Feb  2 04:55:14 np0005604790 podman[227238]: 2026-02-02 09:55:14.732315886 +0000 UTC m=+0.171520461 container attach 5486ff30822249705b01c7c29a58e3dc7652b14d075867f78c24bc0b1732e3d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_jones, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid)
Feb  2 04:55:14 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:09:55:14] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Feb  2 04:55:14 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:09:55:14] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Feb  2 04:55:15 np0005604790 xenodochial_jones[227255]: {
Feb  2 04:55:15 np0005604790 xenodochial_jones[227255]:    "1": [
Feb  2 04:55:15 np0005604790 xenodochial_jones[227255]:        {
Feb  2 04:55:15 np0005604790 xenodochial_jones[227255]:            "devices": [
Feb  2 04:55:15 np0005604790 xenodochial_jones[227255]:                "/dev/loop3"
Feb  2 04:55:15 np0005604790 xenodochial_jones[227255]:            ],
Feb  2 04:55:15 np0005604790 xenodochial_jones[227255]:            "lv_name": "ceph_lv0",
Feb  2 04:55:15 np0005604790 xenodochial_jones[227255]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 04:55:15 np0005604790 xenodochial_jones[227255]:            "lv_size": "21470642176",
Feb  2 04:55:15 np0005604790 xenodochial_jones[227255]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d241d473-9fcb-5f74-b163-f1ca4454e7f1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fabfc705-a3af-416c-81a4-3fd4d777fb5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 04:55:15 np0005604790 xenodochial_jones[227255]:            "lv_uuid": "lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a",
Feb  2 04:55:15 np0005604790 xenodochial_jones[227255]:            "name": "ceph_lv0",
Feb  2 04:55:15 np0005604790 xenodochial_jones[227255]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 04:55:15 np0005604790 xenodochial_jones[227255]:            "tags": {
Feb  2 04:55:15 np0005604790 xenodochial_jones[227255]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 04:55:15 np0005604790 xenodochial_jones[227255]:                "ceph.block_uuid": "lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a",
Feb  2 04:55:15 np0005604790 xenodochial_jones[227255]:                "ceph.cephx_lockbox_secret": "",
Feb  2 04:55:15 np0005604790 xenodochial_jones[227255]:                "ceph.cluster_fsid": "d241d473-9fcb-5f74-b163-f1ca4454e7f1",
Feb  2 04:55:15 np0005604790 xenodochial_jones[227255]:                "ceph.cluster_name": "ceph",
Feb  2 04:55:15 np0005604790 xenodochial_jones[227255]:                "ceph.crush_device_class": "",
Feb  2 04:55:15 np0005604790 xenodochial_jones[227255]:                "ceph.encrypted": "0",
Feb  2 04:55:15 np0005604790 xenodochial_jones[227255]:                "ceph.osd_fsid": "fabfc705-a3af-416c-81a4-3fd4d777fb5f",
Feb  2 04:55:15 np0005604790 xenodochial_jones[227255]:                "ceph.osd_id": "1",
Feb  2 04:55:15 np0005604790 xenodochial_jones[227255]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 04:55:15 np0005604790 xenodochial_jones[227255]:                "ceph.type": "block",
Feb  2 04:55:15 np0005604790 xenodochial_jones[227255]:                "ceph.vdo": "0",
Feb  2 04:55:15 np0005604790 xenodochial_jones[227255]:                "ceph.with_tpm": "0"
Feb  2 04:55:15 np0005604790 xenodochial_jones[227255]:            },
Feb  2 04:55:15 np0005604790 xenodochial_jones[227255]:            "type": "block",
Feb  2 04:55:15 np0005604790 xenodochial_jones[227255]:            "vg_name": "ceph_vg0"
Feb  2 04:55:15 np0005604790 xenodochial_jones[227255]:        }
Feb  2 04:55:15 np0005604790 xenodochial_jones[227255]:    ]
Feb  2 04:55:15 np0005604790 xenodochial_jones[227255]: }
Feb  2 04:55:15 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:55:15 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:55:15 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:55:15.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:55:15 np0005604790 systemd[1]: libpod-5486ff30822249705b01c7c29a58e3dc7652b14d075867f78c24bc0b1732e3d4.scope: Deactivated successfully.
Feb  2 04:55:15 np0005604790 podman[227238]: 2026-02-02 09:55:15.227849047 +0000 UTC m=+0.667053652 container died 5486ff30822249705b01c7c29a58e3dc7652b14d075867f78c24bc0b1732e3d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_jones, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb  2 04:55:15 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:55:15 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc2f4002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:55:15 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:55:15 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc314003780 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:55:15 np0005604790 systemd[1]: var-lib-containers-storage-overlay-b310b09d0f48ae1269d0fdb29020cf67a27d13e43a70b73168ba0379ad41004c-merged.mount: Deactivated successfully.
Feb  2 04:55:15 np0005604790 podman[227238]: 2026-02-02 09:55:15.283825079 +0000 UTC m=+0.723029674 container remove 5486ff30822249705b01c7c29a58e3dc7652b14d075867f78c24bc0b1732e3d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_jones, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb  2 04:55:15 np0005604790 systemd[1]: libpod-conmon-5486ff30822249705b01c7c29a58e3dc7652b14d075867f78c24bc0b1732e3d4.scope: Deactivated successfully.
Feb  2 04:55:15 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:55:15 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:55:15 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:55:15.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 04:55:15 np0005604790 systemd-logind[793]: New session 54 of user zuul.
Feb  2 04:55:15 np0005604790 systemd[1]: Started Session 54 of User zuul.
Feb  2 04:55:15 np0005604790 podman[227426]: 2026-02-02 09:55:15.943595578 +0000 UTC m=+0.043485434 container create 39d24f9ba0d890330ef828ba19702c21d5bd8455e49f36e497b878186084d4fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_greider, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 04:55:15 np0005604790 systemd[1]: Started libpod-conmon-39d24f9ba0d890330ef828ba19702c21d5bd8455e49f36e497b878186084d4fd.scope.
Feb  2 04:55:16 np0005604790 podman[227426]: 2026-02-02 09:55:15.922521914 +0000 UTC m=+0.022411820 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:55:16 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:55:16 np0005604790 podman[227426]: 2026-02-02 09:55:16.037054596 +0000 UTC m=+0.136944522 container init 39d24f9ba0d890330ef828ba19702c21d5bd8455e49f36e497b878186084d4fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_greider, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 04:55:16 np0005604790 podman[227426]: 2026-02-02 09:55:16.044274415 +0000 UTC m=+0.144164311 container start 39d24f9ba0d890330ef828ba19702c21d5bd8455e49f36e497b878186084d4fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_greider, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 04:55:16 np0005604790 podman[227426]: 2026-02-02 09:55:16.048030784 +0000 UTC m=+0.147920670 container attach 39d24f9ba0d890330ef828ba19702c21d5bd8455e49f36e497b878186084d4fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_greider, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Feb  2 04:55:16 np0005604790 wonderful_greider[227444]: 167 167
Feb  2 04:55:16 np0005604790 systemd[1]: libpod-39d24f9ba0d890330ef828ba19702c21d5bd8455e49f36e497b878186084d4fd.scope: Deactivated successfully.
Feb  2 04:55:16 np0005604790 podman[227426]: 2026-02-02 09:55:16.04975266 +0000 UTC m=+0.149642556 container died 39d24f9ba0d890330ef828ba19702c21d5bd8455e49f36e497b878186084d4fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_greider, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb  2 04:55:16 np0005604790 systemd[1]: var-lib-containers-storage-overlay-0363beeae0037b176879243f6e20a50efe522ee73aa7bc079f7a1e08a4abcb0e-merged.mount: Deactivated successfully.
Feb  2 04:55:16 np0005604790 podman[227426]: 2026-02-02 09:55:16.092992667 +0000 UTC m=+0.192882523 container remove 39d24f9ba0d890330ef828ba19702c21d5bd8455e49f36e497b878186084d4fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_greider, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Feb  2 04:55:16 np0005604790 systemd[1]: libpod-conmon-39d24f9ba0d890330ef828ba19702c21d5bd8455e49f36e497b878186084d4fd.scope: Deactivated successfully.
Feb  2 04:55:16 np0005604790 podman[227469]: 2026-02-02 09:55:16.258789446 +0000 UTC m=+0.043186106 container create cf86f373aa2b0157bfe6d5411df616e5210446da4b198837ed4e8f58039b6505 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Feb  2 04:55:16 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v468: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 506 B/s wr, 2 op/s
Feb  2 04:55:16 np0005604790 systemd[1]: Started libpod-conmon-cf86f373aa2b0157bfe6d5411df616e5210446da4b198837ed4e8f58039b6505.scope.
Feb  2 04:55:16 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:55:16 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac0cf9cbed67d29c923e6de97fc330e5fd74f33af4247671c71e3d3588e4ca90/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 04:55:16 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac0cf9cbed67d29c923e6de97fc330e5fd74f33af4247671c71e3d3588e4ca90/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:55:16 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac0cf9cbed67d29c923e6de97fc330e5fd74f33af4247671c71e3d3588e4ca90/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:55:16 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac0cf9cbed67d29c923e6de97fc330e5fd74f33af4247671c71e3d3588e4ca90/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 04:55:16 np0005604790 podman[227469]: 2026-02-02 09:55:16.241150913 +0000 UTC m=+0.025547603 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:55:16 np0005604790 podman[227469]: 2026-02-02 09:55:16.34866943 +0000 UTC m=+0.133066160 container init cf86f373aa2b0157bfe6d5411df616e5210446da4b198837ed4e8f58039b6505 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_shannon, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 04:55:16 np0005604790 podman[227469]: 2026-02-02 09:55:16.363017217 +0000 UTC m=+0.147413917 container start cf86f373aa2b0157bfe6d5411df616e5210446da4b198837ed4e8f58039b6505 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_shannon, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 04:55:16 np0005604790 podman[227469]: 2026-02-02 09:55:16.367246338 +0000 UTC m=+0.151643088 container attach cf86f373aa2b0157bfe6d5411df616e5210446da4b198837ed4e8f58039b6505 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_shannon, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:55:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:55:16 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc304004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:55:16 np0005604790 podman[227585]: 2026-02-02 09:55:16.552668544 +0000 UTC m=+0.062137655 container health_status 29bea7eb4976451e3dfdb1e9a3aba30b10e64cbce131bf8d09548b4a224f8ee8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Feb  2 04:55:16 np0005604790 python3.9[227624]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 04:55:16 np0005604790 lvm[227705]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 04:55:16 np0005604790 lvm[227705]: VG ceph_vg0 finished
Feb  2 04:55:16 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:55:17 np0005604790 upbeat_shannon[227532]: {}
Feb  2 04:55:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:55:17.040Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb  2 04:55:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:55:17.041Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb  2 04:55:17 np0005604790 podman[227469]: 2026-02-02 09:55:17.059846951 +0000 UTC m=+0.844243601 container died cf86f373aa2b0157bfe6d5411df616e5210446da4b198837ed4e8f58039b6505 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Feb  2 04:55:17 np0005604790 systemd[1]: libpod-cf86f373aa2b0157bfe6d5411df616e5210446da4b198837ed4e8f58039b6505.scope: Deactivated successfully.
Feb  2 04:55:17 np0005604790 systemd[1]: var-lib-containers-storage-overlay-ac0cf9cbed67d29c923e6de97fc330e5fd74f33af4247671c71e3d3588e4ca90-merged.mount: Deactivated successfully.
Feb  2 04:55:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] Optimize plan auto_2026-02-02_09:55:17
Feb  2 04:55:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 04:55:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] do_upmap
Feb  2 04:55:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] pools ['vms', 'default.rgw.log', '.nfs', '.mgr', 'default.rgw.meta', 'backups', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.meta', 'images', 'cephfs.cephfs.data', 'volumes']
Feb  2 04:55:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 04:55:17 np0005604790 podman[227469]: 2026-02-02 09:55:17.111106249 +0000 UTC m=+0.895502939 container remove cf86f373aa2b0157bfe6d5411df616e5210446da4b198837ed4e8f58039b6505 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb  2 04:55:17 np0005604790 systemd[1]: libpod-conmon-cf86f373aa2b0157bfe6d5411df616e5210446da4b198837ed4e8f58039b6505.scope: Deactivated successfully.
Feb  2 04:55:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Feb  2 04:55:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 04:55:17 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:55:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 04:55:17 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 04:55:17 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:55:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 04:55:17 np0005604790 ceph-mon[74489]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Feb  2 04:55:17 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:55:17.194339) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb  2 04:55:17 np0005604790 ceph-mon[74489]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Feb  2 04:55:17 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770026117194372, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 4208, "num_deletes": 502, "total_data_size": 8567510, "memory_usage": 8677872, "flush_reason": "Manual Compaction"}
Feb  2 04:55:17 np0005604790 ceph-mon[74489]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Feb  2 04:55:17 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:55:17 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:55:17 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:55:17.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 04:55:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:55:17 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc3280096e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:55:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:55:17 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc2f4002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:55:17 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770026117261604, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 8261921, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13180, "largest_seqno": 17387, "table_properties": {"data_size": 8244315, "index_size": 11860, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 4677, "raw_key_size": 36899, "raw_average_key_size": 19, "raw_value_size": 8207635, "raw_average_value_size": 4393, "num_data_blocks": 518, "num_entries": 1868, "num_filter_entries": 1868, "num_deletions": 502, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770025678, "oldest_key_time": 1770025678, "file_creation_time": 1770026117, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "07840aea-639a-4cd3-a598-1774a042b57b", "db_session_id": "W2PO4QU95YGVZQBG6TZ2", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Feb  2 04:55:17 np0005604790 ceph-mon[74489]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 67399 microseconds, and 11056 cpu microseconds.
Feb  2 04:55:17 np0005604790 ceph-mon[74489]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 04:55:17 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:55:17.261733) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 8261921 bytes OK
Feb  2 04:55:17 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:55:17.261799) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Feb  2 04:55:17 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:55:17.264294) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Feb  2 04:55:17 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:55:17.264318) EVENT_LOG_v1 {"time_micros": 1770026117264311, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb  2 04:55:17 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:55:17.264344) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb  2 04:55:17 np0005604790 ceph-mon[74489]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 8550656, prev total WAL file size 8623618, number of live WAL files 2.
Feb  2 04:55:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:55:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:55:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:55:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:55:17 np0005604790 ceph-mon[74489]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 04:55:17 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:55:17 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:55:17.266178) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Feb  2 04:55:17 np0005604790 ceph-mon[74489]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb  2 04:55:17 np0005604790 ceph-mon[74489]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(8068KB)], [32(11MB)]
Feb  2 04:55:17 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770026117266243, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 20463814, "oldest_snapshot_seqno": -1}
Feb  2 04:55:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:55:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:55:17 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:55:17 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:55:17 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:55:17.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:55:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 04:55:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 04:55:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 04:55:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 04:55:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 04:55:17 np0005604790 ceph-mon[74489]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 5080 keys, 15279348 bytes, temperature: kUnknown
Feb  2 04:55:17 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770026117391998, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 15279348, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15240797, "index_size": 24745, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12741, "raw_key_size": 127095, "raw_average_key_size": 25, "raw_value_size": 15143997, "raw_average_value_size": 2981, "num_data_blocks": 1039, "num_entries": 5080, "num_filter_entries": 5080, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770025063, "oldest_key_time": 0, "file_creation_time": 1770026117, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "07840aea-639a-4cd3-a598-1774a042b57b", "db_session_id": "W2PO4QU95YGVZQBG6TZ2", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Feb  2 04:55:17 np0005604790 ceph-mon[74489]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 04:55:17 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:55:17.392835) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 15279348 bytes
Feb  2 04:55:17 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:55:17.394624) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 161.9 rd, 120.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(7.9, 11.6 +0.0 blob) out(14.6 +0.0 blob), read-write-amplify(4.3) write-amplify(1.8) OK, records in: 6103, records dropped: 1023 output_compression: NoCompression
Feb  2 04:55:17 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:55:17.394654) EVENT_LOG_v1 {"time_micros": 1770026117394641, "job": 14, "event": "compaction_finished", "compaction_time_micros": 126372, "compaction_time_cpu_micros": 41734, "output_level": 6, "num_output_files": 1, "total_output_size": 15279348, "num_input_records": 6103, "num_output_records": 5080, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb  2 04:55:17 np0005604790 ceph-mon[74489]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 04:55:17 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770026117396279, "job": 14, "event": "table_file_deletion", "file_number": 34}
Feb  2 04:55:17 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:55:17 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:55:17 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:55:17 np0005604790 ceph-mon[74489]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 04:55:17 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770026117398457, "job": 14, "event": "table_file_deletion", "file_number": 32}
Feb  2 04:55:17 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:55:17.266094) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 04:55:17 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:55:17.398542) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 04:55:17 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:55:17.398548) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 04:55:17 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:55:17.398550) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 04:55:17 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:55:17.398551) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 04:55:17 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:55:17.398553) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 04:55:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 04:55:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:55:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 04:55:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:55:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 04:55:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:55:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 04:55:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:55:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 04:55:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:55:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 04:55:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:55:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Feb  2 04:55:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:55:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 04:55:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:55:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 04:55:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:55:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Feb  2 04:55:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:55:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 04:55:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:55:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 04:55:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:55:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb  2 04:55:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 04:55:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 04:55:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 04:55:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 04:55:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 04:55:18 np0005604790 python3.9[227898]: ansible-ansible.builtin.service_facts Invoked
Feb  2 04:55:18 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v469: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 506 B/s wr, 2 op/s
Feb  2 04:55:18 np0005604790 network[227915]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb  2 04:55:18 np0005604790 network[227916]: 'network-scripts' will be removed from distribution in near future.
Feb  2 04:55:18 np0005604790 network[227917]: It is advised to switch to 'NetworkManager' instead for network management.
Feb  2 04:55:18 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:55:18 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc314003780 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:55:18 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:55:18.942Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:55:19 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:55:19 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:55:19 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:55:19.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:55:19 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:55:19 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc304004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:55:19 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:55:19 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc32800a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:55:19 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:55:19 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:55:19 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:55:19.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:55:20 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v470: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 304 B/s rd, 0 B/s wr, 0 op/s
Feb  2 04:55:20 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:55:20 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc2f4003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:55:21 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:55:21 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:55:21 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:55:21.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:55:21 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:55:21 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc314003780 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:55:21 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:55:21 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc304004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:55:21 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:55:21 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:55:21 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:55:21.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:55:22 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:55:22 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v471: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 304 B/s rd, 0 B/s wr, 0 op/s
Feb  2 04:55:22 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:55:22 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc32800a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:55:23 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:55:23 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:55:23 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:55:23.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:55:23 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:55:23 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc2f4003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:55:23 np0005604790 python3.9[228193]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb  2 04:55:23 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:55:23 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc314003780 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:55:23 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:55:23 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:55:23 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:55:23.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:55:24 np0005604790 python3.9[228279]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  2 04:55:24 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v472: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Feb  2 04:55:24 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:55:24 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc304004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:55:24 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:09:55:24] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Feb  2 04:55:24 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:09:55:24] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Feb  2 04:55:25 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:55:25 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:55:25 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:55:25.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:55:25 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:55:25 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc32800a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:55:25 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:55:25 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc2f4003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:55:25 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:55:25 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:55:25 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:55:25.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:55:26 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v473: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:55:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:55:26 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc314003780 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:55:27 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:55:27 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:55:27.042Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb  2 04:55:27 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:55:27.042Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb  2 04:55:27 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:55:27.042Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb  2 04:55:27 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:55:27 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.004000105s ======
Feb  2 04:55:27 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:55:27.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.004000105s
Feb  2 04:55:27 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:55:27 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc304004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:55:27 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:55:27 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc2fc001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:55:27 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:55:27 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:55:27 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:55:27.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:55:28 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v474: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Feb  2 04:55:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:55:28 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc2f4003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:55:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:55:28.943Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:55:28 np0005604790 ceph-mgr[74785]: [dashboard INFO request] [192.168.122.100:60518] [POST] [200] [0.004s] [4.0B] [d449fe17-b7c6-46e4-80b6-55e62dbbfafe] /api/prometheus_receiver
Feb  2 04:55:29 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:55:29 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:55:29 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:55:29.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:55:29 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:55:29 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc314003780 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:55:29 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:55:29 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc304004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:55:29 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:55:29 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:55:29 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:55:29.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:55:30 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v475: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:55:30 np0005604790 python3.9[228439]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 04:55:30 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:55:30 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc2fc001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:55:31 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:55:31 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:55:31 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:55:31.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:55:31 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:55:31 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc2f4003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:55:31 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:55:31 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc314003780 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:55:31 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:55:31 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:55:31 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:55:31.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:55:31 np0005604790 python3.9[228591]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:55:32 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:55:32 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 04:55:32 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 04:55:32 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v476: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:55:32 np0005604790 python3.9[228746]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 04:55:32 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:55:32 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc304004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:55:33 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:55:33 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:55:33 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:55:33.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:55:33 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:55:33 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc2fc001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:55:33 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:55:33 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc308002400 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:55:33 np0005604790 python3.9[228899]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:55:33 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:55:33 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:55:33 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:55:33.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:55:34 np0005604790 python3.9[229054]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:55:34 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v477: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Feb  2 04:55:34 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:55:34 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc314003780 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:55:34 np0005604790 python3.9[229177]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770026133.5299022-240-65751645485770/.source.iscsi _original_basename=.4672sd75 follow=False checksum=f48f8103ff14d4aa55139d25a7698a45460b803d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:55:34 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:09:55:34] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Feb  2 04:55:34 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:09:55:34] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Feb  2 04:55:35 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:55:35 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:55:35 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:55:35.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 04:55:35 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:55:35 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc304004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:55:35 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:55:35 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc2fc001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:55:35 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:55:35 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:55:35 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:55:35.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:55:35 np0005604790 python3.9[229329]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:55:36 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v478: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:55:36 np0005604790 python3.9[229483]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:55:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:55:36 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc308002ec0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:55:37 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:55:37 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:55:37.043Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:55:37 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:55:37 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:55:37 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:55:37.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:55:37 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:55:37 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc314003780 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:55:37 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:55:37 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc304004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:55:37 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:55:37 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:55:37 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:55:37.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:55:37 np0005604790 python3.9[229660]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 04:55:37 np0005604790 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Feb  2 04:55:38 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v479: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Feb  2 04:55:38 np0005604790 podman[229750]: 2026-02-02 09:55:38.40270544 +0000 UTC m=+0.114714428 container health_status e39122d9482da8df802204ab6c35fb7c982874580968140e6f06bdfc8eefae36 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Feb  2 04:55:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:55:38 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc304004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:55:38 np0005604790 python3.9[229844]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 04:55:38 np0005604790 systemd[1]: Reloading.
Feb  2 04:55:38 np0005604790 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 04:55:38 np0005604790 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 04:55:39 np0005604790 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Feb  2 04:55:39 np0005604790 systemd[1]: Starting Open-iSCSI...
Feb  2 04:55:39 np0005604790 kernel: Loading iSCSI transport class v2.0-870.
Feb  2 04:55:39 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:55:39 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:55:39 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:55:39.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:55:39 np0005604790 systemd[1]: Started Open-iSCSI.
Feb  2 04:55:39 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:55:39 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc308002ec0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:55:39 np0005604790 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Feb  2 04:55:39 np0005604790 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Feb  2 04:55:39 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:55:39 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc314003780 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:55:39 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:55:39 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:55:39 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:55:39.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:55:40 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v480: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:55:40 np0005604790 python3.9[230045]: ansible-ansible.builtin.service_facts Invoked
Feb  2 04:55:40 np0005604790 network[230062]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb  2 04:55:40 np0005604790 network[230063]: 'network-scripts' will be removed from distribution in near future.
Feb  2 04:55:40 np0005604790 network[230064]: It is advised to switch to 'NetworkManager' instead for network management.
Feb  2 04:55:40 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:55:40 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc304004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:55:41 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:55:41 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:55:41 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:55:41.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 04:55:41 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:55:41 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc304004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:55:41 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:55:41 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc314003780 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:55:41 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:55:41 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:55:41 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:55:41.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 04:55:42 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:55:42 np0005604790 ceph-mon[74489]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Feb  2 04:55:42 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:55:42.018245) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb  2 04:55:42 np0005604790 ceph-mon[74489]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Feb  2 04:55:42 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770026142018364, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 453, "num_deletes": 250, "total_data_size": 502466, "memory_usage": 511760, "flush_reason": "Manual Compaction"}
Feb  2 04:55:42 np0005604790 ceph-mon[74489]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Feb  2 04:55:42 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770026142023163, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 400569, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17388, "largest_seqno": 17840, "table_properties": {"data_size": 398101, "index_size": 568, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 6412, "raw_average_key_size": 19, "raw_value_size": 393094, "raw_average_value_size": 1202, "num_data_blocks": 26, "num_entries": 327, "num_filter_entries": 327, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770026117, "oldest_key_time": 1770026117, "file_creation_time": 1770026142, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "07840aea-639a-4cd3-a598-1774a042b57b", "db_session_id": "W2PO4QU95YGVZQBG6TZ2", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Feb  2 04:55:42 np0005604790 ceph-mon[74489]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 4962 microseconds, and 2322 cpu microseconds.
Feb  2 04:55:42 np0005604790 ceph-mon[74489]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 04:55:42 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:55:42.023225) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 400569 bytes OK
Feb  2 04:55:42 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:55:42.023252) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Feb  2 04:55:42 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:55:42.024945) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Feb  2 04:55:42 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:55:42.024965) EVENT_LOG_v1 {"time_micros": 1770026142024958, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb  2 04:55:42 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:55:42.024995) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb  2 04:55:42 np0005604790 ceph-mon[74489]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 499777, prev total WAL file size 499777, number of live WAL files 2.
Feb  2 04:55:42 np0005604790 ceph-mon[74489]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 04:55:42 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:55:42.025553) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323530' seq:72057594037927935, type:22 .. '6D67727374617400353031' seq:0, type:0; will stop at (end)
Feb  2 04:55:42 np0005604790 ceph-mon[74489]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb  2 04:55:42 np0005604790 ceph-mon[74489]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(391KB)], [35(14MB)]
Feb  2 04:55:42 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770026142025614, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 15679917, "oldest_snapshot_seqno": -1}
Feb  2 04:55:42 np0005604790 ceph-mon[74489]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 4903 keys, 11699063 bytes, temperature: kUnknown
Feb  2 04:55:42 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770026142119765, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 11699063, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11665994, "index_size": 19711, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12293, "raw_key_size": 123827, "raw_average_key_size": 25, "raw_value_size": 11576612, "raw_average_value_size": 2361, "num_data_blocks": 819, "num_entries": 4903, "num_filter_entries": 4903, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770025063, "oldest_key_time": 0, "file_creation_time": 1770026142, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "07840aea-639a-4cd3-a598-1774a042b57b", "db_session_id": "W2PO4QU95YGVZQBG6TZ2", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Feb  2 04:55:42 np0005604790 ceph-mon[74489]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 04:55:42 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:55:42.120055) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 11699063 bytes
Feb  2 04:55:42 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:55:42.121222) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 166.4 rd, 124.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 14.6 +0.0 blob) out(11.2 +0.0 blob), read-write-amplify(68.4) write-amplify(29.2) OK, records in: 5407, records dropped: 504 output_compression: NoCompression
Feb  2 04:55:42 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:55:42.121244) EVENT_LOG_v1 {"time_micros": 1770026142121234, "job": 16, "event": "compaction_finished", "compaction_time_micros": 94239, "compaction_time_cpu_micros": 21673, "output_level": 6, "num_output_files": 1, "total_output_size": 11699063, "num_input_records": 5407, "num_output_records": 4903, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb  2 04:55:42 np0005604790 ceph-mon[74489]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 04:55:42 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770026142121411, "job": 16, "event": "table_file_deletion", "file_number": 37}
Feb  2 04:55:42 np0005604790 ceph-mon[74489]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 04:55:42 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770026142122733, "job": 16, "event": "table_file_deletion", "file_number": 35}
Feb  2 04:55:42 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:55:42.025434) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 04:55:42 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:55:42.122856) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 04:55:42 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:55:42.122864) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 04:55:42 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:55:42.122866) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 04:55:42 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:55:42.122868) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 04:55:42 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:55:42.122871) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 04:55:42 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v481: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:55:42 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:55:42 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc314003780 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:55:43 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:55:43 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:55:43 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:55:43.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:55:43 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:55:43 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc304004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:55:43 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:55:43 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc304004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:55:43 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:55:43 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:55:43 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:55:43.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:55:44 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v482: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Feb  2 04:55:44 np0005604790 python3.9[230340]: ansible-ansible.legacy.dnf Invoked with name=['device-mapper-multipath'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  2 04:55:44 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:55:44 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc314003780 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:55:44 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:09:55:44] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Feb  2 04:55:44 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:09:55:44] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Feb  2 04:55:45 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:55:45 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:55:45 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:55:45.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:55:45 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:55:45 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc314003780 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:55:45 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:55:45 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc314003780 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:55:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:55:45.366 165364 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 04:55:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:55:45.367 165364 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 04:55:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:55:45.367 165364 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 04:55:45 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:55:45 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:55:45 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:55:45.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:55:46 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v483: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:55:46 np0005604790 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb  2 04:55:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:55:46 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc2fc003400 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:55:46 np0005604790 systemd[1]: Starting man-db-cache-update.service...
Feb  2 04:55:46 np0005604790 systemd[1]: Reloading.
Feb  2 04:55:46 np0005604790 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 04:55:46 np0005604790 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 04:55:46 np0005604790 podman[230360]: 2026-02-02 09:55:46.706738541 +0000 UTC m=+0.105885426 container health_status 29bea7eb4976451e3dfdb1e9a3aba30b10e64cbce131bf8d09548b4a224f8ee8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb  2 04:55:46 np0005604790 systemd[1]: Queuing reload/restart jobs for marked units…
Feb  2 04:55:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:55:47 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:55:47.044Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:55:47 np0005604790 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb  2 04:55:47 np0005604790 systemd[1]: Finished man-db-cache-update.service.
Feb  2 04:55:47 np0005604790 systemd[1]: run-r3a963b6a6f704bc98959298f2a82ac85.service: Deactivated successfully.
Feb  2 04:55:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 04:55:47 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 04:55:47 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:55:47 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:55:47 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:55:47.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:55:47 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:55:47 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc314003780 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:55:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:55:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:55:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:55:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:55:47 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:55:47 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc304004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:55:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:55:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:55:47 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:55:47 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:55:47 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:55:47.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:55:48 np0005604790 python3.9[230680]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Feb  2 04:55:48 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v484: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Feb  2 04:55:48 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:55:48 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc3080018c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:55:48 np0005604790 python3.9[230832]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Feb  2 04:55:49 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:55:49 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:55:49 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:55:49.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 04:55:49 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:55:49 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc2fc003400 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:55:49 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:55:49 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc314003780 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:55:49 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:55:49 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:55:49 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:55:49.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:55:49 np0005604790 python3.9[230988]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:55:50 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v485: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:55:50 np0005604790 python3.9[231113]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770026149.2193403-504-112728436407410/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:55:50 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:55:50 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc304004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:55:51 np0005604790 python3.9[231265]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:55:51 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:55:51 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:55:51 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:55:51.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 04:55:51 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:55:51 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc308001a60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:55:51 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:55:51 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc2fc003400 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:55:51 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:55:51 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:55:51 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:55:51.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:55:52 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:55:52 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v486: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:55:52 np0005604790 python3.9[231419]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb  2 04:55:52 np0005604790 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb  2 04:55:52 np0005604790 systemd[1]: Stopped Load Kernel Modules.
Feb  2 04:55:52 np0005604790 systemd[1]: Stopping Load Kernel Modules...
Feb  2 04:55:52 np0005604790 systemd[1]: Starting Load Kernel Modules...
Feb  2 04:55:52 np0005604790 systemd[1]: Finished Load Kernel Modules.
Feb  2 04:55:52 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:55:52 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc314003780 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:55:53 np0005604790 python3.9[231575]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/multipath _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:55:53 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:55:53 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:55:53 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:55:53.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 04:55:53 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:55:53 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc304004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:55:53 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:55:53 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc308001a60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:55:53 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:55:53 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:55:53 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:55:53.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:55:54 np0005604790 python3.9[231730]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 04:55:54 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v487: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Feb  2 04:55:54 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:55:54 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc2fc003400 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:55:54 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:09:55:54] "GET /metrics HTTP/1.1" 200 48336 "" "Prometheus/2.51.0"
Feb  2 04:55:54 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:09:55:54] "GET /metrics HTTP/1.1" 200 48336 "" "Prometheus/2.51.0"
Feb  2 04:55:54 np0005604790 python3.9[231882]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:55:55 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:55:55 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.002000053s ======
Feb  2 04:55:55 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:55:55.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Feb  2 04:55:55 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:55:55 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc3140037a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:55:55 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:55:55 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc304004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:55:55 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:55:55 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:55:55 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:55:55.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:55:55 np0005604790 python3.9[232005]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770026154.4294302-657-112149453809243/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:55:56 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v488: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:55:56 np0005604790 python3.9[232159]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:55:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[216303]: 02/02/2026 09:55:56 : epoch 69807448 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc308001a60 fd 48 proxy ignored for local
Feb  2 04:55:56 np0005604790 kernel: ganesha.nfsd[228771]: segfault at 50 ip 00007fc3aae2f32e sp 00007fc3317f9210 error 4 in libntirpc.so.5.8[7fc3aae14000+2c000] likely on CPU 7 (core 0, socket 7)
Feb  2 04:55:56 np0005604790 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Feb  2 04:55:56 np0005604790 systemd[1]: Started Process Core Dump (PID 232233/UID 0).
Feb  2 04:55:57 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:55:57 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:55:57.045Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:55:57 np0005604790 python3.9[232339]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:55:57 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:55:57 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:55:57 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:55:57.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 04:55:57 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:55:57 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:55:57 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:55:57.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:55:57 np0005604790 systemd-coredump[232235]: Process 216307 (ganesha.nfsd) of user 0 dumped core.
Feb  2 04:55:57 np0005604790 systemd-coredump[232235]: Stack trace of thread 58:
Feb  2 04:55:57 np0005604790 systemd-coredump[232235]: #0  0x00007fc3aae2f32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
Feb  2 04:55:57 np0005604790 systemd-coredump[232235]: ELF object binary architecture: AMD x86-64
Feb  2 04:55:57 np0005604790 systemd[1]: systemd-coredump@8-232233-0.service: Deactivated successfully.
Feb  2 04:55:57 np0005604790 podman[232429]: 2026-02-02 09:55:57.729054402 +0000 UTC m=+0.050504649 container died a24acc6342d6bc9693b214763120368cdc4d2e420b5ed711c3c58144b8e370f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Feb  2 04:55:57 np0005604790 systemd[1]: var-lib-containers-storage-overlay-fb047f23eb9914ed5ff81475fe4db45e835fe5544f8f613533e418c3cccacec4-merged.mount: Deactivated successfully.
Feb  2 04:55:57 np0005604790 podman[232429]: 2026-02-02 09:55:57.828244851 +0000 UTC m=+0.149695078 container remove a24acc6342d6bc9693b214763120368cdc4d2e420b5ed711c3c58144b8e370f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Feb  2 04:55:57 np0005604790 systemd[1]: ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1@nfs.cephfs.2.0.compute-0.fdwwab.service: Main process exited, code=exited, status=139/n/a
Feb  2 04:55:58 np0005604790 systemd[1]: ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1@nfs.cephfs.2.0.compute-0.fdwwab.service: Failed with result 'exit-code'.
Feb  2 04:55:58 np0005604790 systemd[1]: ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1@nfs.cephfs.2.0.compute-0.fdwwab.service: Consumed 1.349s CPU time.
Feb  2 04:55:58 np0005604790 python3.9[232526]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:55:58 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v489: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Feb  2 04:55:58 np0005604790 python3.9[232693]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:55:59 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:55:59 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:55:59 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:55:59.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:55:59 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:55:59 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:55:59 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:55:59.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:55:59 np0005604790 python3.9[232845]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:56:00 np0005604790 python3.9[232999]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:56:00 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v490: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb  2 04:56:00 np0005604790 python3.9[233151]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:56:01 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:56:01 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:56:01 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:56:01.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 04:56:01 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:56:01 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:56:01 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:56:01.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:56:01 np0005604790 python3.9[233303]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:56:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:56:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 04:56:02 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 04:56:02 np0005604790 python3.9[233459]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 04:56:02 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v491: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb  2 04:56:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-nfs-cephfs-compute-0-ooxkuo[97796]: [WARNING] 032/095602 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Feb  2 04:56:03 np0005604790 python3.9[233613]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/true _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:56:03 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:56:03 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:56:03 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:56:03.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 04:56:03 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:56:03 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:56:03 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:56:03.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 04:56:04 np0005604790 python3.9[233768]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 04:56:04 np0005604790 systemd[1]: Listening on multipathd control socket.
Feb  2 04:56:04 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v492: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:56:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:09:56:04] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Feb  2 04:56:04 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:09:56:04] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Feb  2 04:56:05 np0005604790 python3.9[233924]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 04:56:05 np0005604790 systemd[1]: Starting Wait for udev To Complete Device Initialization...
Feb  2 04:56:05 np0005604790 udevadm[233929]: systemd-udev-settle.service is deprecated. Please fix multipathd.service not to pull it in.
Feb  2 04:56:05 np0005604790 systemd[1]: Finished Wait for udev To Complete Device Initialization.
Feb  2 04:56:05 np0005604790 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Feb  2 04:56:05 np0005604790 multipathd[233932]: --------start up--------
Feb  2 04:56:05 np0005604790 multipathd[233932]: read /etc/multipath.conf
Feb  2 04:56:05 np0005604790 multipathd[233932]: path checkers start up
Feb  2 04:56:05 np0005604790 systemd[1]: Started Device-Mapper Multipath Device Controller.
Feb  2 04:56:05 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:56:05 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:56:05 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:56:05.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 04:56:05 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:56:05 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.002000053s ======
Feb  2 04:56:05 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:56:05.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Feb  2 04:56:06 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v493: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb  2 04:56:06 np0005604790 python3.9[234093]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Feb  2 04:56:07 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:56:07 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:56:07.045Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:56:07 np0005604790 python3.9[234245]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Feb  2 04:56:07 np0005604790 kernel: Key type psk registered
Feb  2 04:56:07 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:56:07 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:56:07 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:56:07.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:56:07 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:56:07 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:56:07 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:56:07.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:56:07 np0005604790 python3.9[234407]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:56:08 np0005604790 systemd[1]: ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1@nfs.cephfs.2.0.compute-0.fdwwab.service: Scheduled restart job, restart counter is at 9.
Feb  2 04:56:08 np0005604790 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.fdwwab for d241d473-9fcb-5f74-b163-f1ca4454e7f1.
Feb  2 04:56:08 np0005604790 systemd[1]: ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1@nfs.cephfs.2.0.compute-0.fdwwab.service: Consumed 1.349s CPU time.
Feb  2 04:56:08 np0005604790 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.fdwwab for d241d473-9fcb-5f74-b163-f1ca4454e7f1...
Feb  2 04:56:08 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v494: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:56:08 np0005604790 podman[234582]: 2026-02-02 09:56:08.52461555 +0000 UTC m=+0.068075251 container create 587c17f4f1c5b287f0ff9440e588fe352fd975d6a0d71a6cc630ef29690c2453 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb  2 04:56:08 np0005604790 python3.9[234545]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1770026167.4175339-1047-949635622481/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:56:08 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fca62032290309c8223854e77c9a339381b88ee9901a6a74fc3b5ad9f2bcb2a/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Feb  2 04:56:08 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fca62032290309c8223854e77c9a339381b88ee9901a6a74fc3b5ad9f2bcb2a/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 04:56:08 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fca62032290309c8223854e77c9a339381b88ee9901a6a74fc3b5ad9f2bcb2a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:56:08 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fca62032290309c8223854e77c9a339381b88ee9901a6a74fc3b5ad9f2bcb2a/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.fdwwab-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 04:56:08 np0005604790 podman[234582]: 2026-02-02 09:56:08.49913227 +0000 UTC m=+0.042592011 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:56:08 np0005604790 podman[234582]: 2026-02-02 09:56:08.594888728 +0000 UTC m=+0.138348509 container init 587c17f4f1c5b287f0ff9440e588fe352fd975d6a0d71a6cc630ef29690c2453 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb  2 04:56:08 np0005604790 podman[234582]: 2026-02-02 09:56:08.609289557 +0000 UTC m=+0.152749258 container start 587c17f4f1c5b287f0ff9440e588fe352fd975d6a0d71a6cc630ef29690c2453 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Feb  2 04:56:08 np0005604790 bash[234582]: 587c17f4f1c5b287f0ff9440e588fe352fd975d6a0d71a6cc630ef29690c2453
Feb  2 04:56:08 np0005604790 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.fdwwab for d241d473-9fcb-5f74-b163-f1ca4454e7f1.
Feb  2 04:56:08 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:08 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Feb  2 04:56:08 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:08 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Feb  2 04:56:08 np0005604790 podman[234595]: 2026-02-02 09:56:08.668496444 +0000 UTC m=+0.102788894 container health_status e39122d9482da8df802204ab6c35fb7c982874580968140e6f06bdfc8eefae36 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Feb  2 04:56:08 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:08 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Feb  2 04:56:08 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:08 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Feb  2 04:56:08 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:08 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Feb  2 04:56:08 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:08 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Feb  2 04:56:08 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:08 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Feb  2 04:56:08 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:08 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 04:56:09 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:56:09 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:56:09 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:56:09.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 04:56:09 np0005604790 python3.9[234817]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:56:09 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:56:09 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:56:09 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:56:09.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:56:10 np0005604790 python3.9[234971]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb  2 04:56:10 np0005604790 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb  2 04:56:10 np0005604790 systemd[1]: Stopped Load Kernel Modules.
Feb  2 04:56:10 np0005604790 systemd[1]: Stopping Load Kernel Modules...
Feb  2 04:56:10 np0005604790 systemd[1]: Starting Load Kernel Modules...
Feb  2 04:56:10 np0005604790 systemd[1]: Finished Load Kernel Modules.
Feb  2 04:56:10 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v495: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb  2 04:56:11 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:56:11 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:56:11 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:56:11.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:56:11 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:56:11 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:56:11 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:56:11.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:56:12 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:56:12 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v496: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb  2 04:56:12 np0005604790 python3.9[235129]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb  2 04:56:13 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:56:13 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:56:13 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:56:13.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:56:13 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:56:13 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:56:13 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:56:13.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 04:56:14 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v497: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Feb  2 04:56:14 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:14 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 04:56:14 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:14 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 04:56:14 np0005604790 systemd[1]: Reloading.
Feb  2 04:56:14 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:09:56:14] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Feb  2 04:56:14 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:09:56:14] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Feb  2 04:56:14 np0005604790 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 04:56:14 np0005604790 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 04:56:15 np0005604790 systemd[1]: Reloading.
Feb  2 04:56:15 np0005604790 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 04:56:15 np0005604790 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 04:56:15 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:56:15 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:56:15 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:56:15.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:56:15 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:56:15 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:56:15 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:56:15.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:56:15 np0005604790 systemd-logind[793]: Watching system buttons on /dev/input/event0 (Power Button)
Feb  2 04:56:15 np0005604790 systemd-logind[793]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Feb  2 04:56:15 np0005604790 lvm[235245]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 04:56:15 np0005604790 lvm[235245]: VG ceph_vg0 finished
Feb  2 04:56:15 np0005604790 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb  2 04:56:15 np0005604790 systemd[1]: Starting man-db-cache-update.service...
Feb  2 04:56:15 np0005604790 systemd[1]: Reloading.
Feb  2 04:56:15 np0005604790 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 04:56:15 np0005604790 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 04:56:16 np0005604790 systemd[1]: Queuing reload/restart jobs for marked units…
Feb  2 04:56:16 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v498: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Feb  2 04:56:16 np0005604790 podman[236596]: 2026-02-02 09:56:16.980251579 +0000 UTC m=+0.052647236 container health_status 29bea7eb4976451e3dfdb1e9a3aba30b10e64cbce131bf8d09548b4a224f8ee8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb  2 04:56:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:56:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:56:17.047Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:56:17 np0005604790 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb  2 04:56:17 np0005604790 systemd[1]: Finished man-db-cache-update.service.
Feb  2 04:56:17 np0005604790 systemd[1]: man-db-cache-update.service: Consumed 1.405s CPU time.
Feb  2 04:56:17 np0005604790 systemd[1]: run-rff314c477bd6435aa14a60c8b2883442.service: Deactivated successfully.
Feb  2 04:56:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] Optimize plan auto_2026-02-02_09:56:17
Feb  2 04:56:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 04:56:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] do_upmap
Feb  2 04:56:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.mgr', 'images', 'default.rgw.meta', '.rgw.root', 'volumes', 'default.rgw.log', '.nfs', 'default.rgw.control', 'backups', 'cephfs.cephfs.data', 'vms']
Feb  2 04:56:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 04:56:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 04:56:17 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 04:56:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:56:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:56:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:56:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:56:17 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:56:17 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:56:17 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:56:17.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:56:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:56:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:56:17 np0005604790 python3.9[236640]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb  2 04:56:17 np0005604790 systemd[1]: Stopping Open-iSCSI...
Feb  2 04:56:17 np0005604790 iscsid[229884]: iscsid shutting down.
Feb  2 04:56:17 np0005604790 systemd[1]: iscsid.service: Deactivated successfully.
Feb  2 04:56:17 np0005604790 systemd[1]: Stopped Open-iSCSI.
Feb  2 04:56:17 np0005604790 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Feb  2 04:56:17 np0005604790 systemd[1]: Starting Open-iSCSI...
Feb  2 04:56:17 np0005604790 systemd[1]: Started Open-iSCSI.
Feb  2 04:56:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 04:56:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 04:56:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 04:56:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 04:56:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 04:56:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 04:56:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:56:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 04:56:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:56:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 04:56:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:56:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 04:56:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:56:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 04:56:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:56:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 04:56:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:56:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Feb  2 04:56:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:56:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 04:56:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:56:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 04:56:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:56:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Feb  2 04:56:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:56:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 04:56:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:56:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 04:56:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:56:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb  2 04:56:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 04:56:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 04:56:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 04:56:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 04:56:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 04:56:17 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:56:17 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:56:17 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:56:17.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:56:18 np0005604790 python3.9[236863]: ansible-ansible.builtin.systemd_service Invoked with name=multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb  2 04:56:18 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v499: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1023 B/s wr, 3 op/s
Feb  2 04:56:18 np0005604790 systemd[1]: Stopping Device-Mapper Multipath Device Controller...
Feb  2 04:56:18 np0005604790 multipathd[233932]: exit (signal)
Feb  2 04:56:18 np0005604790 multipathd[233932]: --------shut down-------
Feb  2 04:56:18 np0005604790 systemd[1]: multipathd.service: Deactivated successfully.
Feb  2 04:56:18 np0005604790 systemd[1]: Stopped Device-Mapper Multipath Device Controller.
Feb  2 04:56:18 np0005604790 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Feb  2 04:56:18 np0005604790 multipathd[236888]: --------start up--------
Feb  2 04:56:18 np0005604790 multipathd[236888]: read /etc/multipath.conf
Feb  2 04:56:18 np0005604790 multipathd[236888]: path checkers start up
Feb  2 04:56:18 np0005604790 systemd[1]: Started Device-Mapper Multipath Device Controller.
Feb  2 04:56:18 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 04:56:18 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 04:56:18 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 04:56:18 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb  2 04:56:18 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 04:56:18 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:56:18 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Feb  2 04:56:18 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:56:18 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 04:56:18 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb  2 04:56:18 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 04:56:18 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb  2 04:56:18 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 04:56:18 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 04:56:18 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb  2 04:56:18 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:56:18 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:56:18 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb  2 04:56:19 np0005604790 podman[237134]: 2026-02-02 09:56:19.05782887 +0000 UTC m=+0.066604342 container create 707838be51ad980eb5b76081c5f757d3b6c84142641c20fa7b5ca55fee2b49d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_swartz, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 04:56:19 np0005604790 systemd[1]: Started libpod-conmon-707838be51ad980eb5b76081c5f757d3b6c84142641c20fa7b5ca55fee2b49d5.scope.
Feb  2 04:56:19 np0005604790 podman[237134]: 2026-02-02 09:56:19.026808144 +0000 UTC m=+0.035583666 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:56:19 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:56:19 np0005604790 podman[237134]: 2026-02-02 09:56:19.13883577 +0000 UTC m=+0.147611272 container init 707838be51ad980eb5b76081c5f757d3b6c84142641c20fa7b5ca55fee2b49d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_swartz, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Feb  2 04:56:19 np0005604790 podman[237134]: 2026-02-02 09:56:19.147473597 +0000 UTC m=+0.156249069 container start 707838be51ad980eb5b76081c5f757d3b6c84142641c20fa7b5ca55fee2b49d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True)
Feb  2 04:56:19 np0005604790 podman[237134]: 2026-02-02 09:56:19.151470473 +0000 UTC m=+0.160245965 container attach 707838be51ad980eb5b76081c5f757d3b6c84142641c20fa7b5ca55fee2b49d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_swartz, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:56:19 np0005604790 laughing_swartz[237149]: 167 167
Feb  2 04:56:19 np0005604790 systemd[1]: libpod-707838be51ad980eb5b76081c5f757d3b6c84142641c20fa7b5ca55fee2b49d5.scope: Deactivated successfully.
Feb  2 04:56:19 np0005604790 podman[237134]: 2026-02-02 09:56:19.155416696 +0000 UTC m=+0.164192188 container died 707838be51ad980eb5b76081c5f757d3b6c84142641c20fa7b5ca55fee2b49d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_swartz, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Feb  2 04:56:19 np0005604790 systemd[1]: var-lib-containers-storage-overlay-49b33017801c20e08bfbcd2a7eaea6ffceae5634a0f1d9a20786291a1babeac5-merged.mount: Deactivated successfully.
Feb  2 04:56:19 np0005604790 podman[237134]: 2026-02-02 09:56:19.204684962 +0000 UTC m=+0.213460424 container remove 707838be51ad980eb5b76081c5f757d3b6c84142641c20fa7b5ca55fee2b49d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_swartz, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:56:19 np0005604790 systemd[1]: libpod-conmon-707838be51ad980eb5b76081c5f757d3b6c84142641c20fa7b5ca55fee2b49d5.scope: Deactivated successfully.
Feb  2 04:56:19 np0005604790 python3.9[237132]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb  2 04:56:19 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:56:19 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:56:19 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:56:19.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:56:19 np0005604790 podman[237178]: 2026-02-02 09:56:19.359153514 +0000 UTC m=+0.045247731 container create 15ffe96330d9e89bf0f9e20a9c2d1e635b42d5b420b74a05880f4913645c79be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_villani, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2)
Feb  2 04:56:19 np0005604790 systemd[1]: Started libpod-conmon-15ffe96330d9e89bf0f9e20a9c2d1e635b42d5b420b74a05880f4913645c79be.scope.
Feb  2 04:56:19 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:56:19 np0005604790 podman[237178]: 2026-02-02 09:56:19.335596674 +0000 UTC m=+0.021690901 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:56:19 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80f882603e870d53599a20594aef480358c12592736c58615cfa0ec443dfb9f0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 04:56:19 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80f882603e870d53599a20594aef480358c12592736c58615cfa0ec443dfb9f0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:56:19 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80f882603e870d53599a20594aef480358c12592736c58615cfa0ec443dfb9f0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:56:19 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80f882603e870d53599a20594aef480358c12592736c58615cfa0ec443dfb9f0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 04:56:19 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80f882603e870d53599a20594aef480358c12592736c58615cfa0ec443dfb9f0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 04:56:19 np0005604790 podman[237178]: 2026-02-02 09:56:19.460984952 +0000 UTC m=+0.147079229 container init 15ffe96330d9e89bf0f9e20a9c2d1e635b42d5b420b74a05880f4913645c79be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True)
Feb  2 04:56:19 np0005604790 podman[237178]: 2026-02-02 09:56:19.468711205 +0000 UTC m=+0.154805422 container start 15ffe96330d9e89bf0f9e20a9c2d1e635b42d5b420b74a05880f4913645c79be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_villani, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Feb  2 04:56:19 np0005604790 podman[237178]: 2026-02-02 09:56:19.472683289 +0000 UTC m=+0.158777566 container attach 15ffe96330d9e89bf0f9e20a9c2d1e635b42d5b420b74a05880f4913645c79be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_villani, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 04:56:19 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:56:19 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:56:19 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:56:19.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:56:19 np0005604790 quizzical_villani[237194]: --> passed data devices: 0 physical, 1 LVM
Feb  2 04:56:19 np0005604790 quizzical_villani[237194]: --> All data devices are unavailable
Feb  2 04:56:19 np0005604790 systemd[1]: libpod-15ffe96330d9e89bf0f9e20a9c2d1e635b42d5b420b74a05880f4913645c79be.scope: Deactivated successfully.
Feb  2 04:56:19 np0005604790 podman[237178]: 2026-02-02 09:56:19.851019958 +0000 UTC m=+0.537114185 container died 15ffe96330d9e89bf0f9e20a9c2d1e635b42d5b420b74a05880f4913645c79be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_villani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb  2 04:56:19 np0005604790 systemd[1]: var-lib-containers-storage-overlay-80f882603e870d53599a20594aef480358c12592736c58615cfa0ec443dfb9f0-merged.mount: Deactivated successfully.
Feb  2 04:56:19 np0005604790 podman[237178]: 2026-02-02 09:56:19.902172703 +0000 UTC m=+0.588266930 container remove 15ffe96330d9e89bf0f9e20a9c2d1e635b42d5b420b74a05880f4913645c79be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 04:56:19 np0005604790 systemd[1]: libpod-conmon-15ffe96330d9e89bf0f9e20a9c2d1e635b42d5b420b74a05880f4913645c79be.scope: Deactivated successfully.
Feb  2 04:56:20 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v500: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1023 B/s wr, 3 op/s
Feb  2 04:56:20 np0005604790 python3.9[237426]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:56:20 np0005604790 podman[237476]: 2026-02-02 09:56:20.485445991 +0000 UTC m=+0.047641524 container create 0d19cfedb2abf3b672e9e031a5a6f779305db7b74b2bbfa1fa3e5033b42de22d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_banach, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:56:20 np0005604790 systemd[1]: Started libpod-conmon-0d19cfedb2abf3b672e9e031a5a6f779305db7b74b2bbfa1fa3e5033b42de22d.scope.
Feb  2 04:56:20 np0005604790 podman[237476]: 2026-02-02 09:56:20.46029706 +0000 UTC m=+0.022492643 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:56:20 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:56:20 np0005604790 podman[237476]: 2026-02-02 09:56:20.590605856 +0000 UTC m=+0.152801449 container init 0d19cfedb2abf3b672e9e031a5a6f779305db7b74b2bbfa1fa3e5033b42de22d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_banach, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 04:56:20 np0005604790 podman[237476]: 2026-02-02 09:56:20.599804558 +0000 UTC m=+0.162000091 container start 0d19cfedb2abf3b672e9e031a5a6f779305db7b74b2bbfa1fa3e5033b42de22d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_banach, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 04:56:20 np0005604790 interesting_banach[237508]: 167 167
Feb  2 04:56:20 np0005604790 systemd[1]: libpod-0d19cfedb2abf3b672e9e031a5a6f779305db7b74b2bbfa1fa3e5033b42de22d.scope: Deactivated successfully.
Feb  2 04:56:20 np0005604790 podman[237476]: 2026-02-02 09:56:20.607371487 +0000 UTC m=+0.169567050 container attach 0d19cfedb2abf3b672e9e031a5a6f779305db7b74b2bbfa1fa3e5033b42de22d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_banach, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 04:56:20 np0005604790 podman[237476]: 2026-02-02 09:56:20.608090666 +0000 UTC m=+0.170286189 container died 0d19cfedb2abf3b672e9e031a5a6f779305db7b74b2bbfa1fa3e5033b42de22d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_banach, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:56:20 np0005604790 systemd[1]: var-lib-containers-storage-overlay-dfb3bab2a9da38e17d1843335bc84b4eb0b67bd5f92bf9ced318bf64e6b07ded-merged.mount: Deactivated successfully.
Feb  2 04:56:20 np0005604790 podman[237476]: 2026-02-02 09:56:20.647206765 +0000 UTC m=+0.209402298 container remove 0d19cfedb2abf3b672e9e031a5a6f779305db7b74b2bbfa1fa3e5033b42de22d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_banach, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb  2 04:56:20 np0005604790 systemd[1]: libpod-conmon-0d19cfedb2abf3b672e9e031a5a6f779305db7b74b2bbfa1fa3e5033b42de22d.scope: Deactivated successfully.
Feb  2 04:56:20 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:20 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Feb  2 04:56:20 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:20 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Feb  2 04:56:20 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:20 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Feb  2 04:56:20 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:20 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Feb  2 04:56:20 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:20 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Feb  2 04:56:20 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:20 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Feb  2 04:56:20 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:20 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Feb  2 04:56:20 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:20 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Feb  2 04:56:20 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:20 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Feb  2 04:56:20 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:20 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Feb  2 04:56:20 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:20 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Feb  2 04:56:20 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:20 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Feb  2 04:56:20 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:20 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Feb  2 04:56:20 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:20 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Feb  2 04:56:20 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:20 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Feb  2 04:56:20 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:20 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Feb  2 04:56:20 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:20 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Feb  2 04:56:20 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:20 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Feb  2 04:56:20 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:20 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Feb  2 04:56:20 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:20 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Feb  2 04:56:20 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:20 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Feb  2 04:56:20 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:20 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Feb  2 04:56:20 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:20 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Feb  2 04:56:20 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:20 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Feb  2 04:56:20 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:20 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Feb  2 04:56:20 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:20 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Feb  2 04:56:20 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:20 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Feb  2 04:56:20 np0005604790 podman[237534]: 2026-02-02 09:56:20.838402962 +0000 UTC m=+0.073995316 container create e5824e29c3f873f9299761267648b1a1bec54356576388af4fa8da3383ad68ba (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_wilbur, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 04:56:20 np0005604790 podman[237534]: 2026-02-02 09:56:20.799602062 +0000 UTC m=+0.035194536 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:56:20 np0005604790 systemd[1]: Started libpod-conmon-e5824e29c3f873f9299761267648b1a1bec54356576388af4fa8da3383ad68ba.scope.
Feb  2 04:56:20 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:56:20 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64374ea2cbb930442c24097c7bb7571f71c8ce46756e26445aa995f2b73b8064/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 04:56:20 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64374ea2cbb930442c24097c7bb7571f71c8ce46756e26445aa995f2b73b8064/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:56:20 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64374ea2cbb930442c24097c7bb7571f71c8ce46756e26445aa995f2b73b8064/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:56:20 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64374ea2cbb930442c24097c7bb7571f71c8ce46756e26445aa995f2b73b8064/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 04:56:20 np0005604790 podman[237534]: 2026-02-02 09:56:20.973922676 +0000 UTC m=+0.209515090 container init e5824e29c3f873f9299761267648b1a1bec54356576388af4fa8da3383ad68ba (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_wilbur, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS)
Feb  2 04:56:20 np0005604790 podman[237534]: 2026-02-02 09:56:20.984762641 +0000 UTC m=+0.220355005 container start e5824e29c3f873f9299761267648b1a1bec54356576388af4fa8da3383ad68ba (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_wilbur, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Feb  2 04:56:20 np0005604790 podman[237534]: 2026-02-02 09:56:20.992890095 +0000 UTC m=+0.228482509 container attach e5824e29c3f873f9299761267648b1a1bec54356576388af4fa8da3383ad68ba (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_wilbur, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Feb  2 04:56:21 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:56:21 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:56:21 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:56:21.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:56:21 np0005604790 cool_wilbur[237572]: {
Feb  2 04:56:21 np0005604790 cool_wilbur[237572]:    "1": [
Feb  2 04:56:21 np0005604790 cool_wilbur[237572]:        {
Feb  2 04:56:21 np0005604790 cool_wilbur[237572]:            "devices": [
Feb  2 04:56:21 np0005604790 cool_wilbur[237572]:                "/dev/loop3"
Feb  2 04:56:21 np0005604790 cool_wilbur[237572]:            ],
Feb  2 04:56:21 np0005604790 cool_wilbur[237572]:            "lv_name": "ceph_lv0",
Feb  2 04:56:21 np0005604790 cool_wilbur[237572]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 04:56:21 np0005604790 cool_wilbur[237572]:            "lv_size": "21470642176",
Feb  2 04:56:21 np0005604790 cool_wilbur[237572]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d241d473-9fcb-5f74-b163-f1ca4454e7f1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fabfc705-a3af-416c-81a4-3fd4d777fb5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 04:56:21 np0005604790 cool_wilbur[237572]:            "lv_uuid": "lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a",
Feb  2 04:56:21 np0005604790 cool_wilbur[237572]:            "name": "ceph_lv0",
Feb  2 04:56:21 np0005604790 cool_wilbur[237572]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 04:56:21 np0005604790 cool_wilbur[237572]:            "tags": {
Feb  2 04:56:21 np0005604790 cool_wilbur[237572]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 04:56:21 np0005604790 cool_wilbur[237572]:                "ceph.block_uuid": "lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a",
Feb  2 04:56:21 np0005604790 cool_wilbur[237572]:                "ceph.cephx_lockbox_secret": "",
Feb  2 04:56:21 np0005604790 cool_wilbur[237572]:                "ceph.cluster_fsid": "d241d473-9fcb-5f74-b163-f1ca4454e7f1",
Feb  2 04:56:21 np0005604790 cool_wilbur[237572]:                "ceph.cluster_name": "ceph",
Feb  2 04:56:21 np0005604790 cool_wilbur[237572]:                "ceph.crush_device_class": "",
Feb  2 04:56:21 np0005604790 cool_wilbur[237572]:                "ceph.encrypted": "0",
Feb  2 04:56:21 np0005604790 cool_wilbur[237572]:                "ceph.osd_fsid": "fabfc705-a3af-416c-81a4-3fd4d777fb5f",
Feb  2 04:56:21 np0005604790 cool_wilbur[237572]:                "ceph.osd_id": "1",
Feb  2 04:56:21 np0005604790 cool_wilbur[237572]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 04:56:21 np0005604790 cool_wilbur[237572]:                "ceph.type": "block",
Feb  2 04:56:21 np0005604790 cool_wilbur[237572]:                "ceph.vdo": "0",
Feb  2 04:56:21 np0005604790 cool_wilbur[237572]:                "ceph.with_tpm": "0"
Feb  2 04:56:21 np0005604790 cool_wilbur[237572]:            },
Feb  2 04:56:21 np0005604790 cool_wilbur[237572]:            "type": "block",
Feb  2 04:56:21 np0005604790 cool_wilbur[237572]:            "vg_name": "ceph_vg0"
Feb  2 04:56:21 np0005604790 cool_wilbur[237572]:        }
Feb  2 04:56:21 np0005604790 cool_wilbur[237572]:    ]
Feb  2 04:56:21 np0005604790 cool_wilbur[237572]: }
Feb  2 04:56:21 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:21 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f20a8000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:56:21 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:21 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f209c001c40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:56:21 np0005604790 systemd[1]: libpod-e5824e29c3f873f9299761267648b1a1bec54356576388af4fa8da3383ad68ba.scope: Deactivated successfully.
Feb  2 04:56:21 np0005604790 podman[237534]: 2026-02-02 09:56:21.343061653 +0000 UTC m=+0.578654017 container died e5824e29c3f873f9299761267648b1a1bec54356576388af4fa8da3383ad68ba (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_wilbur, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 04:56:21 np0005604790 systemd[1]: var-lib-containers-storage-overlay-64374ea2cbb930442c24097c7bb7571f71c8ce46756e26445aa995f2b73b8064-merged.mount: Deactivated successfully.
Feb  2 04:56:21 np0005604790 podman[237534]: 2026-02-02 09:56:21.403760699 +0000 UTC m=+0.639353053 container remove e5824e29c3f873f9299761267648b1a1bec54356576388af4fa8da3383ad68ba (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_wilbur, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Feb  2 04:56:21 np0005604790 systemd[1]: libpod-conmon-e5824e29c3f873f9299761267648b1a1bec54356576388af4fa8da3383ad68ba.scope: Deactivated successfully.
Feb  2 04:56:21 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:56:21 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:56:21 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:56:21.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:56:21 np0005604790 python3.9[237697]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb  2 04:56:21 np0005604790 systemd[1]: Reloading.
Feb  2 04:56:21 np0005604790 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 04:56:21 np0005604790 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 04:56:22 np0005604790 podman[237844]: 2026-02-02 09:56:22.021967175 +0000 UTC m=+0.061905489 container create 86b3ef91ba39f6dc0f3650d8cb2b5ac9ed2309a2f2f2d4cb8a1372abe9b6903b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_tesla, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Feb  2 04:56:22 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:56:22 np0005604790 systemd[1]: Started libpod-conmon-86b3ef91ba39f6dc0f3650d8cb2b5ac9ed2309a2f2f2d4cb8a1372abe9b6903b.scope.
Feb  2 04:56:22 np0005604790 podman[237844]: 2026-02-02 09:56:21.994663647 +0000 UTC m=+0.034602001 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:56:22 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:56:22 np0005604790 podman[237844]: 2026-02-02 09:56:22.145676538 +0000 UTC m=+0.185614902 container init 86b3ef91ba39f6dc0f3650d8cb2b5ac9ed2309a2f2f2d4cb8a1372abe9b6903b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 04:56:22 np0005604790 podman[237844]: 2026-02-02 09:56:22.153515494 +0000 UTC m=+0.193453828 container start 86b3ef91ba39f6dc0f3650d8cb2b5ac9ed2309a2f2f2d4cb8a1372abe9b6903b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True)
Feb  2 04:56:22 np0005604790 systemd[1]: libpod-86b3ef91ba39f6dc0f3650d8cb2b5ac9ed2309a2f2f2d4cb8a1372abe9b6903b.scope: Deactivated successfully.
Feb  2 04:56:22 np0005604790 objective_tesla[237884]: 167 167
Feb  2 04:56:22 np0005604790 podman[237844]: 2026-02-02 09:56:22.164421671 +0000 UTC m=+0.204359995 container attach 86b3ef91ba39f6dc0f3650d8cb2b5ac9ed2309a2f2f2d4cb8a1372abe9b6903b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_tesla, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Feb  2 04:56:22 np0005604790 conmon[237884]: conmon 86b3ef91ba39f6dc0f36 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-86b3ef91ba39f6dc0f3650d8cb2b5ac9ed2309a2f2f2d4cb8a1372abe9b6903b.scope/container/memory.events
Feb  2 04:56:22 np0005604790 podman[237844]: 2026-02-02 09:56:22.165238802 +0000 UTC m=+0.205177146 container died 86b3ef91ba39f6dc0f3650d8cb2b5ac9ed2309a2f2f2d4cb8a1372abe9b6903b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_tesla, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 04:56:22 np0005604790 systemd[1]: var-lib-containers-storage-overlay-fd9a62e3891e02713d27f1a8dbf617d33d38dd4bbbdbc27928c0e64063fab5cf-merged.mount: Deactivated successfully.
Feb  2 04:56:22 np0005604790 podman[237844]: 2026-02-02 09:56:22.223224427 +0000 UTC m=+0.263162731 container remove 86b3ef91ba39f6dc0f3650d8cb2b5ac9ed2309a2f2f2d4cb8a1372abe9b6903b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_tesla, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Feb  2 04:56:22 np0005604790 systemd[1]: libpod-conmon-86b3ef91ba39f6dc0f3650d8cb2b5ac9ed2309a2f2f2d4cb8a1372abe9b6903b.scope: Deactivated successfully.
Feb  2 04:56:22 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v501: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1023 B/s wr, 3 op/s
Feb  2 04:56:22 np0005604790 podman[237983]: 2026-02-02 09:56:22.372587754 +0000 UTC m=+0.050724294 container create e404147bb34d4266a480964055350d66cf69fa3930521b66688dbcf717231dd6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_lehmann, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1)
Feb  2 04:56:22 np0005604790 systemd[1]: Started libpod-conmon-e404147bb34d4266a480964055350d66cf69fa3930521b66688dbcf717231dd6.scope.
Feb  2 04:56:22 np0005604790 podman[237983]: 2026-02-02 09:56:22.347915646 +0000 UTC m=+0.026052236 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:56:22 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:56:22 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77c218b28cb3dd015799993229986956d3ea28680abf85d28bab75a073ddf218/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 04:56:22 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77c218b28cb3dd015799993229986956d3ea28680abf85d28bab75a073ddf218/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:56:22 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77c218b28cb3dd015799993229986956d3ea28680abf85d28bab75a073ddf218/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:56:22 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77c218b28cb3dd015799993229986956d3ea28680abf85d28bab75a073ddf218/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 04:56:22 np0005604790 podman[237983]: 2026-02-02 09:56:22.481813527 +0000 UTC m=+0.159950067 container init e404147bb34d4266a480964055350d66cf69fa3930521b66688dbcf717231dd6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 04:56:22 np0005604790 podman[237983]: 2026-02-02 09:56:22.48764536 +0000 UTC m=+0.165781860 container start e404147bb34d4266a480964055350d66cf69fa3930521b66688dbcf717231dd6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb  2 04:56:22 np0005604790 podman[237983]: 2026-02-02 09:56:22.49105919 +0000 UTC m=+0.169195750 container attach e404147bb34d4266a480964055350d66cf69fa3930521b66688dbcf717231dd6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_lehmann, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 04:56:22 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:22 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f207c000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:56:22 np0005604790 python3.9[238049]: ansible-ansible.builtin.service_facts Invoked
Feb  2 04:56:22 np0005604790 network[238071]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb  2 04:56:22 np0005604790 network[238073]: 'network-scripts' will be removed from distribution in near future.
Feb  2 04:56:22 np0005604790 network[238074]: It is advised to switch to 'NetworkManager' instead for network management.
Feb  2 04:56:23 np0005604790 objective_lehmann[238050]: {}
Feb  2 04:56:23 np0005604790 podman[237983]: 2026-02-02 09:56:23.236639646 +0000 UTC m=+0.914776196 container died e404147bb34d4266a480964055350d66cf69fa3930521b66688dbcf717231dd6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_lehmann, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 04:56:23 np0005604790 systemd[1]: libpod-e404147bb34d4266a480964055350d66cf69fa3930521b66688dbcf717231dd6.scope: Deactivated successfully.
Feb  2 04:56:23 np0005604790 systemd[1]: libpod-e404147bb34d4266a480964055350d66cf69fa3930521b66688dbcf717231dd6.scope: Consumed 1.138s CPU time.
Feb  2 04:56:23 np0005604790 systemd[1]: var-lib-containers-storage-overlay-77c218b28cb3dd015799993229986956d3ea28680abf85d28bab75a073ddf218-merged.mount: Deactivated successfully.
Feb  2 04:56:23 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:56:23 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:56:23 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:56:23.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:56:23 np0005604790 podman[237983]: 2026-02-02 09:56:23.293643255 +0000 UTC m=+0.971779795 container remove e404147bb34d4266a480964055350d66cf69fa3930521b66688dbcf717231dd6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_lehmann, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:56:23 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:23 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f20a8000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:56:23 np0005604790 systemd[1]: libpod-conmon-e404147bb34d4266a480964055350d66cf69fa3930521b66688dbcf717231dd6.scope: Deactivated successfully.
Feb  2 04:56:23 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:23 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2098001230 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:56:23 np0005604790 lvm[238165]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 04:56:23 np0005604790 lvm[238165]: VG ceph_vg0 finished
Feb  2 04:56:23 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 04:56:23 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:56:23 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 04:56:23 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:56:23 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:56:23 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:56:23 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:56:23.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:56:24 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v502: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1023 B/s wr, 4 op/s
Feb  2 04:56:24 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:56:24 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:56:24 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-nfs-cephfs-compute-0-ooxkuo[97796]: [WARNING] 032/095624 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Feb  2 04:56:24 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:24 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2098001230 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:56:24 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:09:56:24] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Feb  2 04:56:24 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:09:56:24] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Feb  2 04:56:25 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:56:25 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:56:25 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:56:25.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:56:25 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:25 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f209c001c40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:56:25 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:25 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f20a8001f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:56:25 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:56:25 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:56:25 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:56:25.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:56:26 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v503: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 426 B/s wr, 2 op/s
Feb  2 04:56:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:26 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f207c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:56:27 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:56:27 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:56:27.048Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:56:27 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:56:27 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:56:27 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:56:27.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:56:27 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:27 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f20980021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:56:27 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:27 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f209c002b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:56:27 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:56:27 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:56:27 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:56:27.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:56:28 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v504: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 426 B/s wr, 2 op/s
Feb  2 04:56:28 np0005604790 python3.9[238461]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 04:56:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:28 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f20a8001f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:56:29 np0005604790 systemd[1]: virtnodedevd.service: Deactivated successfully.
Feb  2 04:56:29 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:56:29 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:56:29 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:56:29.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 04:56:29 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:29 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f207c001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:56:29 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:29 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f20980021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:56:29 np0005604790 python3.9[238614]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 04:56:29 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:56:29 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:56:29 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:56:29.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:56:30 np0005604790 python3.9[238770]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 04:56:30 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v505: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Feb  2 04:56:30 np0005604790 systemd[1]: virtproxyd.service: Deactivated successfully.
Feb  2 04:56:30 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:30 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f209c002b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:56:30 np0005604790 python3.9[238924]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 04:56:31 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:56:31 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:56:31 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:56:31.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:56:31 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:31 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f20a8008dc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:56:31 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:31 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f207c001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:56:31 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:56:31 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:56:31 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:56:31.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:56:31 np0005604790 python3.9[239077]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 04:56:32 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:56:32 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 04:56:32 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 04:56:32 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v506: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Feb  2 04:56:32 np0005604790 python3.9[239232]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 04:56:32 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:32 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f20980021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:56:33 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:56:33 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:56:33 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:56:33.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 04:56:33 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:33 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f209c002b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:56:33 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:33 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f20a8008f40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:56:33 np0005604790 python3.9[239385]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 04:56:33 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:56:33 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:56:33 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:56:33.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:56:34 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v507: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Feb  2 04:56:34 np0005604790 python3.9[239540]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 04:56:34 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:34 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f207c001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:56:34 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:09:56:34] "GET /metrics HTTP/1.1" 200 48341 "" "Prometheus/2.51.0"
Feb  2 04:56:34 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:09:56:34] "GET /metrics HTTP/1.1" 200 48341 "" "Prometheus/2.51.0"
Feb  2 04:56:35 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:56:35 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:56:35 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:56:35.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 04:56:35 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:35 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f20980035d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:56:35 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:35 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f209c002b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:56:35 np0005604790 python3.9[239693]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:56:35 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:56:35 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:56:35 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:56:35.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:56:35 np0005604790 python3.9[239846]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:56:36 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v508: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:56:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:36 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f20a8009860 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:56:36 np0005604790 python3.9[240022]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:56:37 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:56:37 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:56:37.048Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:56:37 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:56:37 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:56:37 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:56:37.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:56:37 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:37 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f207c0032f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:56:37 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:37 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f20980035d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:56:37 np0005604790 python3.9[240176]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:56:37 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-nfs-cephfs-compute-0-ooxkuo[97796]: [WARNING] 032/095637 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Feb  2 04:56:37 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:56:37 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:56:37 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:56:37.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 04:56:38 np0005604790 python3.9[240330]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:56:38 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v509: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Feb  2 04:56:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:38 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f209c002b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:56:38 np0005604790 podman[240454]: 2026-02-02 09:56:38.892662881 +0000 UTC m=+0.162776586 container health_status e39122d9482da8df802204ab6c35fb7c982874580968140e6f06bdfc8eefae36 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller)
Feb  2 04:56:39 np0005604790 python3.9[240496]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:56:39 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:56:39 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:56:39 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:56:39.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:56:39 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:39 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f20a8009860 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:56:39 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:39 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f207c0032f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:56:39 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:56:39 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:56:39 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:56:39.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:56:39 np0005604790 python3.9[240661]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:56:40 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v510: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb  2 04:56:40 np0005604790 python3.9[240814]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:56:40 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:40 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f20980042e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:56:40 np0005604790 systemd[1]: virtsecretd.service: Deactivated successfully.
Feb  2 04:56:40 np0005604790 systemd[1]: virtqemud.service: Deactivated successfully.
Feb  2 04:56:41 np0005604790 python3.9[240968]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:56:41 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:56:41 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:56:41 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:56:41.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:56:41 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:41 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f209c002b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:56:41 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:41 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f20a800a180 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:56:41 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:56:41 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:56:41 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:56:41.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 04:56:41 np0005604790 python3.9[241121]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:56:42 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:56:42 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v511: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb  2 04:56:42 np0005604790 python3.9[241274]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:56:42 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:42 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f207c004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:56:43 np0005604790 python3.9[241426]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:56:43 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:56:43 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:56:43 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:56:43.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 04:56:43 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:43 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f20980042e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:56:43 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:43 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f209c002b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:56:43 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:56:43 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:56:43 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:56:43.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:56:44 np0005604790 python3.9[241579]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:56:44 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v512: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:56:44 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:44 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f20a800a180 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:56:44 np0005604790 python3.9[241732]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:56:44 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:09:56:44] "GET /metrics HTTP/1.1" 200 48341 "" "Prometheus/2.51.0"
Feb  2 04:56:44 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:09:56:44] "GET /metrics HTTP/1.1" 200 48341 "" "Prometheus/2.51.0"
Feb  2 04:56:45 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:56:45 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.002000053s ======
Feb  2 04:56:45 np0005604790 python3.9[241884]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:56:45 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:56:45.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Feb  2 04:56:45 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:45 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f207c004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:56:45 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:45 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f20980042e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:56:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:56:45.368 165364 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 04:56:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:56:45.368 165364 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 04:56:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:56:45.369 165364 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 04:56:45 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:56:45 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:56:45 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:56:45.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 04:56:46 np0005604790 python3.9[242037]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:56:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:46 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 04:56:46 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v513: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb  2 04:56:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:46 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f209c002b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:56:46 np0005604790 python3.9[242190]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:56:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:56:47 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:56:47.049Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:56:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 04:56:47 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
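
The handle_command/audit pair shows the mgr (mgr.14760, entity mgr.compute-0.djvyfo) polling the monitor with an "osd blocklist ls" mon command in JSON form; this repeats on a fixed interval later in the section. The same command can be issued through the python-rados binding; the conffile path below assumes a readable client configuration and keyring:

    import json
    import rados

    with rados.Rados(conffile='/etc/ceph/ceph.conf') as cluster:
        ret, outbuf, outs = cluster.mon_command(
            json.dumps({"prefix": "osd blocklist ls", "format": "json"}), b'')
        print(ret, outs, json.loads(outbuf or b'[]'))
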
Feb  2 04:56:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:56:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:56:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:56:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:56:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:56:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:56:47 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:56:47 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:56:47 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:56:47.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:56:47 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:47 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f20a800a180 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:56:47 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:47 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f207c004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:56:47 np0005604790 podman[242269]: 2026-02-02 09:56:47.34842303 +0000 UTC m=+0.064015048 container health_status 29bea7eb4976451e3dfdb1e9a3aba30b10e64cbce131bf8d09548b4a224f8ee8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Feb  2 04:56:47 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:56:47 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:56:47 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:56:47.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 04:56:47 np0005604790 python3.9[242363]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Feb  2 04:56:48 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v514: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Feb  2 04:56:48 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:48 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f207c004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:56:49 np0005604790 python3.9[242516]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb  2 04:56:49 np0005604790 systemd[1]: Reloading.
Feb  2 04:56:49 np0005604790 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 04:56:49 np0005604790 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 04:56:49 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:49 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 04:56:49 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:49 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 04:56:49 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:56:49 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:56:49 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:56:49.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:56:49 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:49 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f209c002b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:56:49 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:49 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f209c002b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:56:49 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:56:49 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:56:49 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:56:49.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:56:50 np0005604790 python3.9[242705]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:56:50 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v515: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 596 B/s wr, 1 op/s
Feb  2 04:56:50 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:50 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f209c002b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:56:50 np0005604790 python3.9[242858]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:56:51 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:56:51 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:56:51 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:56:51.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:56:51 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:51 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f20980042e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:56:51 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:51 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f207c004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:56:51 np0005604790 python3.9[243011]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:56:51 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:56:51 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:56:51 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:56:51.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:56:52 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:56:52 np0005604790 python3.9[243166]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:56:52 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:52 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Feb  2 04:56:52 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v516: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Feb  2 04:56:52 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:52 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2074000d00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:56:52 np0005604790 python3.9[243320]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:56:53 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:56:53 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:56:53 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:56:53.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 04:56:53 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:53 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f209c002b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:56:53 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:53 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f20a0000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:56:53 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:56:53 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:56:53 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:56:53.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:56:53 np0005604790 python3.9[243473]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:56:54 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v517: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Feb  2 04:56:54 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:54 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f207c004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:56:54 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:09:56:54] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Feb  2 04:56:54 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:09:56:54] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Feb  2 04:56:55 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:56:55 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:56:55 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:56:55.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:56:55 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:55 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2074001820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:56:55 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:55 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f209c002b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:56:55 np0005604790 python3.9[243628]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb  2 04:56:55 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:56:55 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:56:55 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:56:55.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:56:56 np0005604790 python3.9[243783]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
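
Between 04:56:50 and 04:56:56 the play issues one near-identical task per unit, calling systemctl reset-failed on each retired tripleo nova service; it reads as a loop over a unit list. A compact Python equivalent of what those eight tasks execute, with the unit names copied from the log lines (a sketch, not the playbook itself):

    import subprocess

    UNITS = [
        "tripleo_nova_compute", "tripleo_nova_migration_target",
        "tripleo_nova_api_cron", "tripleo_nova_api",
        "tripleo_nova_conductor", "tripleo_nova_metadata",
        "tripleo_nova_scheduler", "tripleo_nova_vnc_proxy",
    ]
    for unit in UNITS:
        # reset-failed clears a unit's "failed" state so later starts or
        # removals are not blocked by stale failure records.
        subprocess.run(["/usr/bin/systemctl", "reset-failed",
                        f"{unit}.service"], check=False)
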
Feb  2 04:56:56 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v518: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Feb  2 04:56:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:56 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f20a0001930 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:56:57 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:56:57 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:56:57.050Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb  2 04:56:57 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:56:57.051Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
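
Alertmanager keeps failing to deliver ceph-dashboard webhook notifications to compute-1 and compute-2: "dial tcp ... i/o timeout" means the TCP connection to port 8443 never completed, and "context deadline exceeded" means the whole notify attempt timed out, so the prometheus_receiver endpoints on those hosts are unreachable from this node (down or filtered). A plain TCP reachability check from this host, as a diagnostic sketch:

    import socket

    for host in ("compute-1.ctlplane.example.com",
                 "compute-2.ctlplane.example.com"):
        try:
            with socket.create_connection((host, 8443), timeout=5):
                print(host, "tcp/8443 reachable")
        except OSError as exc:
            print(host, "tcp/8443 unreachable:", exc)
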
Feb  2 04:56:57 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:56:57 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:56:57 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:56:57.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 04:56:57 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:57 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f207c004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:56:57 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:57 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2074001820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:56:57 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-nfs-cephfs-compute-0-ooxkuo[97796]: [WARNING] 032/095657 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 2ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Feb  2 04:56:57 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:56:57 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:56:57 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:56:57.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:56:57 np0005604790 python3.9[243962]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 04:56:58 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v519: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Feb  2 04:56:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:58 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f209c002b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:56:58 np0005604790 python3.9[244115]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 04:56:59 np0005604790 python3.9[244267]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 04:56:59 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:56:59 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:56:59 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:56:59.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:56:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:59 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f20a0001930 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:56:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:56:59 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f207c004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:56:59 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:56:59 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:56:59 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:56:59.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:57:00 np0005604790 python3.9[244421]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 04:57:00 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v520: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Feb  2 04:57:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:57:00 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2074001820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:57:00 np0005604790 python3.9[244573]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 04:57:01 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:57:01 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:57:01 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:57:01.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:57:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:57:01 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f209c002b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:57:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:57:01 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f20a0001930 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:57:01 np0005604790 python3.9[244725]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 04:57:01 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:57:01 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:57:01 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:57:01.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:57:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:57:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 04:57:02 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 04:57:02 np0005604790 python3.9[244879]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb  2 04:57:02 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v521: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Feb  2 04:57:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:57:02 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f207c004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:57:02 np0005604790 python3.9[245031]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb  2 04:57:03 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:57:03 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:57:03 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:57:03.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 04:57:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:57:03 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f207c004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:57:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:57:03 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f209c002b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:57:03 np0005604790 python3.9[245183]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb  2 04:57:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-nfs-cephfs-compute-0-ooxkuo[97796]: [WARNING] 032/095703 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Feb  2 04:57:03 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:57:03 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:57:03 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:57:03.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 04:57:04 np0005604790 python3.9[245337]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb  2 04:57:04 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v522: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Feb  2 04:57:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:57:04 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f209c002b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:57:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:09:57:04] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Feb  2 04:57:04 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:09:57:04] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Feb  2 04:57:05 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:57:05 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:57:05 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:57:05.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:57:05 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:57:05 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f207c004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:57:05 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:57:05 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f207c004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:57:05 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:57:05 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:57:05 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:57:05.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 04:57:06 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v523: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Feb  2 04:57:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:57:06 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f20a0002da0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:57:07 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:57:07 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:57:07.052Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:57:07 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:57:07 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:57:07 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:57:07.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:57:07 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:57:07 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f209c002b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:57:07 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:57:07 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2074002cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:57:07 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:57:07 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:57:07 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:57:07.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:57:08 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v524: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Feb  2 04:57:08 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:57:08 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f207c004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:57:09 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:57:09 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:57:09 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:57:09.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 04:57:09 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:57:09 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f20a0002da0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:57:09 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:57:09 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f209c002b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:57:09 np0005604790 podman[245366]: 2026-02-02 09:57:09.398583173 +0000 UTC m=+0.111255550 container health_status e39122d9482da8df802204ab6c35fb7c982874580968140e6f06bdfc8eefae36 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb  2 04:57:09 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:57:09 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:57:09 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:57:09.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:57:10 np0005604790 python3.9[245522]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Feb  2 04:57:10 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v525: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb  2 04:57:10 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:57:10 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f20740035d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:57:11 np0005604790 python3.9[245675]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Feb  2 04:57:11 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:57:11 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:57:11 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:57:11.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 04:57:11 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:57:11 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f20740035d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:57:11 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:57:11 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f20a0003ab0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:57:11 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:57:11 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:57:11 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:57:11.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:57:12 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:57:12 np0005604790 python3.9[245835]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Feb  2 04:57:12 np0005604790 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb  2 04:57:12 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v526: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb  2 04:57:12 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:57:12 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f209c002b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:57:12 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:57:12 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 04:57:13 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:57:13 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:57:13 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:57:13.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:57:13 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:57:13 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f20740035d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:57:13 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:57:13 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f207c004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:57:13 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:57:13 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:57:13 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:57:13.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:57:14 np0005604790 systemd-logind[793]: New session 55 of user zuul.
Feb  2 04:57:14 np0005604790 systemd[1]: Started Session 55 of User zuul.
Feb  2 04:57:14 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v527: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Feb  2 04:57:14 np0005604790 systemd[1]: session-55.scope: Deactivated successfully.
Feb  2 04:57:14 np0005604790 systemd-logind[793]: Session 55 logged out. Waiting for processes to exit.
Feb  2 04:57:14 np0005604790 systemd-logind[793]: Removed session 55.
Feb  2 04:57:14 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:57:14 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f20a0003ab0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:57:14 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:09:57:14] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Feb  2 04:57:14 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:09:57:14] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Feb  2 04:57:15 np0005604790 python3.9[246024]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:57:15 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:57:15 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:57:15 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f209c002b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:57:15 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:57:15 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:57:15.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:57:15 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:57:15 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f20740035d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:57:15 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:57:15 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.002000053s ======
Feb  2 04:57:15 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:57:15.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Feb  2 04:57:15 np0005604790 python3.9[246145]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770026234.7877471-2654-163566656978486/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb  2 04:57:15 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:57:15 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 04:57:15 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:57:15 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 04:57:16 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v528: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Feb  2 04:57:16 np0005604790 python3.9[246297]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:57:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:57:16 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f207c004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:57:16 np0005604790 python3.9[246373]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb  2 04:57:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:57:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:57:17.053Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb  2 04:57:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:57:17.054Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb  2 04:57:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] Optimize plan auto_2026-02-02_09:57:17
Feb  2 04:57:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 04:57:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] do_upmap
Feb  2 04:57:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] pools ['volumes', 'default.rgw.meta', '.nfs', 'images', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.log', '.rgw.root', 'vms', 'backups', '.mgr', 'cephfs.cephfs.data']
Feb  2 04:57:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 04:57:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 04:57:17 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 04:57:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:57:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:57:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:57:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:57:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:57:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:57:17 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:57:17 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:57:17 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:57:17.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 04:57:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:57:17 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f207c004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:57:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:57:17 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f207c004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:57:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 04:57:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 04:57:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 04:57:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 04:57:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 04:57:17 np0005604790 python3.9[246548]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:57:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 04:57:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:57:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 04:57:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:57:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 04:57:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:57:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 04:57:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:57:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 04:57:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:57:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 04:57:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:57:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Feb  2 04:57:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:57:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 04:57:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:57:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 04:57:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:57:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Feb  2 04:57:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:57:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 04:57:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:57:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 04:57:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 04:57:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
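
The pg target in each autoscaler line above is capacity_ratio × bias × an overall PG budget. The budget that reproduces these numbers is 300, consistent with 3 OSDs at the default mon_target_pg_per_osd of 100 (an inference from the arithmetic, not logged directly; the 64411926528-byte capacity in the effective_target_ratio lines likewise matches the pgmap's 60 GiB total). The result is then quantized to a power of two and compared against the pool's current pg_num. A worked check:

# Worked check of the pg_autoscaler arithmetic in the log above:
# pg_target = capacity_ratio * bias * PG_BUDGET, with PG_BUDGET = 300
# (assumed: 3 OSDs * default mon_target_pg_per_osd=100).
PG_BUDGET = 300

def pg_target(capacity_ratio: float, bias: float) -> float:
    return capacity_ratio * bias * PG_BUDGET

# Pool '.mgr': 7.185749983720779e-06 * 1.0 * 300
# -> 0.0021557249951162337, exactly the logged value (quantized to 1).
print(pg_target(7.185749983720779e-06, 1.0))

# Pool 'cephfs.cephfs.meta': 5.087256625643029e-07 * 4.0 * 300
# -> 0.0006104707950771635, matching the logged pg target before it is
# quantized to a power of two (16) and compared to the current pg_num (32).
print(pg_target(5.087256625643029e-07, 4.0))
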
Feb  2 04:57:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 04:57:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 04:57:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 04:57:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 04:57:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 04:57:17 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:57:17 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:57:17 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:57:17.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:57:17 np0005604790 podman[246645]: 2026-02-02 09:57:17.758985445 +0000 UTC m=+0.072275976 container health_status 29bea7eb4976451e3dfdb1e9a3aba30b10e64cbce131bf8d09548b4a224f8ee8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb  2 04:57:17 np0005604790 python3.9[246681]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770026236.9120324-2654-173834251089563/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb  2 04:57:18 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v529: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 2 op/s
Feb  2 04:57:18 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:57:18 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f207c004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:57:18 np0005604790 python3.9[246841]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:57:18 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:57:18 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Feb  2 04:57:19 np0005604790 python3.9[246962]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770026238.0385654-2654-217839403562318/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb  2 04:57:19 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:57:19 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:57:19 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:57:19.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:57:19 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:57:19 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f20740035d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:57:19 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:57:19 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f209c002b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:57:19 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:57:19 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:57:19 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:57:19.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:57:19 np0005604790 python3.9[247113]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:57:20 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v530: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 2 op/s
Feb  2 04:57:20 np0005604790 python3.9[247235]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770026239.4683983-2654-266850087728230/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb  2 04:57:20 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:57:20 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2078000d00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:57:21 np0005604790 python3.9[247385]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:57:21 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:57:21 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:57:21 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f207c004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:57:21 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:57:21 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:57:21.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:57:21 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:57:21 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f20740035d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:57:21 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:57:21 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:57:21 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:57:21.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:57:21 np0005604790 python3.9[247506]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770026240.7074091-2654-139062102723534/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb  2 04:57:22 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:57:22 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v531: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 2 op/s
Feb  2 04:57:22 np0005604790 python3.9[247660]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:57:22 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:57:22 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f209c002b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:57:23 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:57:23 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:57:23 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:57:23.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:57:23 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:57:23 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f207c004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:57:23 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:57:23 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f207c004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:57:23 np0005604790 python3.9[247812]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
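
Taken together, the two Ansible tasks above create /home/nova/.ssh with mode 0700 and install the already-staged public key as authorized_keys with mode 0600, everything owned by nova. A rough Python equivalent of that pair of tasks, using only the paths and modes visible in the log (run as root; this is a sketch, not the module's implementation):

import os
import shutil

ssh_dir = "/home/nova/.ssh"
src = "/var/lib/openstack/config/nova/ssh-publickey"   # remote_src=True: already on the host
dst = os.path.join(ssh_dir, "authorized_keys")

os.makedirs(ssh_dir, exist_ok=True)
os.chmod(ssh_dir, 0o700)
shutil.chown(ssh_dir, user="nova", group="nova")

shutil.copyfile(src, dst)
os.chmod(dst, 0o600)
shutil.chown(dst, user="nova", group="nova")
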
Feb  2 04:57:23 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-nfs-cephfs-compute-0-ooxkuo[97796]: [WARNING] 032/095723 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Feb  2 04:57:23 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:57:23 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:57:23 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:57:23.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:57:24 np0005604790 python3.9[248016]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 04:57:24 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v532: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Feb  2 04:57:24 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 04:57:24 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 04:57:24 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 04:57:24 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb  2 04:57:24 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 04:57:24 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:57:24 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Feb  2 04:57:24 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:57:24 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 04:57:24 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb  2 04:57:24 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 04:57:24 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb  2 04:57:24 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 04:57:24 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 04:57:24 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:57:24 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f20740035d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:57:24 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:09:57:24] "GET /metrics HTTP/1.1" 200 48347 "" "Prometheus/2.51.0"
Feb  2 04:57:24 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:09:57:24] "GET /metrics HTTP/1.1" 200 48347 "" "Prometheus/2.51.0"
Feb  2 04:57:24 np0005604790 python3.9[248249]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:57:25 np0005604790 podman[248290]: 2026-02-02 09:57:25.049081696 +0000 UTC m=+0.046498174 container create e02a159d2f2be67fb4ad088af3849ed1e14a81b58120b4ebca81907db03df30c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_burnell, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Feb  2 04:57:25 np0005604790 systemd[1]: Started libpod-conmon-e02a159d2f2be67fb4ad088af3849ed1e14a81b58120b4ebca81907db03df30c.scope.
Feb  2 04:57:25 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:57:25 np0005604790 podman[248290]: 2026-02-02 09:57:25.114563402 +0000 UTC m=+0.111979900 container init e02a159d2f2be67fb4ad088af3849ed1e14a81b58120b4ebca81907db03df30c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_burnell, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 04:57:25 np0005604790 podman[248290]: 2026-02-02 09:57:25.120444308 +0000 UTC m=+0.117860786 container start e02a159d2f2be67fb4ad088af3849ed1e14a81b58120b4ebca81907db03df30c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_burnell, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb  2 04:57:25 np0005604790 podman[248290]: 2026-02-02 09:57:25.123730495 +0000 UTC m=+0.121146973 container attach e02a159d2f2be67fb4ad088af3849ed1e14a81b58120b4ebca81907db03df30c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_burnell, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 04:57:25 np0005604790 silly_burnell[248341]: 167 167
Feb  2 04:57:25 np0005604790 systemd[1]: libpod-e02a159d2f2be67fb4ad088af3849ed1e14a81b58120b4ebca81907db03df30c.scope: Deactivated successfully.
Feb  2 04:57:25 np0005604790 podman[248290]: 2026-02-02 09:57:25.124445224 +0000 UTC m=+0.121861702 container died e02a159d2f2be67fb4ad088af3849ed1e14a81b58120b4ebca81907db03df30c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_burnell, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb  2 04:57:25 np0005604790 podman[248290]: 2026-02-02 09:57:25.032350422 +0000 UTC m=+0.029766920 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:57:25 np0005604790 systemd[1]: var-lib-containers-storage-overlay-d195d3f3f75e17133a2ed8038dd20b3fb52e94b8c61252b36a0f578c4dad649b-merged.mount: Deactivated successfully.
Feb  2 04:57:25 np0005604790 podman[248290]: 2026-02-02 09:57:25.180627033 +0000 UTC m=+0.178043541 container remove e02a159d2f2be67fb4ad088af3849ed1e14a81b58120b4ebca81907db03df30c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_burnell, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Feb  2 04:57:25 np0005604790 systemd[1]: libpod-conmon-e02a159d2f2be67fb4ad088af3849ed1e14a81b58120b4ebca81907db03df30c.scope: Deactivated successfully.
Feb  2 04:57:25 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:57:25 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:57:25 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:57:25 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f209c002b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:57:25 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:57:25.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 04:57:25 np0005604790 podman[248427]: 2026-02-02 09:57:25.356934526 +0000 UTC m=+0.049941794 container create 21d349c48c9837d8536a01e46287ac86bca292f237344f111939dcff44cafa70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_wozniak, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:57:25 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:57:25 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f207c004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:57:25 np0005604790 systemd[1]: Started libpod-conmon-21d349c48c9837d8536a01e46287ac86bca292f237344f111939dcff44cafa70.scope.
Feb  2 04:57:25 np0005604790 podman[248427]: 2026-02-02 09:57:25.331251116 +0000 UTC m=+0.024258414 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:57:25 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:57:25 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a317e1e6f8e7589128e2de806369d77ec1ea3969405a6b8e758520fc75ac106/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 04:57:25 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a317e1e6f8e7589128e2de806369d77ec1ea3969405a6b8e758520fc75ac106/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:57:25 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a317e1e6f8e7589128e2de806369d77ec1ea3969405a6b8e758520fc75ac106/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:57:25 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a317e1e6f8e7589128e2de806369d77ec1ea3969405a6b8e758520fc75ac106/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 04:57:25 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a317e1e6f8e7589128e2de806369d77ec1ea3969405a6b8e758520fc75ac106/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
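
The kernel's "supports timestamps until 2038 (0x7fffffff)" warnings above refer to the classic 32-bit time_t limit on these xfs inodes. Checking the number:

from datetime import datetime, timezone

limit = 0x7fffffff  # 2147483647 seconds after the Unix epoch
print(datetime.fromtimestamp(limit, tz=timezone.utc))
# -> 2038-01-19 03:14:07+00:00
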
Feb  2 04:57:25 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb  2 04:57:25 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:57:25 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:57:25 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb  2 04:57:25 np0005604790 podman[248427]: 2026-02-02 09:57:25.49062186 +0000 UTC m=+0.183629148 container init 21d349c48c9837d8536a01e46287ac86bca292f237344f111939dcff44cafa70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_wozniak, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb  2 04:57:25 np0005604790 podman[248427]: 2026-02-02 09:57:25.496959128 +0000 UTC m=+0.189966386 container start 21d349c48c9837d8536a01e46287ac86bca292f237344f111939dcff44cafa70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_wozniak, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Feb  2 04:57:25 np0005604790 podman[248427]: 2026-02-02 09:57:25.500134812 +0000 UTC m=+0.193142090 container attach 21d349c48c9837d8536a01e46287ac86bca292f237344f111939dcff44cafa70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_wozniak, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid)
Feb  2 04:57:25 np0005604790 python3.9[248467]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1770026244.4857855-2975-117106545687472/.source _original_basename=.69sn_alc follow=False checksum=3df8d806ae033842f0da2f11b3060e95a7a0b54b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Feb  2 04:57:25 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:57:25 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:57:25 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:57:25.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 04:57:25 np0005604790 dreamy_wozniak[248470]: --> passed data devices: 0 physical, 1 LVM
Feb  2 04:57:25 np0005604790 dreamy_wozniak[248470]: --> All data devices are unavailable
Feb  2 04:57:25 np0005604790 systemd[1]: libpod-21d349c48c9837d8536a01e46287ac86bca292f237344f111939dcff44cafa70.scope: Deactivated successfully.
Feb  2 04:57:25 np0005604790 podman[248513]: 2026-02-02 09:57:25.879704724 +0000 UTC m=+0.037376472 container died 21d349c48c9837d8536a01e46287ac86bca292f237344f111939dcff44cafa70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_wozniak, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:57:25 np0005604790 systemd[1]: var-lib-containers-storage-overlay-5a317e1e6f8e7589128e2de806369d77ec1ea3969405a6b8e758520fc75ac106-merged.mount: Deactivated successfully.
Feb  2 04:57:25 np0005604790 podman[248513]: 2026-02-02 09:57:25.9293612 +0000 UTC m=+0.087032918 container remove 21d349c48c9837d8536a01e46287ac86bca292f237344f111939dcff44cafa70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_wozniak, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Feb  2 04:57:25 np0005604790 systemd[1]: libpod-conmon-21d349c48c9837d8536a01e46287ac86bca292f237344f111939dcff44cafa70.scope: Deactivated successfully.
Feb  2 04:57:26 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v533: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Feb  2 04:57:26 np0005604790 podman[248745]: 2026-02-02 09:57:26.523884728 +0000 UTC m=+0.046836812 container create 9396fd97b1700fa9762ff7813989d99fa2bd704bf87629cc42e4ec0652686a2c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_jepsen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb  2 04:57:26 np0005604790 python3.9[248716]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 04:57:26 np0005604790 systemd[1]: Started libpod-conmon-9396fd97b1700fa9762ff7813989d99fa2bd704bf87629cc42e4ec0652686a2c.scope.
Feb  2 04:57:26 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:57:26 np0005604790 podman[248745]: 2026-02-02 09:57:26.501728861 +0000 UTC m=+0.024681015 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:57:26 np0005604790 podman[248745]: 2026-02-02 09:57:26.606661603 +0000 UTC m=+0.129613767 container init 9396fd97b1700fa9762ff7813989d99fa2bd704bf87629cc42e4ec0652686a2c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_jepsen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 04:57:26 np0005604790 podman[248745]: 2026-02-02 09:57:26.611996494 +0000 UTC m=+0.134948598 container start 9396fd97b1700fa9762ff7813989d99fa2bd704bf87629cc42e4ec0652686a2c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_jepsen, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Feb  2 04:57:26 np0005604790 podman[248745]: 2026-02-02 09:57:26.615700422 +0000 UTC m=+0.138652536 container attach 9396fd97b1700fa9762ff7813989d99fa2bd704bf87629cc42e4ec0652686a2c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_jepsen, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Feb  2 04:57:26 np0005604790 dreamy_jepsen[248763]: 167 167
Feb  2 04:57:26 np0005604790 systemd[1]: libpod-9396fd97b1700fa9762ff7813989d99fa2bd704bf87629cc42e4ec0652686a2c.scope: Deactivated successfully.
Feb  2 04:57:26 np0005604790 conmon[248763]: conmon 9396fd97b1700fa9762f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9396fd97b1700fa9762ff7813989d99fa2bd704bf87629cc42e4ec0652686a2c.scope/container/memory.events
Feb  2 04:57:26 np0005604790 podman[248745]: 2026-02-02 09:57:26.619129443 +0000 UTC m=+0.142081547 container died 9396fd97b1700fa9762ff7813989d99fa2bd704bf87629cc42e4ec0652686a2c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_jepsen, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Feb  2 04:57:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:57:26 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f207c004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:57:26 np0005604790 systemd[1]: var-lib-containers-storage-overlay-a699956f37ece7b191158b48932825683d6e79b68911aa0931e2e655fa67d04f-merged.mount: Deactivated successfully.
Feb  2 04:57:26 np0005604790 podman[248745]: 2026-02-02 09:57:26.663606222 +0000 UTC m=+0.186558326 container remove 9396fd97b1700fa9762ff7813989d99fa2bd704bf87629cc42e4ec0652686a2c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_jepsen, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb  2 04:57:26 np0005604790 systemd[1]: libpod-conmon-9396fd97b1700fa9762ff7813989d99fa2bd704bf87629cc42e4ec0652686a2c.scope: Deactivated successfully.
Feb  2 04:57:26 np0005604790 podman[248811]: 2026-02-02 09:57:26.833751622 +0000 UTC m=+0.056958721 container create ad4d61fb59f194ce49d665b855d69fbe25f5e270449a4c7bc17fa7a6b11075fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_chaplygin, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb  2 04:57:26 np0005604790 systemd[1]: Started libpod-conmon-ad4d61fb59f194ce49d665b855d69fbe25f5e270449a4c7bc17fa7a6b11075fb.scope.
Feb  2 04:57:26 np0005604790 podman[248811]: 2026-02-02 09:57:26.810365862 +0000 UTC m=+0.033573031 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:57:26 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:57:26 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2879622914d1138b98dda8760fdfcb46370f74a452782fb237ac3c820083b65/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 04:57:26 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2879622914d1138b98dda8760fdfcb46370f74a452782fb237ac3c820083b65/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:57:26 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2879622914d1138b98dda8760fdfcb46370f74a452782fb237ac3c820083b65/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:57:26 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2879622914d1138b98dda8760fdfcb46370f74a452782fb237ac3c820083b65/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 04:57:26 np0005604790 podman[248811]: 2026-02-02 09:57:26.925281869 +0000 UTC m=+0.148489048 container init ad4d61fb59f194ce49d665b855d69fbe25f5e270449a4c7bc17fa7a6b11075fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_chaplygin, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 04:57:26 np0005604790 podman[248811]: 2026-02-02 09:57:26.934602326 +0000 UTC m=+0.157809455 container start ad4d61fb59f194ce49d665b855d69fbe25f5e270449a4c7bc17fa7a6b11075fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_chaplygin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:57:26 np0005604790 podman[248811]: 2026-02-02 09:57:26.938281123 +0000 UTC m=+0.161488242 container attach ad4d61fb59f194ce49d665b855d69fbe25f5e270449a4c7bc17fa7a6b11075fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_chaplygin, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 04:57:27 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:57:27 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:57:27.055Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:57:27 np0005604790 objective_chaplygin[248873]: {
Feb  2 04:57:27 np0005604790 objective_chaplygin[248873]:    "1": [
Feb  2 04:57:27 np0005604790 objective_chaplygin[248873]:        {
Feb  2 04:57:27 np0005604790 objective_chaplygin[248873]:            "devices": [
Feb  2 04:57:27 np0005604790 objective_chaplygin[248873]:                "/dev/loop3"
Feb  2 04:57:27 np0005604790 objective_chaplygin[248873]:            ],
Feb  2 04:57:27 np0005604790 objective_chaplygin[248873]:            "lv_name": "ceph_lv0",
Feb  2 04:57:27 np0005604790 objective_chaplygin[248873]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 04:57:27 np0005604790 objective_chaplygin[248873]:            "lv_size": "21470642176",
Feb  2 04:57:27 np0005604790 objective_chaplygin[248873]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d241d473-9fcb-5f74-b163-f1ca4454e7f1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fabfc705-a3af-416c-81a4-3fd4d777fb5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 04:57:27 np0005604790 objective_chaplygin[248873]:            "lv_uuid": "lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a",
Feb  2 04:57:27 np0005604790 objective_chaplygin[248873]:            "name": "ceph_lv0",
Feb  2 04:57:27 np0005604790 objective_chaplygin[248873]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 04:57:27 np0005604790 objective_chaplygin[248873]:            "tags": {
Feb  2 04:57:27 np0005604790 objective_chaplygin[248873]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 04:57:27 np0005604790 objective_chaplygin[248873]:                "ceph.block_uuid": "lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a",
Feb  2 04:57:27 np0005604790 objective_chaplygin[248873]:                "ceph.cephx_lockbox_secret": "",
Feb  2 04:57:27 np0005604790 objective_chaplygin[248873]:                "ceph.cluster_fsid": "d241d473-9fcb-5f74-b163-f1ca4454e7f1",
Feb  2 04:57:27 np0005604790 objective_chaplygin[248873]:                "ceph.cluster_name": "ceph",
Feb  2 04:57:27 np0005604790 objective_chaplygin[248873]:                "ceph.crush_device_class": "",
Feb  2 04:57:27 np0005604790 objective_chaplygin[248873]:                "ceph.encrypted": "0",
Feb  2 04:57:27 np0005604790 objective_chaplygin[248873]:                "ceph.osd_fsid": "fabfc705-a3af-416c-81a4-3fd4d777fb5f",
Feb  2 04:57:27 np0005604790 objective_chaplygin[248873]:                "ceph.osd_id": "1",
Feb  2 04:57:27 np0005604790 objective_chaplygin[248873]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 04:57:27 np0005604790 objective_chaplygin[248873]:                "ceph.type": "block",
Feb  2 04:57:27 np0005604790 objective_chaplygin[248873]:                "ceph.vdo": "0",
Feb  2 04:57:27 np0005604790 objective_chaplygin[248873]:                "ceph.with_tpm": "0"
Feb  2 04:57:27 np0005604790 objective_chaplygin[248873]:            },
Feb  2 04:57:27 np0005604790 objective_chaplygin[248873]:            "type": "block",
Feb  2 04:57:27 np0005604790 objective_chaplygin[248873]:            "vg_name": "ceph_vg0"
Feb  2 04:57:27 np0005604790 objective_chaplygin[248873]:        }
Feb  2 04:57:27 np0005604790 objective_chaplygin[248873]:    ]
Feb  2 04:57:27 np0005604790 objective_chaplygin[248873]: }
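[editor's note] The JSON block above has the shape of `ceph-volume lvm list --format json` output: a map keyed by OSD id, each value a list of logical volumes whose "tags" object mirrors the comma-separated "lv_tags" string. A minimal sketch, assuming the payload has been captured into a string (field names as in the record above), for pulling out the OSD-to-device mapping:

    import json

    # Abridged subset of the listing logged above; in practice, capture the
    # container's stdout from `ceph-volume lvm list --format json`.
    payload = '''
    {
      "1": [
        {
          "devices": ["/dev/loop3"],
          "lv_path": "/dev/ceph_vg0/ceph_lv0",
          "tags": {
            "ceph.osd_fsid": "fabfc705-a3af-416c-81a4-3fd4d777fb5f",
            "ceph.type": "block"
          }
        }
      ]
    }
    '''

    def osd_device_map(listing: dict) -> dict:
        """Map OSD id -> backing devices, LV path, and osd_fsid."""
        out = {}
        for osd_id, lvs in listing.items():
            for lv in lvs:
                tags = lv.get("tags", {})
                out[osd_id] = {
                    "devices": lv.get("devices", []),
                    "lv_path": lv.get("lv_path"),
                    "osd_fsid": tags.get("ceph.osd_fsid"),
                    "type": tags.get("ceph.type"),
                }
        return out

    print(osd_device_map(json.loads(payload)))
    # {'1': {'devices': ['/dev/loop3'], 'lv_path': '/dev/ceph_vg0/ceph_lv0', ...}}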
Feb  2 04:57:27 np0005604790 systemd[1]: libpod-ad4d61fb59f194ce49d665b855d69fbe25f5e270449a4c7bc17fa7a6b11075fb.scope: Deactivated successfully.
Feb  2 04:57:27 np0005604790 podman[248811]: 2026-02-02 09:57:27.242404755 +0000 UTC m=+0.465611884 container died ad4d61fb59f194ce49d665b855d69fbe25f5e270449a4c7bc17fa7a6b11075fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 04:57:27 np0005604790 systemd[1]: var-lib-containers-storage-overlay-b2879622914d1138b98dda8760fdfcb46370f74a452782fb237ac3c820083b65-merged.mount: Deactivated successfully.
Feb  2 04:57:27 np0005604790 podman[248811]: 2026-02-02 09:57:27.293643203 +0000 UTC m=+0.516850322 container remove ad4d61fb59f194ce49d665b855d69fbe25f5e270449a4c7bc17fa7a6b11075fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 04:57:27 np0005604790 systemd[1]: libpod-conmon-ad4d61fb59f194ce49d665b855d69fbe25f5e270449a4c7bc17fa7a6b11075fb.scope: Deactivated successfully.
Feb  2 04:57:27 np0005604790 python3.9[248959]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:57:27 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:57:27 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:57:27 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:57:27.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:57:27 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:57:27 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f20740035d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:57:27 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:57:27 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f209c002b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:57:27 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:57:27 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:57:27 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:57:27.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
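[editor's note] The beast access lines from 192.168.122.100 and 192.168.122.102 recur roughly every two seconds as anonymous "HEAD /" requests, which look like endpoint health probes rather than client traffic. A small sketch for tallying them by source; the regex is derived from the line layout visible above, not from any radosgw specification:

    import re
    from collections import Counter

    # Matches the beast access-log layout seen above:
    # beast: 0x...: <ip> - <user> [<ts>] "<request>" <status> <bytes> ... latency=<sec>s
    BEAST = re.compile(
        r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) \[(?P<ts>[^\]]+)\] '
        r'"(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+).*latency=(?P<lat>[\d.]+)s'
    )

    def tally_probes(lines):
        """Count anonymous HEAD / requests per source IP."""
        hits = Counter()
        for line in lines:
            m = BEAST.search(line)
            if m and m.group("req").startswith("HEAD /"):
                hits[m.group("ip")] += 1
        return hits

    # e.g. tally_probes(open("/var/log/messages"))
    # -> Counter({'192.168.122.100': ..., '192.168.122.102': ...})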
Feb  2 04:57:27 np0005604790 python3.9[249157]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770026246.8256392-3053-10459506881251/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=aafdeb4849f80b4aa3d95767e2f1397576892cd0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb  2 04:57:27 np0005604790 podman[249186]: 2026-02-02 09:57:27.878293131 +0000 UTC m=+0.065256211 container create 554eccc1bb9a9f8d57b80711d039433cf0fd4a18903b8bb468d81a09dd1983c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_driscoll, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 04:57:27 np0005604790 systemd[1]: Started libpod-conmon-554eccc1bb9a9f8d57b80711d039433cf0fd4a18903b8bb468d81a09dd1983c1.scope.
Feb  2 04:57:27 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:57:27 np0005604790 podman[249186]: 2026-02-02 09:57:27.848395198 +0000 UTC m=+0.035358318 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:57:27 np0005604790 podman[249186]: 2026-02-02 09:57:27.946065447 +0000 UTC m=+0.133028547 container init 554eccc1bb9a9f8d57b80711d039433cf0fd4a18903b8bb468d81a09dd1983c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_driscoll, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Feb  2 04:57:27 np0005604790 podman[249186]: 2026-02-02 09:57:27.951532592 +0000 UTC m=+0.138495662 container start 554eccc1bb9a9f8d57b80711d039433cf0fd4a18903b8bb468d81a09dd1983c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_driscoll, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Feb  2 04:57:27 np0005604790 podman[249186]: 2026-02-02 09:57:27.954676985 +0000 UTC m=+0.141640085 container attach 554eccc1bb9a9f8d57b80711d039433cf0fd4a18903b8bb468d81a09dd1983c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_driscoll, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 04:57:27 np0005604790 flamboyant_driscoll[249215]: 167 167
Feb  2 04:57:27 np0005604790 systemd[1]: libpod-554eccc1bb9a9f8d57b80711d039433cf0fd4a18903b8bb468d81a09dd1983c1.scope: Deactivated successfully.
Feb  2 04:57:27 np0005604790 podman[249186]: 2026-02-02 09:57:27.956474523 +0000 UTC m=+0.143437623 container died 554eccc1bb9a9f8d57b80711d039433cf0fd4a18903b8bb468d81a09dd1983c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_driscoll, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2)
Feb  2 04:57:27 np0005604790 systemd[1]: var-lib-containers-storage-overlay-42866aae472ed91b9bb96150b87818e319da6e70bbacfb7794e4c9cf710ed3d8-merged.mount: Deactivated successfully.
Feb  2 04:57:28 np0005604790 podman[249186]: 2026-02-02 09:57:28.00313129 +0000 UTC m=+0.190094360 container remove 554eccc1bb9a9f8d57b80711d039433cf0fd4a18903b8bb468d81a09dd1983c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_driscoll, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 04:57:28 np0005604790 systemd[1]: libpod-conmon-554eccc1bb9a9f8d57b80711d039433cf0fd4a18903b8bb468d81a09dd1983c1.scope: Deactivated successfully.
Feb  2 04:57:28 np0005604790 podman[249252]: 2026-02-02 09:57:28.14912001 +0000 UTC m=+0.048587279 container create e1a67b576fd021a808ff8e8667b480eb22b38b4696e345bf5060da86f704dd9e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid)
Feb  2 04:57:28 np0005604790 systemd[1]: Started libpod-conmon-e1a67b576fd021a808ff8e8667b480eb22b38b4696e345bf5060da86f704dd9e.scope.
Feb  2 04:57:28 np0005604790 podman[249252]: 2026-02-02 09:57:28.127607789 +0000 UTC m=+0.027075108 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 04:57:28 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:57:28 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afd5215903e713a7378c82cd98ffcec0b226dff1056bc2beb710fd2eed460cfb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 04:57:28 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afd5215903e713a7378c82cd98ffcec0b226dff1056bc2beb710fd2eed460cfb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 04:57:28 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afd5215903e713a7378c82cd98ffcec0b226dff1056bc2beb710fd2eed460cfb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 04:57:28 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afd5215903e713a7378c82cd98ffcec0b226dff1056bc2beb710fd2eed460cfb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 04:57:28 np0005604790 podman[249252]: 2026-02-02 09:57:28.25324136 +0000 UTC m=+0.152708619 container init e1a67b576fd021a808ff8e8667b480eb22b38b4696e345bf5060da86f704dd9e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_sutherland, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:57:28 np0005604790 podman[249252]: 2026-02-02 09:57:28.26344571 +0000 UTC m=+0.162912979 container start e1a67b576fd021a808ff8e8667b480eb22b38b4696e345bf5060da86f704dd9e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 04:57:28 np0005604790 podman[249252]: 2026-02-02 09:57:28.279326511 +0000 UTC m=+0.178793770 container attach e1a67b576fd021a808ff8e8667b480eb22b38b4696e345bf5060da86f704dd9e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_sutherland, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb  2 04:57:28 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v534: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Feb  2 04:57:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:57:28 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2078001820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:57:28 np0005604790 python3.9[249396]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb  2 04:57:29 np0005604790 lvm[249587]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 04:57:29 np0005604790 lvm[249587]: VG ceph_vg0 finished
Feb  2 04:57:29 np0005604790 angry_sutherland[249318]: {}
Feb  2 04:57:29 np0005604790 systemd[1]: libpod-e1a67b576fd021a808ff8e8667b480eb22b38b4696e345bf5060da86f704dd9e.scope: Deactivated successfully.
Feb  2 04:57:29 np0005604790 systemd[1]: libpod-e1a67b576fd021a808ff8e8667b480eb22b38b4696e345bf5060da86f704dd9e.scope: Consumed 1.186s CPU time.
Feb  2 04:57:29 np0005604790 podman[249252]: 2026-02-02 09:57:29.113702538 +0000 UTC m=+1.013169817 container died e1a67b576fd021a808ff8e8667b480eb22b38b4696e345bf5060da86f704dd9e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_sutherland, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Feb  2 04:57:29 np0005604790 systemd[1]: var-lib-containers-storage-overlay-afd5215903e713a7378c82cd98ffcec0b226dff1056bc2beb710fd2eed460cfb-merged.mount: Deactivated successfully.
Feb  2 04:57:29 np0005604790 python3.9[249585]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1770026248.1274183-3098-256425764988923/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=a1f1b826d995a314b6b973b7452c5ae4777408c1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb  2 04:57:29 np0005604790 podman[249252]: 2026-02-02 09:57:29.179462932 +0000 UTC m=+1.078930201 container remove e1a67b576fd021a808ff8e8667b480eb22b38b4696e345bf5060da86f704dd9e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Feb  2 04:57:29 np0005604790 systemd[1]: libpod-conmon-e1a67b576fd021a808ff8e8667b480eb22b38b4696e345bf5060da86f704dd9e.scope: Deactivated successfully.
Feb  2 04:57:29 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 04:57:29 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:57:29 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 04:57:29 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:57:29 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:57:29 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:57:29 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:57:29.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:57:29 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:57:29 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f207c004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:57:29 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:57:29 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f20740035d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:57:29 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:57:29 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 04:57:29 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:57:29 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:57:29 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:57:29.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:57:30 np0005604790 python3.9[249783]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Feb  2 04:57:30 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v535: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Feb  2 04:57:30 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:57:30 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f209c002b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:57:31 np0005604790 python3.9[249935]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Feb  2 04:57:31 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:57:31 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f20780021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:57:31 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:57:31 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:57:31 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:57:31.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 04:57:31 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:57:31 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f207c004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:57:31 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:57:31 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:57:31 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:57:31.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 04:57:32 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:57:32 np0005604790 ceph-mon[74489]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Feb  2 04:57:32 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:57:32.061248) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb  2 04:57:32 np0005604790 ceph-mon[74489]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Feb  2 04:57:32 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770026252061313, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 1164, "num_deletes": 254, "total_data_size": 2078032, "memory_usage": 2106232, "flush_reason": "Manual Compaction"}
Feb  2 04:57:32 np0005604790 ceph-mon[74489]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Feb  2 04:57:32 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770026252084483, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 2037435, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17841, "largest_seqno": 19004, "table_properties": {"data_size": 2031933, "index_size": 2897, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 11090, "raw_average_key_size": 18, "raw_value_size": 2021027, "raw_average_value_size": 3390, "num_data_blocks": 130, "num_entries": 596, "num_filter_entries": 596, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770026143, "oldest_key_time": 1770026143, "file_creation_time": 1770026252, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "07840aea-639a-4cd3-a598-1774a042b57b", "db_session_id": "W2PO4QU95YGVZQBG6TZ2", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Feb  2 04:57:32 np0005604790 ceph-mon[74489]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 23332 microseconds, and 5571 cpu microseconds.
Feb  2 04:57:32 np0005604790 ceph-mon[74489]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 04:57:32 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:57:32.084588) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 2037435 bytes OK
Feb  2 04:57:32 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:57:32.084615) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Feb  2 04:57:32 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:57:32.086590) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Feb  2 04:57:32 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:57:32.086611) EVENT_LOG_v1 {"time_micros": 1770026252086604, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb  2 04:57:32 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:57:32.086637) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb  2 04:57:32 np0005604790 ceph-mon[74489]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 2072870, prev total WAL file size 2072870, number of live WAL files 2.
Feb  2 04:57:32 np0005604790 ceph-mon[74489]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 04:57:32 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:57:32.087316) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323531' seq:0, type:0; will stop at (end)
Feb  2 04:57:32 np0005604790 ceph-mon[74489]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb  2 04:57:32 np0005604790 ceph-mon[74489]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(1989KB)], [38(11MB)]
Feb  2 04:57:32 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770026252087404, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 13736498, "oldest_snapshot_seqno": -1}
Feb  2 04:57:32 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 04:57:32 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 04:57:32 np0005604790 ceph-mon[74489]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 4977 keys, 13234394 bytes, temperature: kUnknown
Feb  2 04:57:32 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770026252194573, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 13234394, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13199908, "index_size": 20936, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12485, "raw_key_size": 126492, "raw_average_key_size": 25, "raw_value_size": 13108356, "raw_average_value_size": 2633, "num_data_blocks": 858, "num_entries": 4977, "num_filter_entries": 4977, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770025063, "oldest_key_time": 0, "file_creation_time": 1770026252, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "07840aea-639a-4cd3-a598-1774a042b57b", "db_session_id": "W2PO4QU95YGVZQBG6TZ2", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Feb  2 04:57:32 np0005604790 ceph-mon[74489]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 04:57:32 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:57:32.194920) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 13234394 bytes
Feb  2 04:57:32 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:57:32.196618) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 128.0 rd, 123.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 11.2 +0.0 blob) out(12.6 +0.0 blob), read-write-amplify(13.2) write-amplify(6.5) OK, records in: 5499, records dropped: 522 output_compression: NoCompression
Feb  2 04:57:32 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:57:32.196652) EVENT_LOG_v1 {"time_micros": 1770026252196637, "job": 18, "event": "compaction_finished", "compaction_time_micros": 107303, "compaction_time_cpu_micros": 27656, "output_level": 6, "num_output_files": 1, "total_output_size": 13234394, "num_input_records": 5499, "num_output_records": 4977, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb  2 04:57:32 np0005604790 ceph-mon[74489]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 04:57:32 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770026252197330, "job": 18, "event": "table_file_deletion", "file_number": 40}
Feb  2 04:57:32 np0005604790 ceph-mon[74489]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 04:57:32 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770026252198945, "job": 18, "event": "table_file_deletion", "file_number": 38}
Feb  2 04:57:32 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:57:32.087191) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 04:57:32 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:57:32.199120) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 04:57:32 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:57:32.199131) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 04:57:32 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:57:32.199135) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 04:57:32 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:57:32.199148) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 04:57:32 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-09:57:32.199152) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
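[editor's note] The rocksdb EVENT_LOG_v1 records above embed a JSON object after a fixed marker, so the mon's flush/compaction activity can be followed mechanically. A minimal sketch; the field names ("event", "job", "total_output_size", ...) are taken from the records above:

    import json
    import re

    EVENT = re.compile(r"EVENT_LOG_v1 (\{.*\})")

    def rocksdb_events(lines):
        """Yield the JSON payload of every rocksdb EVENT_LOG_v1 record."""
        for line in lines:
            m = EVENT.search(line)
            if m:
                yield json.loads(m.group(1))

    # e.g. summarize compactions from a saved log file:
    # for ev in rocksdb_events(open("/var/log/messages")):
    #     if ev.get("event") == "compaction_finished":
    #         print(ev["job"], ev["output_level"], ev["total_output_size"])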
Feb  2 04:57:32 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v536: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Feb  2 04:57:32 np0005604790 python3[250089]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Feb  2 04:57:32 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:57:32 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f20740035d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:57:33 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:57:33 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f209c002b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:57:33 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:57:33 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:57:33 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:57:33.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:57:33 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:57:33 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f20780021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:57:33 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:57:33 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:57:33 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:57:33.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:57:34 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v537: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Feb  2 04:57:34 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:57:34 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f207c004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:57:34 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:09:57:34] "GET /metrics HTTP/1.1" 200 48341 "" "Prometheus/2.51.0"
Feb  2 04:57:34 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:09:57:34] "GET /metrics HTTP/1.1" 200 48341 "" "Prometheus/2.51.0"
Feb  2 04:57:35 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:57:35 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f20740035d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:57:35 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:57:35 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:57:35 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:57:35.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:57:35 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:57:35 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f209c002b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:57:35 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:57:35 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:57:35 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:57:35.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:57:36 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v538: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:57:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:57:36 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f20780021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:57:37 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:57:37 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:57:37.056Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:57:37 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:57:37 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f207c004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:57:37 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:57:37 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:57:37 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:57:37.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:57:37 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:57:37 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f20740035d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:57:37 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:57:37 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:57:37 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:57:37.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:57:38 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v539: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:57:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:57:38 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f209c002b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:57:39 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:57:39 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2078003490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:57:39 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:57:39 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:57:39 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:57:39.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:57:39 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:57:39 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2078003490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:57:39 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:57:39 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:57:39 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:57:39.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:57:40 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v540: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:57:40 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:57:40 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f20740035d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:57:41 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:57:41 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f209c002b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:57:41 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:57:41 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:57:41 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:57:41.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:57:41 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:57:41 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2078003490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:57:41 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:57:41 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:57:41 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:57:41.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:57:42 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:57:42 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v541: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:57:42 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:57:42 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2078003490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:57:43 np0005604790 podman[250104]: 2026-02-02 09:57:43.104090726 +0000 UTC m=+10.575094909 image pull f4e0688689eb3c524117ae65df199eeb4e620e591d26898b5cb25b819a2d79fd quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:f96bd21c79ae0d7e8e17010c5e2573637d6c0f47f03e63134c477edd8ad73d83
Feb  2 04:57:43 np0005604790 podman[250202]: 2026-02-02 09:57:43.105660787 +0000 UTC m=+2.820714910 container health_status e39122d9482da8df802204ab6c35fb7c982874580968140e6f06bdfc8eefae36 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb  2 04:57:43 np0005604790 podman[250262]: 2026-02-02 09:57:43.29288561 +0000 UTC m=+0.070809378 container create b5c2541b9463a2fa7ac5d2d1360f4b29001380a7fae5cc7984002bd2cc43ba26 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:f96bd21c79ae0d7e8e17010c5e2573637d6c0f47f03e63134c477edd8ad73d83, name=nova_compute_init, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute_init, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:f96bd21c79ae0d7e8e17010c5e2573637d6c0f47f03e63134c477edd8ad73d83', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb  2 04:57:43 np0005604790 podman[250262]: 2026-02-02 09:57:43.257399729 +0000 UTC m=+0.035323577 image pull f4e0688689eb3c524117ae65df199eeb4e620e591d26898b5cb25b819a2d79fd quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:f96bd21c79ae0d7e8e17010c5e2573637d6c0f47f03e63134c477edd8ad73d83
Feb  2 04:57:43 np0005604790 python3[250089]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:f96bd21c79ae0d7e8e17010c5e2573637d6c0f47f03e63134c477edd8ad73d83', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:f96bd21c79ae0d7e8e17010c5e2573637d6c0f47f03e63134c477edd8ad73d83 bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
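[editor's note] The config_data label in the create events above is a Python dict literal (single quotes, bare True/False), so json.loads will reject it; ast.literal_eval parses it safely without executing code. A small sketch, with the literal abridged from the nova_compute_init record:

    import ast

    # Abridged from the nova_compute_init config_data label above.
    raw = ("{'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute"
           "@sha256:f96bd21c79ae0d7e8e17010c5e2573637d6c0f47f03e63134c477edd8ad73d83',"
           " 'privileged': False, 'net': 'none', 'user': 'root'}")

    cfg = ast.literal_eval(raw)       # literals only, no code execution
    print(cfg["image"], cfg["net"])   # -> quay.io/... none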
Feb  2 04:57:43 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:57:43 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:57:43 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f20740035d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:57:43 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:57:43 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:57:43.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:57:43 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:57:43 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f209c002b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:57:43 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:57:43 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:57:43 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:57:43.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:57:44 np0005604790 python3.9[250454]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 04:57:44 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v542: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 04:57:44 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:57:44 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f207c004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:57:44 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:09:57:44] "GET /metrics HTTP/1.1" 200 48341 "" "Prometheus/2.51.0"
Feb  2 04:57:44 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:09:57:44] "GET /metrics HTTP/1.1" 200 48341 "" "Prometheus/2.51.0"
Feb  2 04:57:45 np0005604790 python3.9[250608]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Feb  2 04:57:45 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:57:45 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:57:45 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:57:45.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:57:45 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:57:45 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2078003490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:57:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:57:45.368 165364 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 04:57:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:57:45.369 165364 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 04:57:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 09:57:45.369 165364 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 04:57:45 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:57:45 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f20740035d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:57:45 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:57:45 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:57:45 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:57:45.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:57:45 np0005604790 ceph-mon[74489]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb  2 04:57:45 np0005604790 ceph-mon[74489]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1200.0 total, 600.0 interval
Cumulative writes: 4198 writes, 18K keys, 4198 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.03 MB/s
Cumulative WAL: 4198 writes, 4198 syncs, 1.00 writes per sync, written: 0.03 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1481 writes, 6269 keys, 1481 commit groups, 1.0 writes per commit group, ingest: 11.05 MB, 0.02 MB/s
Interval WAL: 1481 writes, 1481 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     60.2      0.52              0.07         9    0.058       0      0       0.0       0.0
  L6      1/0   12.62 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.2    112.3     94.7      1.06              0.26         8    0.132     38K   4345       0.0       0.0
 Sum      1/0   12.62 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.2     75.3     83.4      1.58              0.34        17    0.093     38K   4345       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   5.6    105.2    103.5      0.59              0.15         8    0.074     21K   2564       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    112.3     94.7      1.06              0.26         8    0.132     38K   4345       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     60.8      0.51              0.07         8    0.064       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      9.7      0.01              0.00         1    0.006       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.0 total, 600.0 interval
Flush(GB): cumulative 0.031, interval 0.011
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.13 GB write, 0.11 MB/s write, 0.12 GB read, 0.10 MB/s read, 1.6 seconds
Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 0.6 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5630b94e5350#2 capacity: 304.00 MB usage: 6.58 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 0.0001 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(365,6.25 MB,2.0566%) FilterBlock(18,117.23 KB,0.0376601%) IndexBlock(18,218.55 KB,0.0702055%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
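A quick sanity check of the throughput figures in the dump above: 0.03 GB ingested over the 1200 s uptime and 11.05 MB over the 600 s interval do round to the reported 0.03 MB/s and 0.02 MB/s.

    # Verifying the reported ingest rates from the RocksDB stats above.
    cumulative_ingest_mb = 0.03 * 1024   # 0.03 GB over 1200 s of uptime
    interval_ingest_mb = 11.05           # over the 600 s stats interval
    print(round(cumulative_ingest_mb / 1200.0, 2))  # 0.03 MB/s, as reported
    print(round(interval_ingest_mb / 600.0, 2))     # 0.02 MB/s, as reported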
Feb  2 04:57:46 np0005604790 python3.9[250762]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Feb  2 04:57:46 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v543: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:57:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:57:46 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f209c002b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:57:47 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:57:47.057Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 04:57:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:57:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 04:57:47 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 04:57:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:57:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:57:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:57:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:57:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 04:57:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 04:57:47 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:57:47 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f207c004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:57:47 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:57:47 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:57:47 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:57:47.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 04:57:47 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:57:47 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2078003490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:57:47 np0005604790 python3[250914]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Feb  2 04:57:47 np0005604790 podman[250951]: 2026-02-02 09:57:47.671680161 +0000 UTC m=+0.059758055 container create 46ea9bbd396be6c9cf42e800f6ecffbc7ecf6b0a7dd6f731e2744db95bbe8ea3 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:f96bd21c79ae0d7e8e17010c5e2573637d6c0f47f03e63134c477edd8ad73d83, name=nova_compute, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:f96bd21c79ae0d7e8e17010c5e2573637d6c0f47f03e63134c477edd8ad73d83', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb  2 04:57:47 np0005604790 podman[250951]: 2026-02-02 09:57:47.645199469 +0000 UTC m=+0.033277443 image pull f4e0688689eb3c524117ae65df199eeb4e620e591d26898b5cb25b819a2d79fd quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:f96bd21c79ae0d7e8e17010c5e2573637d6c0f47f03e63134c477edd8ad73d83
Feb  2 04:57:47 np0005604790 python3[250914]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:f96bd21c79ae0d7e8e17010c5e2573637d6c0f47f03e63134c477edd8ad73d83', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath --volume /etc/multipath.conf:/etc/multipath.conf:ro,Z --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:f96bd21c79ae0d7e8e17010c5e2573637d6c0f47f03e63134c477edd8ad73d83 kolla_start
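The PODMAN-CONTAINER-DEBUG line above shows both halves of what edpm_container_manage does: the config_data dict read from /var/lib/openstack/config/containers and the podman create flags generated from it. The mapping is mechanical ('environment' to --env, 'net' to --network, 'volumes' to --volume, and so on). The helper below is a simplified sketch of that translation, not the edpm_ansible source.

    # Simplified sketch of how a config_data dict maps onto "podman create"
    # flags, mirroring the nova_compute invocation logged above.
    def podman_create_args(name, cfg):
        args = ["podman", "create", "--name", name,
                "--conmon-pidfile", f"/run/{name}.pid",
                "--log-driver", "journald", "--log-level", "info"]
        for key, val in cfg.get("environment", {}).items():
            args += ["--env", f"{key}={val}"]
        if "net" in cfg:
            args += ["--network", cfg["net"]]      # 'net': 'host' -> --network host
        if "pid" in cfg:
            args += ["--pid", cfg["pid"]]
        args.append(f"--privileged={cfg.get('privileged', False)}")
        if "user" in cfg:
            args += ["--user", cfg["user"]]
        for vol in cfg.get("volumes", []):
            args += ["--volume", vol]              # bind mounts, passed verbatim
        args.append(cfg["image"])
        args += cfg.get("command", "").split()     # e.g. kolla_start
        return args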
Feb  2 04:57:47 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:57:47 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:57:47 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:57:47.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:57:48 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v544: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:57:48 np0005604790 podman[251116]: 2026-02-02 09:57:48.366447856 +0000 UTC m=+0.082728344 container health_status 29bea7eb4976451e3dfdb1e9a3aba30b10e64cbce131bf8d09548b4a224f8ee8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb  2 04:57:48 np0005604790 python3.9[251162]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 04:57:48 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:57:48 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f20740035d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:57:49 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:57:49 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f209c002b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:57:49 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:57:49 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:57:49 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:57:49.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 04:57:49 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:57:49 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f20980025c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:57:49 np0005604790 python3.9[251320]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:57:49 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:57:49 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:57:49 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:57:49.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:57:50 np0005604790 python3.9[251473]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1770026269.5913372-3386-86283829859565/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb  2 04:57:50 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v545: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:57:50 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:57:50 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2078003490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:57:50 np0005604790 python3.9[251549]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb  2 04:57:50 np0005604790 systemd[1]: Reloading.
Feb  2 04:57:50 np0005604790 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 04:57:50 np0005604790 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 04:57:51 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:57:51 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:57:51 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:57:51.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:57:51 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:57:51 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f20a0002920 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:57:51 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:57:51 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f209c004410 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:57:51 np0005604790 python3.9[251659]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb  2 04:57:51 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:57:51 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:57:51 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:57:51.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:57:51 np0005604790 systemd[1]: Reloading.
Feb  2 04:57:51 np0005604790 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb  2 04:57:51 np0005604790 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb  2 04:57:52 np0005604790 systemd[1]: Starting nova_compute container...
Feb  2 04:57:52 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:57:52 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:57:52 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ccbbff02638865fc0d6f0d473aea6c5e5e0b4b6746c9741adf356a029a3ea69/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Feb  2 04:57:52 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ccbbff02638865fc0d6f0d473aea6c5e5e0b4b6746c9741adf356a029a3ea69/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Feb  2 04:57:52 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ccbbff02638865fc0d6f0d473aea6c5e5e0b4b6746c9741adf356a029a3ea69/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Feb  2 04:57:52 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ccbbff02638865fc0d6f0d473aea6c5e5e0b4b6746c9741adf356a029a3ea69/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Feb  2 04:57:52 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ccbbff02638865fc0d6f0d473aea6c5e5e0b4b6746c9741adf356a029a3ea69/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Feb  2 04:57:52 np0005604790 podman[251703]: 2026-02-02 09:57:52.168612592 +0000 UTC m=+0.106281878 container init 46ea9bbd396be6c9cf42e800f6ecffbc7ecf6b0a7dd6f731e2744db95bbe8ea3 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:f96bd21c79ae0d7e8e17010c5e2573637d6c0f47f03e63134c477edd8ad73d83, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:f96bd21c79ae0d7e8e17010c5e2573637d6c0f47f03e63134c477edd8ad73d83', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20260127)
Feb  2 04:57:52 np0005604790 podman[251703]: 2026-02-02 09:57:52.180554688 +0000 UTC m=+0.118223964 container start 46ea9bbd396be6c9cf42e800f6ecffbc7ecf6b0a7dd6f731e2744db95bbe8ea3 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:f96bd21c79ae0d7e8e17010c5e2573637d6c0f47f03e63134c477edd8ad73d83, name=nova_compute, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:f96bd21c79ae0d7e8e17010c5e2573637d6c0f47f03e63134c477edd8ad73d83', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_id=edpm, org.label-schema.license=GPLv2)
Feb  2 04:57:52 np0005604790 podman[251703]: nova_compute
Feb  2 04:57:52 np0005604790 systemd[1]: Started nova_compute container.
Feb  2 04:57:52 np0005604790 nova_compute[251716]: + sudo -E kolla_set_configs
Feb  2 04:57:52 np0005604790 nova_compute[251716]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Feb  2 04:57:52 np0005604790 nova_compute[251716]: INFO:__main__:Validating config file
Feb  2 04:57:52 np0005604790 nova_compute[251716]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Feb  2 04:57:52 np0005604790 nova_compute[251716]: INFO:__main__:Copying service configuration files
Feb  2 04:57:52 np0005604790 nova_compute[251716]: INFO:__main__:Deleting /etc/nova/nova.conf
Feb  2 04:57:52 np0005604790 nova_compute[251716]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Feb  2 04:57:52 np0005604790 nova_compute[251716]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Feb  2 04:57:52 np0005604790 nova_compute[251716]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Feb  2 04:57:52 np0005604790 nova_compute[251716]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Feb  2 04:57:52 np0005604790 nova_compute[251716]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Feb  2 04:57:52 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v546: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:57:52 np0005604790 nova_compute[251716]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Feb  2 04:57:52 np0005604790 nova_compute[251716]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Feb  2 04:57:52 np0005604790 nova_compute[251716]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Feb  2 04:57:52 np0005604790 nova_compute[251716]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Feb  2 04:57:52 np0005604790 nova_compute[251716]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Feb  2 04:57:52 np0005604790 nova_compute[251716]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Feb  2 04:57:52 np0005604790 nova_compute[251716]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Feb  2 04:57:52 np0005604790 nova_compute[251716]: INFO:__main__:Deleting /etc/ceph
Feb  2 04:57:52 np0005604790 nova_compute[251716]: INFO:__main__:Creating directory /etc/ceph
Feb  2 04:57:52 np0005604790 nova_compute[251716]: INFO:__main__:Setting permission for /etc/ceph
Feb  2 04:57:52 np0005604790 nova_compute[251716]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Feb  2 04:57:52 np0005604790 nova_compute[251716]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Feb  2 04:57:52 np0005604790 nova_compute[251716]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Feb  2 04:57:52 np0005604790 nova_compute[251716]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Feb  2 04:57:52 np0005604790 nova_compute[251716]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Feb  2 04:57:52 np0005604790 nova_compute[251716]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Feb  2 04:57:52 np0005604790 nova_compute[251716]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Feb  2 04:57:52 np0005604790 nova_compute[251716]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Feb  2 04:57:52 np0005604790 nova_compute[251716]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Feb  2 04:57:52 np0005604790 nova_compute[251716]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Feb  2 04:57:52 np0005604790 nova_compute[251716]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Feb  2 04:57:52 np0005604790 nova_compute[251716]: INFO:__main__:Writing out command to execute
Feb  2 04:57:52 np0005604790 nova_compute[251716]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Feb  2 04:57:52 np0005604790 nova_compute[251716]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Feb  2 04:57:52 np0005604790 nova_compute[251716]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Feb  2 04:57:52 np0005604790 nova_compute[251716]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Feb  2 04:57:52 np0005604790 nova_compute[251716]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Feb  2 04:57:52 np0005604790 nova_compute[251716]: ++ cat /run_command
Feb  2 04:57:52 np0005604790 nova_compute[251716]: + CMD=nova-compute
Feb  2 04:57:52 np0005604790 nova_compute[251716]: + ARGS=
Feb  2 04:57:52 np0005604790 nova_compute[251716]: + sudo kolla_copy_cacerts
Feb  2 04:57:52 np0005604790 nova_compute[251716]: + [[ ! -n '' ]]
Feb  2 04:57:52 np0005604790 nova_compute[251716]: + . kolla_extend_start
Feb  2 04:57:52 np0005604790 nova_compute[251716]: Running command: 'nova-compute'
Feb  2 04:57:52 np0005604790 nova_compute[251716]: + echo 'Running command: '\''nova-compute'\'''
Feb  2 04:57:52 np0005604790 nova_compute[251716]: + umask 0022
Feb  2 04:57:52 np0005604790 nova_compute[251716]: + exec nova-compute
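The 09:57:52 lines above trace the kolla entrypoint end to end: kolla_set_configs loads /var/lib/kolla/config_files/config.json, and because KOLLA_CONFIG_STRATEGY=COPY_ALWAYS it deletes each destination and re-copies it from the bind-mounted config directory on every start, resets ownership and permissions, writes the service command to /run_command, and finally execs it (nova-compute). Below is a minimal sketch of that strategy for plain files only, using kolla's documented config.json field names but not its actual implementation.

    # Minimal sketch of the COPY_ALWAYS strategy seen in the log above
    # (directories such as /etc/ceph are handled too, but omitted here).
    import json, os, shutil

    def set_configs(config_path="/var/lib/kolla/config_files/config.json"):
        with open(config_path) as f:
            config = json.load(f)
        for item in config.get("config_files", []):
            src, dest = item["source"], item["dest"]
            if os.path.exists(dest):
                os.remove(dest)                       # "Deleting /etc/nova/nova.conf"
            shutil.copy(src, dest)                    # "Copying ... to ..."
            if "perm" in item:
                os.chmod(dest, int(item["perm"], 8))  # "Setting permission for ..."
        return config.get("command", "")              # later read back from /run_command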
Feb  2 04:57:52 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:57:52 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2098003150 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:57:53 np0005604790 python3.9[251879]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 04:57:53 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:57:53 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2078003490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:57:53 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:57:53 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:57:53 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:57:53.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:57:53 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:57:53 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f20a0002920 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:57:53 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:57:53 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:57:53 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:57:53.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:57:54 np0005604790 python3.9[252032]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 04:57:54 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v547: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 04:57:54 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:57:54 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f209c004410 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:57:54 np0005604790 nova_compute[251716]: 2026-02-02 09:57:54.651 251722 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Feb  2 04:57:54 np0005604790 nova_compute[251716]: 2026-02-02 09:57:54.652 251722 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Feb  2 04:57:54 np0005604790 nova_compute[251716]: 2026-02-02 09:57:54.652 251722 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Feb  2 04:57:54 np0005604790 nova_compute[251716]: 2026-02-02 09:57:54.652 251722 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Feb  2 04:57:54 np0005604790 nova_compute[251716]: 2026-02-02 09:57:54.793 251722 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 04:57:54 np0005604790 nova_compute[251716]: 2026-02-02 09:57:54.810 251722 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 04:57:54 np0005604790 nova_compute[251716]: 2026-02-02 09:57:54.810 251722 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
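The grep probe above appears to be nova's storage connector checking whether the iscsiadm binary advertises manual-scan support: it greps the binary for the literal string node.session.scan, and exit code 1 simply means "string not found", which is why the failure is logged and not retried. Since /usr/sbin/iscsiadm inside this container was replaced by the run-on-host wrapper during kolla_set_configs (09:57:52), a miss is unsurprising. A sketch of the probe:

    # Sketch of the capability probe logged above: exit status 0 from grep
    # means the literal string is present in the binary, 1 means absent.
    import subprocess

    def iscsiadm_supports_manual_scan(binary="/sbin/iscsiadm"):
        res = subprocess.run(["grep", "-F", "node.session.scan", binary],
                             stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
        return res.returncode == 0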
Feb  2 04:57:54 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:09:57:54] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Feb  2 04:57:54 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:09:57:54] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Feb  2 04:57:54 np0005604790 python3.9[252184]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb  2 04:57:55 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:57:55 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:57:55 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2098003150 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:57:55 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:57:55 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:57:55.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:57:55 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:57:55 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2078003490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.467 251722 INFO nova.virt.driver [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.662 251722 INFO nova.compute.provider_config [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.675 251722 DEBUG oslo_concurrency.lockutils [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.676 251722 DEBUG oslo_concurrency.lockutils [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.676 251722 DEBUG oslo_concurrency.lockutils [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.676 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.676 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.677 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.677 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.677 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.677 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.677 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.678 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.678 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.678 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.678 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.678 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.678 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.679 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.679 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.679 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.679 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.679 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.680 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.680 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.680 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.680 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.680 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.681 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.681 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.681 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.681 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.681 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.682 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.682 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.682 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.682 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.683 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.683 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.683 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.683 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.683 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.684 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.684 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.684 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.684 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.684 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.685 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.685 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.685 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.685 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.685 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.686 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.686 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.686 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.686 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.686 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.686 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.687 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.687 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.687 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.687 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.687 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.687 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.688 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.688 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.688 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.688 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.688 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.688 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.688 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.688 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.689 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.689 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.689 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.689 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.689 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.689 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.690 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.690 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.690 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.690 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.690 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.691 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.691 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.691 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.691 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.691 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.691 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.691 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.692 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.692 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.692 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.692 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.692 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.692 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.693 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.693 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.693 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.693 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.693 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.694 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.694 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.694 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.694 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.694 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.695 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.695 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.695 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.695 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.695 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.695 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.696 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.696 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.696 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.696 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.696 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.696 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.697 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.697 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.697 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.697 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.697 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.697 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.698 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.698 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.698 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.698 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.698 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.699 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.699 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.699 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.699 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.699 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.699 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.700 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.700 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.700 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.700 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.700 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.700 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.701 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.701 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.702 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.702 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.702 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.702 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.702 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.703 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.703 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.703 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.703 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.703 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.704 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.704 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.704 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.704 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.704 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.704 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.705 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.705 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.705 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.705 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.705 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.706 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.706 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.706 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.706 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.706 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.707 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.707 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.707 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.707 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.707 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.708 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.708 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.708 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.708 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.708 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.709 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.709 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.709 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.709 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.709 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.710 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.710 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.710 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.710 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.710 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.711 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.711 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.711 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.711 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.711 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.711 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.712 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.712 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.712 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.712 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.712 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.713 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:57:55 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:57:55.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.713 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.713 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.713 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.713 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.713 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.714 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.714 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.714 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.714 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.714 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.714 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.715 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.715 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.715 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.715 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.715 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.716 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.716 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.716 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.716 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.716 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.716 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.717 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.717 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.717 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.717 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.717 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.717 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.717 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.718 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.718 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.718 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.718 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.718 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.718 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.719 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.719 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.719 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.719 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.719 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.719 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.720 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.720 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.720 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.720 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.720 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.720 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.720 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.721 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.721 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.721 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.721 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.721 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.721 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.721 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.722 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.722 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.722 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.722 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.722 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.722 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.722 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.723 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.723 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.723 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.723 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.723 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.723 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.724 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.724 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.724 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.724 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.724 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.724 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.724 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.725 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.725 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.725 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.725 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.725 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.725 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.725 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.726 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.726 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.726 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.726 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.726 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.726 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.727 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.727 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.727 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.727 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.727 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.727 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.728 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.728 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.728 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.728 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.728 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.728 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.729 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.729 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.729 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.729 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.729 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
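
[editor's note] The database.* and api_database.* values above are read from the matching sections of nova.conf; connection and slave_connection print as **** only because they are registered as secret options, not necessarily because they are set (on compute nodes they are often empty). A hedged sketch of the nova.conf fragment that would yield the pool settings above, with placeholder connection URLs since the real ones are masked:

    [database]
    connection = mysql+pymysql://nova:<password>@<db-host>/nova
    max_pool_size = 5
    max_overflow = 50
    connection_recycle_time = 3600

    [api_database]
    connection = mysql+pymysql://nova:<password>@<db-host>/nova_api
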
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.729 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.730 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.730 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.730 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.730 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.730 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.730 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.730 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.731 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.731 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.731 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.731 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.731 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.731 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.731 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.732 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.732 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.732 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.732 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.732 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.732 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.732 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.733 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.733 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.733 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.733 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.733 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.733 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.734 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.734 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.734 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.734 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.734 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.734 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
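
[editor's note] The glance.* group (service_type = image, region_name = regionOne, valid_interfaces = ['internal']) is a standard keystoneauth adapter section. A hedged sketch of how such a section becomes a usable endpoint, using the generic keystoneauth1 loaders rather than nova's exact helper; the config-file path is the conventional one and an auth plugin would also be loaded in practice:

    from keystoneauth1 import loading as ks_loading
    from oslo_config import cfg

    CONF = cfg.CONF
    ks_loading.register_session_conf_options(CONF, 'glance')
    ks_loading.register_adapter_conf_options(CONF, 'glance')
    CONF(['--config-file', '/etc/nova/nova.conf'])

    session = ks_loading.load_session_from_conf_options(CONF, 'glance')
    # The adapter resolves the image endpoint from the service catalog
    # using service_type=image, the 'internal' interface and region
    # regionOne -- exactly the values dumped above.
    adapter = ks_loading.load_adapter_from_conf_options(
        CONF, 'glance', session=session)
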
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.734 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.735 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.735 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.735 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.735 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.735 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.735 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.735 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.736 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.736 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.736 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.736 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.736 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.736 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.736 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.737 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.737 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.737 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.737 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.737 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.738 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.738 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.738 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.738 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.738 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.738 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.738 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
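
[editor's note] For quick reference, the image_cache timings above convert as follows (simple arithmetic, not nova code):

    # manager_interval: 2400 s -> the cache manager periodic task runs every 40 min
    # remove_unused_original_minimum_age_seconds: 86400 s -> 24 h
    # remove_unused_resized_minimum_age_seconds:   3600 s -> 1 h
    print(2400 / 60, 86400 / 3600, 3600 / 3600)  # 40.0 24.0 1.0
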
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.739 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.739 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.739 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.739 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.739 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.739 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.740 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.740 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.740 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.740 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.740 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.740 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.740 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.741 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.741 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.741 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.741 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.741 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.741 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.741 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.742 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.742 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.742 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.742 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.742 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.742 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.743 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.743 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.743 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.743 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.743 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.743 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.744 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.744 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.744 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.744 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.744 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.744 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.745 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.745 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.745 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.745 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.745 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.745 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.745 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.746 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.746 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.746 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.746 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.746 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.746 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.746 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.747 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.747 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.747 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.747 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.747 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.747 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.747 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.748 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.748 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.748 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.748 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.748 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.749 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.749 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.749 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.749 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.749 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.750 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
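
[editor's note] Although option groups for both barbican.* and vault.* are registered and dumped, only the backend selected by key_manager.backend (barbican here) is actually used; key_manager.fixed_key shows **** because it is a secret option. A hedged sketch of how that selection is consumed through the castellan library (illustrative, not nova's exact call site):

    from castellan import key_manager
    from oslo_config import cfg

    CONF = cfg.CONF
    CONF(['--config-file', '/etc/nova/nova.conf'])

    # Returns the barbican-backed key manager because the dump above
    # shows key_manager.backend = barbican.
    km = key_manager.API(CONF)
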
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.750 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.750 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.750 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.750 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.751 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.751 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.751 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.751 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.751 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.751 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.752 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.752 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.752 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.752 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.752 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.753 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.753 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.753 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.753 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.753 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.754 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.754 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.754 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.754 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.754 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.755 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.755 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.755 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.755 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.755 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.755 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.756 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.756 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.756 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.756 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.756 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.757 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.757 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.757 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.757 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.757 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.757 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.757 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.758 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.758 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.758 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.758 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.758 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.758 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.759 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.759 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.759 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.759 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.759 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.759 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.759 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.760 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.760 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.760 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.760 251722 WARNING oslo_config.cfg [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Feb  2 04:57:55 np0005604790 nova_compute[251716]: live_migration_uri is deprecated for removal in favor of two other options that
Feb  2 04:57:55 np0005604790 nova_compute[251716]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Feb  2 04:57:55 np0005604790 nova_compute[251716]: and ``live_migration_inbound_addr`` respectively.
Feb  2 04:57:55 np0005604790 nova_compute[251716]: ).  Its value may be silently ignored in the future.#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.761 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.761 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.761 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.761 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.761 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.762 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.762 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.762 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.762 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.762 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.763 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.763 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.763 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.763 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.763 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.763 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.764 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.764 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.764 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] libvirt.rbd_secret_uuid        = d241d473-9fcb-5f74-b163-f1ca4454e7f1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.764 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.764 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.765 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.765 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.765 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.765 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.765 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.765 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.766 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.766 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.766 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.766 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.766 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.766 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.766 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.767 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.767 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.767 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.767 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.767 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.768 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.768 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.768 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.768 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.768 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.769 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.769 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.769 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.769 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.769 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.770 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.770 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.770 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.770 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.770 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.770 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.771 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.771 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.771 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.771 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.771 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.772 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.772 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.772 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.772 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.772 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.772 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.772 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.773 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.773 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.773 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.773 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.773 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.773 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.774 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.774 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.774 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.774 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.774 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.775 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.775 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.775 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.775 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.775 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.776 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.776 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.776 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.776 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.776 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.776 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.777 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.777 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.777 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.777 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.777 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.777 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.778 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.778 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.778 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.778 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.778 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.779 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.779 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.779 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.779 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.779 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.779 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.780 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.780 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.780 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.780 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.780 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.781 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.781 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.781 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.781 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.781 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.782 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.782 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.782 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.782 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.782 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.783 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.783 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.783 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.783 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.783 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.783 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.784 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.784 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.784 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.784 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.784 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.785 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.785 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.785 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.785 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.785 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.785 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.786 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.786 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.786 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.786 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.786 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.787 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.787 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.787 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.787 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.787 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.787 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.788 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.788 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.788 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.788 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.788 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.788 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.789 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.789 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.789 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.789 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.789 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.789 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.790 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.790 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.790 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.790 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.790 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.790 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.791 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.791 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.791 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.791 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.791 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.791 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.791 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.791 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.792 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.792 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.792 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.792 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.792 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.793 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.793 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.793 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.793 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.793 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.793 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.794 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.794 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.794 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.794 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.794 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.794 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.794 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.795 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.795 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.795 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.795 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.795 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.796 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.796 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.796 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.796 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.796 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.797 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.797 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.797 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.797 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.797 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.797 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.798 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.798 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.798 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.798 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.798 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.799 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.799 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.799 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.799 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.799 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.800 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.800 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.800 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.800 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.800 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.800 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.801 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.801 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.801 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.801 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.801 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.802 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.802 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.802 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.802 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.802 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.802 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.803 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.803 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.803 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.803 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.803 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.804 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.804 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.804 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.804 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.804 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.805 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.805 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.805 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.805 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.806 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.806 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.806 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.806 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.806 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.807 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.807 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.807 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.807 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.807 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.808 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.808 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.808 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.808 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.808 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.809 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.809 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.809 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.809 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.809 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.810 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.810 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.810 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.810 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.810 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.811 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.811 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.811 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.811 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.811 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.811 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.812 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.812 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.812 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.812 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.812 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.813 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.813 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.813 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.813 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.814 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.814 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.814 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.814 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.814 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.815 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.815 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.815 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.815 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.815 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.816 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.816 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.816 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.816 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.816 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.816 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.817 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.817 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.817 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.817 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.817 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.818 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.818 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.818 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.818 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.818 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.819 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.819 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.819 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.819 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.819 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.820 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.820 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.820 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.820 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.820 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.821 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.821 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.821 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.821 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.821 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.822 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.822 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.822 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.822 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.822 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.823 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.823 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.823 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.823 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.823 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.824 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.824 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.824 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.824 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.824 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.825 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.825 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.825 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.825 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.825 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.826 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.826 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.826 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.826 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.826 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.826 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.826 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.827 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.827 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.827 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.827 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.827 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.827 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.827 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.827 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.828 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.828 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.828 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.828 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.828 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.828 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.828 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.828 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.829 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.829 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.829 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.829 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.829 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.829 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.829 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.830 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.830 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.830 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.830 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.830 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.830 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.830 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.830 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.831 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.831 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.831 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.831 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.831 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.831 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.831 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.832 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.832 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.832 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.832 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.832 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.832 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.832 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.833 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.833 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.833 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.833 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.833 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.833 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.833 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.834 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.834 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.834 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.834 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.834 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.834 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.834 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.834 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.835 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.835 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.835 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.835 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.835 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.835 251722 DEBUG oslo_service.service [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
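The block ending above is oslo.config's standard startup dump: with debug logging enabled, oslo.service calls log_opt_values(), which walks every registered option group, emits one DEBUG line per option, masks anything registered with secret=True (hence transport_url and password print as ****), and closes with the row of asterisks. A minimal sketch of that mechanism in Python, using hypothetical option names rather than Nova's real registrations:

    # Illustrative only: the option names here are made up; Nova registers
    # its own. Requires oslo.config (pip install oslo.config).
    import logging

    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    conf = cfg.ConfigOpts()
    conf.register_opts([
        cfg.IntOpt('retry_backoff', default=2),
        # secret=True is what makes log_opt_values() print **** instead of
        # the real value, as seen above for transport_url and password.
        cfg.StrOpt('password', secret=True, default='example'),
    ], group='example')

    conf([])  # parse an empty argv so defaults apply
    # One DEBUG line per option, then a row of asterisks -- the same
    # cfg.py log_opt_values output format captured in the log above.
    conf.log_opt_values(LOG, logging.DEBUG)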
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.836 251722 INFO nova.service [-] Starting compute node (version 27.5.2-0.20260127144738.eaa65f0.el9)#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.853 251722 DEBUG nova.virt.libvirt.host [None req-b613abe0-5893-4762-aca0-a82d1e952914 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.853 251722 DEBUG nova.virt.libvirt.host [None req-b613abe0-5893-4762-aca0-a82d1e952914 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.854 251722 DEBUG nova.virt.libvirt.host [None req-b613abe0-5893-4762-aca0-a82d1e952914 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.854 251722 DEBUG nova.virt.libvirt.host [None req-b613abe0-5893-4762-aca0-a82d1e952914 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Feb  2 04:57:55 np0005604790 systemd[1]: Starting libvirt QEMU daemon...
Feb  2 04:57:55 np0005604790 systemd[1]: Started libvirt QEMU daemon.
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.923 251722 DEBUG nova.virt.libvirt.host [None req-b613abe0-5893-4762-aca0-a82d1e952914 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fa13bbdb8b0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.926 251722 DEBUG nova.virt.libvirt.host [None req-b613abe0-5893-4762-aca0-a82d1e952914 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fa13bbdb8b0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.927 251722 INFO nova.virt.libvirt.driver [None req-b613abe0-5893-4762-aca0-a82d1e952914 - - - - - -] Connection event '1' reason 'None'#033[00m
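The lines above show nova.virt.libvirt.host bringing up its event machinery: a native thread running libvirt's default event loop, a green dispatch thread, then a qemu:///system connection that registers lifecycle and connection callbacks (the "Connection event '1'" line is the connect notification). A simplified sketch of the underlying libvirt-python pattern — Nova's host.py wraps the same calls in native/green thread dispatch:

    # Simplified sketch of the libvirt event pattern; requires the
    # libvirt-python bindings and a reachable local libvirtd/QEMU driver.
    import threading

    import libvirt

    def lifecycle_cb(conn, dom, event, detail, opaque):
        # Invoked on domain start/stop/suspend/etc.; Nova maps these to
        # instance lifecycle events.
        print('domain %s event=%d detail=%d' % (dom.name(), event, detail))

    # Must run before opening the connection that registers callbacks.
    libvirt.virEventRegisterDefaultImpl()

    def _event_loop():
        while True:
            libvirt.virEventRunDefaultImpl()

    threading.Thread(target=_event_loop, daemon=True).start()

    conn = libvirt.open('qemu:///system')
    conn.domainEventRegisterAny(None, libvirt.VIR_DOMAIN_EVENT_ID_LIFECYCLE,
                                lifecycle_cb, None)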
Feb  2 04:57:55 np0005604790 python3.9[252339]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
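The ansible record above shows containers.podman.podman_container being asked to make the nova_nvme_cleaner container absent (state=absent with force_delete=True). Roughly the same effect with the podman CLI, sketched here in Python as an illustration rather than what the module itself executes:

    # Rough CLI equivalent of the ansible task logged above.
    import subprocess

    NAME = 'nova_nvme_cleaner'

    # 'podman container exists' exits 0 when the container is present.
    if subprocess.run(['podman', 'container', 'exists', NAME]).returncode == 0:
        # --force stops a running container before removing it, matching
        # force_delete=True in the task parameters.
        subprocess.run(['podman', 'rm', '--force', NAME], check=True)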
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.956 251722 WARNING nova.virt.libvirt.driver [None req-b613abe0-5893-4762-aca0-a82d1e952914 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Feb  2 04:57:55 np0005604790 nova_compute[251716]: 2026-02-02 09:57:55.957 251722 DEBUG nova.virt.libvirt.volume.mount [None req-b613abe0-5893-4762-aca0-a82d1e952914 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Feb  2 04:57:56 np0005604790 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb  2 04:57:56 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v548: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:57:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:57:56 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f20a0002200 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:57:56 np0005604790 nova_compute[251716]: 2026-02-02 09:57:56.823 251722 INFO nova.virt.libvirt.host [None req-b613abe0-5893-4762-aca0-a82d1e952914 - - - - - -] Libvirt host capabilities <capabilities>
Feb  2 04:57:56 np0005604790 nova_compute[251716]: 
Feb  2 04:57:56 np0005604790 nova_compute[251716]:  <host>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:    <uuid>ef282098-beab-4a99-a713-4af58aea9f62</uuid>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:    <cpu>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <arch>x86_64</arch>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model>EPYC-Rome-v4</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <vendor>AMD</vendor>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <microcode version='16777317'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <signature family='23' model='49' stepping='0'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <maxphysaddr mode='emulate' bits='40'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <feature name='x2apic'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <feature name='tsc-deadline'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <feature name='osxsave'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <feature name='hypervisor'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <feature name='tsc_adjust'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <feature name='spec-ctrl'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <feature name='stibp'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <feature name='arch-capabilities'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <feature name='ssbd'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <feature name='cmp_legacy'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <feature name='topoext'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <feature name='virt-ssbd'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <feature name='lbrv'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <feature name='tsc-scale'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <feature name='vmcb-clean'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <feature name='pause-filter'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <feature name='pfthreshold'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <feature name='svme-addr-chk'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <feature name='rdctl-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <feature name='skip-l1dfl-vmentry'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <feature name='mds-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <feature name='pschange-mc-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <pages unit='KiB' size='4'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <pages unit='KiB' size='2048'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <pages unit='KiB' size='1048576'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:    </cpu>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:    <power_management>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <suspend_mem/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:    </power_management>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:    <iommu support='no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:    <migration_features>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <live/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <uri_transports>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <uri_transport>tcp</uri_transport>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <uri_transport>rdma</uri_transport>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </uri_transports>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:    </migration_features>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:    <topology>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <cells num='1'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <cell id='0'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:          <memory unit='KiB'>7864292</memory>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:          <pages unit='KiB' size='4'>1966073</pages>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:          <pages unit='KiB' size='2048'>0</pages>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:          <pages unit='KiB' size='1048576'>0</pages>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:          <distances>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:            <sibling id='0' value='10'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:          </distances>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:          <cpus num='8'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:          </cpus>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        </cell>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </cells>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:    </topology>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:    <cache>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:    </cache>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:    <secmodel>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model>selinux</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <doi>0</doi>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:    </secmodel>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:    <secmodel>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model>dac</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <doi>0</doi>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <baselabel type='kvm'>+107:+107</baselabel>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <baselabel type='qemu'>+107:+107</baselabel>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:    </secmodel>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:  </host>
Feb  2 04:57:56 np0005604790 nova_compute[251716]: 
Feb  2 04:57:56 np0005604790 nova_compute[251716]:  <guest>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:    <os_type>hvm</os_type>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:    <arch name='i686'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <wordsize>32</wordsize>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <domain type='qemu'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <domain type='kvm'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:    </arch>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:    <features>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <pae/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <nonpae/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <acpi default='on' toggle='yes'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <apic default='on' toggle='no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <cpuselection/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <deviceboot/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <disksnapshot default='on' toggle='no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <externalSnapshot/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:    </features>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:  </guest>
Feb  2 04:57:56 np0005604790 nova_compute[251716]: 
Feb  2 04:57:56 np0005604790 nova_compute[251716]:  <guest>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:    <os_type>hvm</os_type>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:    <arch name='x86_64'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <wordsize>64</wordsize>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <domain type='qemu'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <domain type='kvm'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:    </arch>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:    <features>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <acpi default='on' toggle='yes'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <apic default='on' toggle='no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <cpuselection/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <deviceboot/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <disksnapshot default='on' toggle='no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <externalSnapshot/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:    </features>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:  </guest>
Feb  2 04:57:56 np0005604790 nova_compute[251716]: 
Feb  2 04:57:56 np0005604790 nova_compute[251716]: </capabilities>
Feb  2 04:57:56 np0005604790 nova_compute[251716]: #033[00m
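The XML dump ending above is the verbatim result of libvirt's getCapabilities() call, which Nova logs once at startup and then parses for the host CPU model, NUMA cells and supported guest architectures. A minimal sketch of retrieving and reading the same document, assuming a local qemu:///system connection:

    # Fetch and inspect the <capabilities> XML that nova logs above.
    from xml.etree import ElementTree

    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    caps = ElementTree.fromstring(conn.getCapabilities())

    arch = caps.findtext('host/cpu/arch')    # x86_64 on this host
    model = caps.findtext('host/cpu/model')  # EPYC-Rome-v4 on this host
    cells = caps.findall('host/topology/cells/cell')  # NUMA cells (1 here)
    guest_archs = [g.find('arch').get('name') for g in caps.findall('guest')]
    print(arch, model, len(cells), guest_archs)       # ... ['i686', 'x86_64']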
Feb  2 04:57:56 np0005604790 nova_compute[251716]: 2026-02-02 09:57:56.828 251722 DEBUG nova.virt.libvirt.host [None req-b613abe0-5893-4762-aca0-a82d1e952914 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Feb  2 04:57:56 np0005604790 nova_compute[251716]: 2026-02-02 09:57:56.866 251722 DEBUG nova.virt.libvirt.host [None req-b613abe0-5893-4762-aca0-a82d1e952914 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Feb  2 04:57:56 np0005604790 nova_compute[251716]: <domainCapabilities>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:  <path>/usr/libexec/qemu-kvm</path>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:  <domain>kvm</domain>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:  <machine>pc-i440fx-rhel7.6.0</machine>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:  <arch>i686</arch>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:  <vcpu max='240'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:  <iothreads supported='yes'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:  <os supported='yes'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:    <enum name='firmware'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:    <loader supported='yes'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <enum name='type'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>rom</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>pflash</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <enum name='readonly'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>yes</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>no</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <enum name='secure'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>no</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:    </loader>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:  </os>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:  <cpu>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:    <mode name='host-passthrough' supported='yes'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <enum name='hostPassthroughMigratable'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>on</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>off</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:    </mode>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:    <mode name='maximum' supported='yes'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <enum name='maximumMigratable'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>on</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>off</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:    </mode>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:    <mode name='host-model' supported='yes'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model fallback='forbid'>EPYC-Rome</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <vendor>AMD</vendor>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <maxphysaddr mode='passthrough' limit='40'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <feature policy='require' name='x2apic'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <feature policy='require' name='tsc-deadline'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <feature policy='require' name='hypervisor'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <feature policy='require' name='tsc_adjust'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <feature policy='require' name='spec-ctrl'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <feature policy='require' name='stibp'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <feature policy='require' name='ssbd'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <feature policy='require' name='cmp_legacy'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <feature policy='require' name='overflow-recov'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <feature policy='require' name='succor'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <feature policy='require' name='ibrs'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <feature policy='require' name='amd-ssbd'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <feature policy='require' name='virt-ssbd'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <feature policy='require' name='lbrv'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <feature policy='require' name='tsc-scale'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <feature policy='require' name='vmcb-clean'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <feature policy='require' name='flushbyasid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <feature policy='require' name='pause-filter'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <feature policy='require' name='pfthreshold'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <feature policy='require' name='svme-addr-chk'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <feature policy='require' name='lfence-always-serializing'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <feature policy='disable' name='xsaves'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:    </mode>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:    <mode name='custom' supported='yes'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='Broadwell'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='Broadwell-IBRS'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='Broadwell-noTSX'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='Broadwell-noTSX-IBRS'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='Broadwell-v1'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='Broadwell-v2'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='Broadwell-v3'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='Broadwell-v4'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='Cascadelake-Server'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='Cascadelake-Server-noTSX'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='Cascadelake-Server-v1'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='Cascadelake-Server-v2'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='Cascadelake-Server-v3'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='Cascadelake-Server-v4'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='Cascadelake-Server-v5'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='ClearwaterForest'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx-ifma'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx-ne-convert'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx-vnni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx-vnni-int16'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx-vnni-int8'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='bhi-ctrl'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='bhi-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='bus-lock-detect'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='cldemote'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='cmpccxadd'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='ddpd-u'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='fbsdp-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='fsrs'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='intel-psfd'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='ipred-ctrl'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='lam'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='mcdt-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='movdir64b'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='movdiri'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pbrsb-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='prefetchiti'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='psdp-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='rrsba-ctrl'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='sbdr-ssdp-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='serialize'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='sha512'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='sm3'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='sm4'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='ss'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='ClearwaterForest-v1'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx-ifma'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx-ne-convert'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx-vnni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx-vnni-int16'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx-vnni-int8'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='bhi-ctrl'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='bhi-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='bus-lock-detect'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='cldemote'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='cmpccxadd'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='ddpd-u'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='fbsdp-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='fsrs'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='intel-psfd'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='ipred-ctrl'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='lam'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='mcdt-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='movdir64b'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='movdiri'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pbrsb-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='prefetchiti'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='psdp-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='rrsba-ctrl'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='sbdr-ssdp-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='serialize'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='sha512'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='sm3'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='sm4'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='ss'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='Cooperlake'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512-bf16'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='taa-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='Cooperlake-v1'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512-bf16'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='taa-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='Cooperlake-v2'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512-bf16'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='taa-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='Denverton'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='mpx'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='Denverton-v1'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='mpx'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='Denverton-v2'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='Denverton-v3'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='Dhyana-v2'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='EPYC-Genoa'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='amd-psfd'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='auto-ibrs'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512-bf16'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512bitalg'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512ifma'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi2'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='la57'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='no-nested-data-bp'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='null-sel-clr-base'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='stibp-always-on'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='EPYC-Genoa-v1'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='amd-psfd'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='auto-ibrs'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512-bf16'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512bitalg'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512ifma'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi2'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='la57'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='no-nested-data-bp'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='null-sel-clr-base'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='stibp-always-on'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='EPYC-Genoa-v2'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='amd-psfd'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='auto-ibrs'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512-bf16'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512bitalg'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512ifma'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi2'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='fs-gs-base-ns'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='la57'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='no-nested-data-bp'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='null-sel-clr-base'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='perfmon-v2'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='stibp-always-on'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='EPYC-Milan'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='EPYC-Milan-v1'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='EPYC-Milan-v2'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='amd-psfd'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='no-nested-data-bp'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='null-sel-clr-base'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='stibp-always-on'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='EPYC-Milan-v3'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='amd-psfd'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='no-nested-data-bp'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='null-sel-clr-base'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='stibp-always-on'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='EPYC-Rome'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='EPYC-Rome-v1'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='EPYC-Rome-v2'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='EPYC-Rome-v3'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='EPYC-Turin'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='amd-psfd'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='auto-ibrs'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx-vnni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512-bf16'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512-vp2intersect'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512bitalg'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512ifma'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi2'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='fs-gs-base-ns'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='ibpb-brtype'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='la57'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='movdir64b'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='movdiri'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='no-nested-data-bp'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='null-sel-clr-base'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='perfmon-v2'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='prefetchi'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='sbpb'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='srso-user-kernel-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='stibp-always-on'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='EPYC-Turin-v1'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='amd-psfd'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='auto-ibrs'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx-vnni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512-bf16'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512-vp2intersect'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512bitalg'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512ifma'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi2'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='fs-gs-base-ns'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='ibpb-brtype'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='la57'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='movdir64b'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='movdiri'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='no-nested-data-bp'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='null-sel-clr-base'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='perfmon-v2'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='prefetchi'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='sbpb'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='srso-user-kernel-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='stibp-always-on'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='EPYC-v3'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='EPYC-v4'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='EPYC-v5'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='GraniteRapids'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='amx-bf16'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='amx-fp16'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='amx-int8'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='amx-tile'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx-vnni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512-bf16'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512-fp16'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512bitalg'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512ifma'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi2'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='bus-lock-detect'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='fbsdp-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='fsrc'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='fsrs'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='fzrm'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='la57'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='mcdt-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pbrsb-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='prefetchiti'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='psdp-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='sbdr-ssdp-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='serialize'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='taa-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='tsx-ldtrk'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='xfd'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='GraniteRapids-v1'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='amx-bf16'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='amx-fp16'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='amx-int8'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='amx-tile'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx-vnni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512-bf16'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512-fp16'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512bitalg'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512ifma'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi2'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='bus-lock-detect'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='fbsdp-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='fsrc'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='fsrs'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='fzrm'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='la57'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='mcdt-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pbrsb-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='prefetchiti'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='psdp-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='sbdr-ssdp-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='serialize'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='taa-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='tsx-ldtrk'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='xfd'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='GraniteRapids-v2'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='amx-bf16'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='amx-fp16'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='amx-int8'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='amx-tile'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx-vnni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx10'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx10-128'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx10-256'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx10-512'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512-bf16'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512-fp16'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512bitalg'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512ifma'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi2'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='bus-lock-detect'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='cldemote'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='fbsdp-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='fsrc'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='fsrs'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='fzrm'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='la57'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='mcdt-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='movdir64b'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='movdiri'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pbrsb-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='prefetchiti'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='psdp-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='sbdr-ssdp-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='serialize'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='ss'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='taa-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='tsx-ldtrk'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='xfd'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='GraniteRapids-v3'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='amx-bf16'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='amx-fp16'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='amx-int8'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='amx-tile'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx-vnni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx10'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx10-128'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx10-256'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx10-512'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512-bf16'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512-fp16'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512bitalg'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512ifma'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi2'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='bus-lock-detect'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='cldemote'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='fbsdp-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='fsrc'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='fsrs'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='fzrm'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='la57'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='mcdt-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='movdir64b'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='movdiri'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pbrsb-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='prefetchiti'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='psdp-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='sbdr-ssdp-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='serialize'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='ss'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='taa-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='tsx-ldtrk'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='xfd'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='Haswell'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='Haswell-IBRS'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='Haswell-noTSX'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='Haswell-noTSX-IBRS'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='Haswell-v1'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='Haswell-v2'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='Haswell-v3'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='Haswell-v4'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='Icelake-Server'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512bitalg'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi2'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='la57'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='Icelake-Server-noTSX'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512bitalg'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi2'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='la57'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='Icelake-Server-v1'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512bitalg'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi2'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='la57'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='Icelake-Server-v2'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512bitalg'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi2'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='la57'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='Icelake-Server-v3'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512bitalg'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi2'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='la57'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='taa-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='Icelake-Server-v4'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512bitalg'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512ifma'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi2'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='la57'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='taa-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='Icelake-Server-v5'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512bitalg'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512ifma'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi2'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='la57'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='taa-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='Icelake-Server-v6'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512bitalg'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512ifma'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi2'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='la57'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='taa-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='Icelake-Server-v7'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512bitalg'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512ifma'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi2'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='la57'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='taa-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='IvyBridge'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='IvyBridge-IBRS'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='IvyBridge-v1'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='IvyBridge-v2'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='KnightsMill'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512-4fmaps'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512-4vnniw'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512er'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512pf'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='ss'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='KnightsMill-v1'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512-4fmaps'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512-4vnniw'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512er'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512pf'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='ss'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='Opteron_G4'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='fma4'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='xop'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='Opteron_G4-v1'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='fma4'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='xop'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='Opteron_G5'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='fma4'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='tbm'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='xop'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='Opteron_G5-v1'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='fma4'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='tbm'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='xop'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='SapphireRapids'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='amx-bf16'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='amx-int8'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='amx-tile'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx-vnni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512-bf16'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512-fp16'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512bitalg'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512ifma'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi2'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='bus-lock-detect'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='fsrc'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='fsrs'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='fzrm'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='la57'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='serialize'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='taa-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='tsx-ldtrk'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='xfd'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='SapphireRapids-v1'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='amx-bf16'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='amx-int8'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='amx-tile'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx-vnni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512-bf16'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512-fp16'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512bitalg'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512ifma'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi2'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='bus-lock-detect'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='fsrc'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='fsrs'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='fzrm'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='la57'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='serialize'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='taa-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='tsx-ldtrk'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='xfd'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='SapphireRapids-v2'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='amx-bf16'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='amx-int8'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='amx-tile'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx-vnni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512-bf16'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512-fp16'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512bitalg'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512ifma'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi2'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='bus-lock-detect'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='fbsdp-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='fsrc'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='fsrs'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='fzrm'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='la57'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='psdp-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='sbdr-ssdp-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='serialize'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='taa-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='tsx-ldtrk'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='xfd'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='SapphireRapids-v3'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='amx-bf16'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='amx-int8'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='amx-tile'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx-vnni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512-bf16'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512-fp16'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512bitalg'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512ifma'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi2'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='bus-lock-detect'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='cldemote'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='fbsdp-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='fsrc'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='fsrs'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='fzrm'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='la57'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='movdir64b'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='movdiri'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='psdp-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='sbdr-ssdp-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='serialize'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='ss'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='taa-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='tsx-ldtrk'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='xfd'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='SapphireRapids-v4'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='amx-bf16'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='amx-int8'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='amx-tile'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx-vnni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512-bf16'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512-fp16'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512bitalg'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512ifma'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi2'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='bus-lock-detect'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='cldemote'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='fbsdp-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='fsrc'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='fsrs'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='fzrm'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='la57'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='movdir64b'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='movdiri'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='psdp-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='sbdr-ssdp-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='serialize'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='ss'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='taa-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='tsx-ldtrk'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='xfd'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='SierraForest'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx-ifma'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx-ne-convert'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx-vnni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx-vnni-int8'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='bus-lock-detect'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='cmpccxadd'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='fbsdp-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='fsrs'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='mcdt-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pbrsb-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='psdp-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='sbdr-ssdp-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='serialize'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='SierraForest-v1'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx-ifma'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx-ne-convert'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx-vnni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx-vnni-int8'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='bus-lock-detect'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='cmpccxadd'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='fbsdp-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='fsrs'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='mcdt-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pbrsb-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='psdp-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='sbdr-ssdp-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='serialize'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='SierraForest-v2'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx-ifma'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx-ne-convert'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx-vnni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx-vnni-int8'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='bhi-ctrl'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='bus-lock-detect'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='cldemote'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='cmpccxadd'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='fbsdp-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='fsrs'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='intel-psfd'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='ipred-ctrl'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='lam'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='mcdt-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='movdir64b'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='movdiri'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pbrsb-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='psdp-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='rrsba-ctrl'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='sbdr-ssdp-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='serialize'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='ss'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='SierraForest-v3'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx-ifma'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx-ne-convert'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx-vnni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx-vnni-int8'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='bhi-ctrl'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='bus-lock-detect'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='cldemote'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='cmpccxadd'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='fbsdp-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='fsrs'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='intel-psfd'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='ipred-ctrl'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='lam'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='mcdt-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='movdir64b'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='movdiri'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pbrsb-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='psdp-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='rrsba-ctrl'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='sbdr-ssdp-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='serialize'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='ss'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='Skylake-Client'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='Skylake-Client-IBRS'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='Skylake-Client-v1'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='Skylake-Client-v2'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='Skylake-Client-v3'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='Skylake-Client-v4'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='Skylake-Server'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='Skylake-Server-IBRS'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='Skylake-Server-v1'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='Skylake-Server-v2'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='Skylake-Server-v3'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='Skylake-Server-v4'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='Skylake-Server-v5'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='Snowridge'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='cldemote'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='core-capability'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='movdir64b'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='movdiri'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='mpx'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='split-lock-detect'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='Snowridge-v1'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='cldemote'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='core-capability'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='movdir64b'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='movdiri'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='mpx'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='split-lock-detect'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='Snowridge-v2'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='cldemote'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='core-capability'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='movdir64b'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='movdiri'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='split-lock-detect'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='Snowridge-v3'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='cldemote'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='core-capability'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='movdir64b'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='movdiri'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='split-lock-detect'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='Snowridge-v4'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='cldemote'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='movdir64b'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='movdiri'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='athlon'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='3dnow'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='3dnowext'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='athlon-v1'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='3dnow'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='3dnowext'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='core2duo'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='ss'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='core2duo-v1'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='ss'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='coreduo'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='ss'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='coreduo-v1'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='ss'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='n270'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='ss'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='n270-v1'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='ss'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='phenom'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='3dnow'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='3dnowext'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='phenom-v1'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='3dnow'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='3dnowext'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:    </mode>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:  </cpu>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:  <memoryBacking supported='yes'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:    <enum name='sourceType'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <value>file</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <value>anonymous</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <value>memfd</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:    </enum>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:  </memoryBacking>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:  <devices>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:    <disk supported='yes'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <enum name='diskDevice'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>disk</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>cdrom</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>floppy</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>lun</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <enum name='bus'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>ide</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>fdc</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>scsi</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>virtio</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>usb</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>sata</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <enum name='model'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>virtio</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>virtio-transitional</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>virtio-non-transitional</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:    </disk>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:    <graphics supported='yes'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <enum name='type'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>vnc</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>egl-headless</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>dbus</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:    </graphics>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:    <video supported='yes'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <enum name='modelType'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>vga</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>cirrus</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>virtio</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>none</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>bochs</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>ramfb</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:    </video>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:    <hostdev supported='yes'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <enum name='mode'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>subsystem</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <enum name='startupPolicy'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>default</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>mandatory</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>requisite</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>optional</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <enum name='subsysType'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>usb</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>pci</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>scsi</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <enum name='capsType'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <enum name='pciBackend'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:    </hostdev>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:    <rng supported='yes'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <enum name='model'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>virtio</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>virtio-transitional</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>virtio-non-transitional</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <enum name='backendModel'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>random</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>egd</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>builtin</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:    </rng>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:    <filesystem supported='yes'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <enum name='driverType'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>path</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>handle</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>virtiofs</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:    </filesystem>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:    <tpm supported='yes'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <enum name='model'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>tpm-tis</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>tpm-crb</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <enum name='backendModel'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>emulator</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>external</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <enum name='backendVersion'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>2.0</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:    </tpm>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:    <redirdev supported='yes'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <enum name='bus'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>usb</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:    </redirdev>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:    <channel supported='yes'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <enum name='type'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>pty</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>unix</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:    </channel>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:    <crypto supported='yes'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <enum name='model'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <enum name='type'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>qemu</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <enum name='backendModel'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>builtin</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:    </crypto>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:    <interface supported='yes'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <enum name='backendType'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>default</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>passt</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:    </interface>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:    <panic supported='yes'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <enum name='model'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>isa</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>hyperv</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:    </panic>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:    <console supported='yes'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <enum name='type'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>null</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>vc</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>pty</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>dev</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>file</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>pipe</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>stdio</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>udp</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>tcp</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>unix</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>qemu-vdagent</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>dbus</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:    </console>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:  </devices>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:  <features>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:    <gic supported='no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:    <vmcoreinfo supported='yes'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:    <genid supported='yes'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:    <backingStoreInput supported='yes'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:    <backup supported='yes'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:    <async-teardown supported='yes'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:    <s390-pv supported='no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:    <ps2 supported='yes'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:    <tdx supported='no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:    <sev supported='no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:    <sgx supported='no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:    <hyperv supported='yes'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <enum name='features'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>relaxed</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>vapic</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>spinlocks</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>vpindex</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>runtime</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>synic</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>stimer</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>reset</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>vendor_id</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>frequencies</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>reenlightenment</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>tlbflush</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>ipi</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>avic</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>emsr_bitmap</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>xmm_input</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <defaults>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <spinlocks>4095</spinlocks>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <stimer_direct>on</stimer_direct>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <tlbflush_direct>on</tlbflush_direct>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <tlbflush_extended>on</tlbflush_extended>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <vendor_id>Linux KVM Hv</vendor_id>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </defaults>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:    </hyperv>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:    <launchSecurity supported='no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:  </features>
Feb  2 04:57:56 np0005604790 nova_compute[251716]: </domainCapabilities>
Feb  2 04:57:56 np0005604790 nova_compute[251716]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
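The dump above is the <domainCapabilities> XML that libvirt returns for a given emulator/arch/machine/virttype tuple; nova's _get_domain_capabilities (host.py:1037, referenced in the line above) logs it verbatim at debug level. As a minimal sketch only, and not Nova's internal implementation, the same document can be fetched with the libvirt Python bindings; the argument values are taken from the arch=i686 dump that follows, while the qemu:///system URI is an assumption for a local host:

    import libvirt

    # Assumption: a local libvirt daemon reachable via qemu:///system.
    conn = libvirt.open("qemu:///system")

    # getDomainCapabilities(emulatorbin, arch, machine, virttype, flags);
    # values below mirror the <path>/<arch>/<machine>/<domain> elements
    # of the capability dump logged after this point.
    xml = conn.getDomainCapabilities(
        "/usr/libexec/qemu-kvm",   # <path>
        "i686",                    # <arch>
        "pc-q35-rhel9.8.0",        # <machine>
        "kvm",                     # <domain>
        0,                         # flags (none defined for this call)
    )
    print(xml)  # prints the same <domainCapabilities> document seen above
    conn.close()

From a shell, `virsh domcapabilities --emulatorbin /usr/libexec/qemu-kvm --arch i686 --machine pc-q35-rhel9.8.0 --virttype kvm` returns the equivalent XML.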
Feb  2 04:57:56 np0005604790 nova_compute[251716]: 2026-02-02 09:57:56.880 251722 DEBUG nova.virt.libvirt.host [None req-b613abe0-5893-4762-aca0-a82d1e952914 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Feb  2 04:57:56 np0005604790 nova_compute[251716]: <domainCapabilities>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:  <path>/usr/libexec/qemu-kvm</path>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:  <domain>kvm</domain>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:  <machine>pc-q35-rhel9.8.0</machine>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:  <arch>i686</arch>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:  <vcpu max='4096'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:  <iothreads supported='yes'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:  <os supported='yes'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:    <enum name='firmware'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:    <loader supported='yes'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <enum name='type'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>rom</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>pflash</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <enum name='readonly'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>yes</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>no</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <enum name='secure'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>no</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:    </loader>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:  </os>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:  <cpu>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:    <mode name='host-passthrough' supported='yes'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <enum name='hostPassthroughMigratable'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>on</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>off</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:    </mode>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:    <mode name='maximum' supported='yes'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <enum name='maximumMigratable'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>on</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <value>off</value>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:    </mode>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:    <mode name='host-model' supported='yes'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model fallback='forbid'>EPYC-Rome</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <vendor>AMD</vendor>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <maxphysaddr mode='passthrough' limit='40'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <feature policy='require' name='x2apic'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <feature policy='require' name='tsc-deadline'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <feature policy='require' name='hypervisor'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <feature policy='require' name='tsc_adjust'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <feature policy='require' name='spec-ctrl'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <feature policy='require' name='stibp'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <feature policy='require' name='ssbd'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <feature policy='require' name='cmp_legacy'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <feature policy='require' name='overflow-recov'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <feature policy='require' name='succor'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <feature policy='require' name='ibrs'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <feature policy='require' name='amd-ssbd'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <feature policy='require' name='virt-ssbd'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <feature policy='require' name='lbrv'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <feature policy='require' name='tsc-scale'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <feature policy='require' name='vmcb-clean'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <feature policy='require' name='flushbyasid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <feature policy='require' name='pause-filter'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <feature policy='require' name='pfthreshold'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <feature policy='require' name='svme-addr-chk'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <feature policy='require' name='lfence-always-serializing'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <feature policy='disable' name='xsaves'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:    </mode>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:    <mode name='custom' supported='yes'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='Broadwell'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='Broadwell-IBRS'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='Broadwell-noTSX'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='Broadwell-noTSX-IBRS'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='Broadwell-v1'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='Broadwell-v2'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='Broadwell-v3'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='Broadwell-v4'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='Cascadelake-Server'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='Cascadelake-Server-noTSX'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='Cascadelake-Server-v1'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='Cascadelake-Server-v2'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='Cascadelake-Server-v3'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='Cascadelake-Server-v4'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='Cascadelake-Server-v5'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='ClearwaterForest'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx-ifma'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx-ne-convert'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx-vnni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx-vnni-int16'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx-vnni-int8'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='bhi-ctrl'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='bhi-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='bus-lock-detect'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='cldemote'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='cmpccxadd'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='ddpd-u'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='fbsdp-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='fsrs'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='intel-psfd'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='ipred-ctrl'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='lam'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='mcdt-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='movdir64b'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='movdiri'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pbrsb-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='prefetchiti'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='psdp-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='rrsba-ctrl'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='sbdr-ssdp-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='serialize'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='sha512'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='sm3'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='sm4'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='ss'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='ClearwaterForest-v1'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx-ifma'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx-ne-convert'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx-vnni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx-vnni-int16'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx-vnni-int8'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='bhi-ctrl'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='bhi-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='bus-lock-detect'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='cldemote'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='cmpccxadd'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='ddpd-u'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='fbsdp-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='fsrs'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='intel-psfd'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='ipred-ctrl'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='lam'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='mcdt-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='movdir64b'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='movdiri'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pbrsb-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='prefetchiti'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='psdp-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='rrsba-ctrl'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='sbdr-ssdp-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='serialize'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='sha512'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='sm3'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='sm4'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='ss'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='Cooperlake'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512-bf16'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='taa-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='Cooperlake-v1'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512-bf16'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='taa-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='Cooperlake-v2'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512-bf16'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='taa-no'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='Denverton'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='mpx'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='Denverton-v1'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='mpx'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='Denverton-v2'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='Denverton-v3'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='Dhyana-v2'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='EPYC-Genoa'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='amd-psfd'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='auto-ibrs'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512-bf16'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512bitalg'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512ifma'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi2'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='la57'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='no-nested-data-bp'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='null-sel-clr-base'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='stibp-always-on'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='EPYC-Genoa-v1'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='amd-psfd'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='auto-ibrs'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512-bf16'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512bitalg'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512ifma'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi2'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='la57'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='no-nested-data-bp'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='null-sel-clr-base'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='stibp-always-on'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:      <blockers model='EPYC-Genoa-v2'>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='amd-psfd'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='auto-ibrs'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512-bf16'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512bitalg'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512ifma'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi2'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='fs-gs-base-ns'/>
Feb  2 04:57:56 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='la57'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='no-nested-data-bp'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='null-sel-clr-base'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='perfmon-v2'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='stibp-always-on'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='EPYC-Milan'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='EPYC-Milan-v1'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='EPYC-Milan-v2'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amd-psfd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='no-nested-data-bp'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='null-sel-clr-base'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='stibp-always-on'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='EPYC-Milan-v3'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amd-psfd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='no-nested-data-bp'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='null-sel-clr-base'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='stibp-always-on'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='EPYC-Rome'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='EPYC-Rome-v1'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='EPYC-Rome-v2'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='EPYC-Rome-v3'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='EPYC-Turin'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amd-psfd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='auto-ibrs'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-bf16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-vp2intersect'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bitalg'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512ifma'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi2'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fs-gs-base-ns'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ibpb-brtype'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='la57'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='movdir64b'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='movdiri'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='no-nested-data-bp'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='null-sel-clr-base'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='perfmon-v2'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='prefetchi'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='sbpb'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='srso-user-kernel-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='stibp-always-on'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='EPYC-Turin-v1'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amd-psfd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='auto-ibrs'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-bf16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-vp2intersect'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bitalg'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512ifma'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi2'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fs-gs-base-ns'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ibpb-brtype'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='la57'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='movdir64b'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='movdiri'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='no-nested-data-bp'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='null-sel-clr-base'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='perfmon-v2'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='prefetchi'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='sbpb'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='srso-user-kernel-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='stibp-always-on'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='EPYC-v3'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='EPYC-v4'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='EPYC-v5'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='GraniteRapids'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amx-bf16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amx-fp16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amx-int8'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amx-tile'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-bf16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-fp16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bitalg'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512ifma'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi2'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='bus-lock-detect'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fbsdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrc'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrs'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fzrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='la57'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='mcdt-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pbrsb-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='prefetchiti'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='psdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='sbdr-ssdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='serialize'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='taa-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='tsx-ldtrk'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xfd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='GraniteRapids-v1'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amx-bf16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amx-fp16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amx-int8'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amx-tile'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-bf16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-fp16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bitalg'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512ifma'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi2'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='bus-lock-detect'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fbsdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrc'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrs'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fzrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='la57'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='mcdt-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pbrsb-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='prefetchiti'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='psdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='sbdr-ssdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='serialize'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='taa-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='tsx-ldtrk'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xfd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='GraniteRapids-v2'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amx-bf16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amx-fp16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amx-int8'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amx-tile'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx10'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx10-128'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx10-256'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx10-512'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-bf16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-fp16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bitalg'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512ifma'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi2'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='bus-lock-detect'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='cldemote'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fbsdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrc'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrs'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fzrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='la57'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='mcdt-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='movdir64b'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='movdiri'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pbrsb-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='prefetchiti'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='psdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='sbdr-ssdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='serialize'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ss'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='taa-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='tsx-ldtrk'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xfd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='GraniteRapids-v3'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amx-bf16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amx-fp16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amx-int8'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amx-tile'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx10'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx10-128'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx10-256'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx10-512'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-bf16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-fp16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bitalg'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512ifma'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi2'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='bus-lock-detect'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='cldemote'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fbsdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrc'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrs'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fzrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='la57'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='mcdt-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='movdir64b'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='movdiri'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pbrsb-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='prefetchiti'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='psdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='sbdr-ssdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='serialize'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ss'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='taa-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='tsx-ldtrk'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xfd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Haswell'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Haswell-IBRS'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Haswell-noTSX'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Haswell-noTSX-IBRS'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Haswell-v1'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Haswell-v2'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Haswell-v3'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Haswell-v4'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Icelake-Server'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bitalg'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi2'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='la57'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Icelake-Server-noTSX'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bitalg'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi2'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='la57'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Icelake-Server-v1'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bitalg'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi2'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='la57'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Icelake-Server-v2'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bitalg'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi2'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='la57'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Icelake-Server-v3'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bitalg'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi2'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='la57'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='taa-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Icelake-Server-v4'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bitalg'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512ifma'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi2'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='la57'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='taa-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Icelake-Server-v5'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bitalg'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512ifma'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi2'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='la57'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='taa-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Icelake-Server-v6'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bitalg'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512ifma'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi2'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='la57'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='taa-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Icelake-Server-v7'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bitalg'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512ifma'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi2'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='la57'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='taa-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='IvyBridge'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='IvyBridge-IBRS'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='IvyBridge-v1'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='IvyBridge-v2'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='KnightsMill'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-4fmaps'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-4vnniw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512er'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512pf'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ss'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='KnightsMill-v1'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-4fmaps'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-4vnniw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512er'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512pf'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ss'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Opteron_G4'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fma4'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xop'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Opteron_G4-v1'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fma4'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xop'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Opteron_G5'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fma4'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='tbm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xop'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Opteron_G5-v1'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fma4'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='tbm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xop'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='SapphireRapids'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amx-bf16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amx-int8'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amx-tile'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-bf16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-fp16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bitalg'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512ifma'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi2'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='bus-lock-detect'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrc'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrs'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fzrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='la57'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='serialize'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='taa-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='tsx-ldtrk'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xfd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='SapphireRapids-v1'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amx-bf16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amx-int8'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amx-tile'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-bf16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-fp16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bitalg'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512ifma'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi2'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='bus-lock-detect'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrc'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrs'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fzrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='la57'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='serialize'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='taa-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='tsx-ldtrk'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xfd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='SapphireRapids-v2'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amx-bf16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amx-int8'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amx-tile'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-bf16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-fp16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bitalg'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512ifma'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi2'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='bus-lock-detect'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fbsdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrc'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrs'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fzrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='la57'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='psdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='sbdr-ssdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='serialize'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='taa-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='tsx-ldtrk'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xfd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='SapphireRapids-v3'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amx-bf16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amx-int8'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amx-tile'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-bf16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-fp16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bitalg'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512ifma'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi2'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='bus-lock-detect'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='cldemote'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fbsdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrc'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrs'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fzrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='la57'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='movdir64b'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='movdiri'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='psdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='sbdr-ssdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='serialize'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ss'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='taa-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='tsx-ldtrk'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xfd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='SapphireRapids-v4'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amx-bf16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amx-int8'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amx-tile'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-bf16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-fp16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bitalg'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512ifma'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi2'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='bus-lock-detect'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='cldemote'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fbsdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrc'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrs'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fzrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='la57'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='movdir64b'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='movdiri'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='psdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='sbdr-ssdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='serialize'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ss'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='taa-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='tsx-ldtrk'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xfd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='SierraForest'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-ifma'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-ne-convert'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-vnni-int8'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='bus-lock-detect'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='cmpccxadd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fbsdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrs'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='mcdt-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pbrsb-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='psdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='sbdr-ssdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='serialize'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='SierraForest-v1'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-ifma'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-ne-convert'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-vnni-int8'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='bus-lock-detect'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='cmpccxadd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fbsdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrs'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='mcdt-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pbrsb-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='psdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='sbdr-ssdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='serialize'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='SierraForest-v2'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-ifma'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-ne-convert'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-vnni-int8'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='bhi-ctrl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='bus-lock-detect'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='cldemote'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='cmpccxadd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fbsdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrs'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='intel-psfd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ipred-ctrl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='lam'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='mcdt-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='movdir64b'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='movdiri'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pbrsb-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='psdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rrsba-ctrl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='sbdr-ssdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='serialize'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ss'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='SierraForest-v3'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-ifma'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-ne-convert'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-vnni-int8'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='bhi-ctrl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='bus-lock-detect'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='cldemote'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='cmpccxadd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fbsdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrs'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='intel-psfd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ipred-ctrl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='lam'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='mcdt-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='movdir64b'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='movdiri'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pbrsb-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='psdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rrsba-ctrl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='sbdr-ssdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='serialize'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ss'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Skylake-Client'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Skylake-Client-IBRS'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Skylake-Client-v1'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Skylake-Client-v2'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Skylake-Client-v3'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Skylake-Client-v4'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Skylake-Server'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Skylake-Server-IBRS'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Skylake-Server-v1'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Skylake-Server-v2'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Skylake-Server-v3'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Skylake-Server-v4'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Skylake-Server-v5'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Snowridge'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='cldemote'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='core-capability'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='movdir64b'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='movdiri'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='mpx'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='split-lock-detect'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Snowridge-v1'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='cldemote'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='core-capability'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='movdir64b'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='movdiri'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='mpx'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='split-lock-detect'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Snowridge-v2'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='cldemote'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='core-capability'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='movdir64b'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='movdiri'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='split-lock-detect'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Snowridge-v3'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='cldemote'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='core-capability'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='movdir64b'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='movdiri'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='split-lock-detect'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Snowridge-v4'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='cldemote'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='movdir64b'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='movdiri'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='athlon'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='3dnow'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='3dnowext'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='athlon-v1'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='3dnow'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='3dnowext'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='core2duo'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ss'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='core2duo-v1'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ss'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='coreduo'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ss'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='coreduo-v1'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ss'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='n270'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ss'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='n270-v1'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ss'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='phenom'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='3dnow'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='3dnowext'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='phenom-v1'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='3dnow'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='3dnowext'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    </mode>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:  </cpu>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:  <memoryBacking supported='yes'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    <enum name='sourceType'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <value>file</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <value>anonymous</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <value>memfd</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    </enum>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:  </memoryBacking>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:  <devices>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    <disk supported='yes'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <enum name='diskDevice'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>disk</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>cdrom</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>floppy</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>lun</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <enum name='bus'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>fdc</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>scsi</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>virtio</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>usb</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>sata</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <enum name='model'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>virtio</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>virtio-transitional</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>virtio-non-transitional</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    </disk>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    <graphics supported='yes'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <enum name='type'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>vnc</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>egl-headless</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>dbus</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    </graphics>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    <video supported='yes'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <enum name='modelType'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>vga</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>cirrus</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>virtio</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>none</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>bochs</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>ramfb</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    </video>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    <hostdev supported='yes'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <enum name='mode'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>subsystem</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <enum name='startupPolicy'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>default</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>mandatory</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>requisite</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>optional</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <enum name='subsysType'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>usb</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>pci</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>scsi</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <enum name='capsType'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <enum name='pciBackend'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    </hostdev>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    <rng supported='yes'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <enum name='model'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>virtio</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>virtio-transitional</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>virtio-non-transitional</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <enum name='backendModel'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>random</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>egd</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>builtin</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    </rng>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    <filesystem supported='yes'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <enum name='driverType'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>path</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>handle</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>virtiofs</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    </filesystem>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    <tpm supported='yes'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <enum name='model'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>tpm-tis</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>tpm-crb</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <enum name='backendModel'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>emulator</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>external</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <enum name='backendVersion'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>2.0</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    </tpm>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    <redirdev supported='yes'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <enum name='bus'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>usb</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    </redirdev>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    <channel supported='yes'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <enum name='type'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>pty</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>unix</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    </channel>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    <crypto supported='yes'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <enum name='model'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <enum name='type'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>qemu</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <enum name='backendModel'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>builtin</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    </crypto>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    <interface supported='yes'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <enum name='backendType'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>default</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>passt</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    </interface>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    <panic supported='yes'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <enum name='model'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>isa</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>hyperv</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    </panic>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    <console supported='yes'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <enum name='type'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>null</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>vc</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>pty</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>dev</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>file</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>pipe</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>stdio</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>udp</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>tcp</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>unix</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>qemu-vdagent</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>dbus</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    </console>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:  </devices>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:  <features>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    <gic supported='no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    <vmcoreinfo supported='yes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    <genid supported='yes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    <backingStoreInput supported='yes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    <backup supported='yes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    <async-teardown supported='yes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    <s390-pv supported='no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    <ps2 supported='yes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    <tdx supported='no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    <sev supported='no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    <sgx supported='no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    <hyperv supported='yes'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <enum name='features'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>relaxed</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>vapic</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>spinlocks</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>vpindex</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>runtime</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>synic</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>stimer</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>reset</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>vendor_id</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>frequencies</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>reenlightenment</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>tlbflush</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>ipi</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>avic</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>emsr_bitmap</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>xmm_input</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <defaults>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <spinlocks>4095</spinlocks>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <stimer_direct>on</stimer_direct>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <tlbflush_direct>on</tlbflush_direct>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <tlbflush_extended>on</tlbflush_extended>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <vendor_id>Linux KVM Hv</vendor_id>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </defaults>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    </hyperv>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    <launchSecurity supported='no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:  </features>
Feb  2 04:57:57 np0005604790 nova_compute[251716]: </domainCapabilities>
Feb  2 04:57:57 np0005604790 nova_compute[251716]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Feb  2 04:57:57 np0005604790 nova_compute[251716]: 2026-02-02 09:57:56.960 251722 DEBUG nova.virt.libvirt.host [None req-b613abe0-5893-4762-aca0-a82d1e952914 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Feb  2 04:57:57 np0005604790 nova_compute[251716]: 2026-02-02 09:57:56.965 251722 DEBUG nova.virt.libvirt.host [None req-b613abe0-5893-4762-aca0-a82d1e952914 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Feb  2 04:57:57 np0005604790 nova_compute[251716]: <domainCapabilities>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:  <path>/usr/libexec/qemu-kvm</path>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:  <domain>kvm</domain>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:  <machine>pc-i440fx-rhel7.6.0</machine>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:  <arch>x86_64</arch>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:  <vcpu max='240'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:  <iothreads supported='yes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:  <os supported='yes'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    <enum name='firmware'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    <loader supported='yes'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <enum name='type'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>rom</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>pflash</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <enum name='readonly'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>yes</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>no</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <enum name='secure'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>no</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    </loader>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:  </os>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:  <cpu>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    <mode name='host-passthrough' supported='yes'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <enum name='hostPassthroughMigratable'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>on</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>off</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    </mode>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    <mode name='maximum' supported='yes'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <enum name='maximumMigratable'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>on</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>off</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    </mode>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    <mode name='host-model' supported='yes'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model fallback='forbid'>EPYC-Rome</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <vendor>AMD</vendor>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <maxphysaddr mode='passthrough' limit='40'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <feature policy='require' name='x2apic'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <feature policy='require' name='tsc-deadline'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <feature policy='require' name='hypervisor'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <feature policy='require' name='tsc_adjust'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <feature policy='require' name='spec-ctrl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <feature policy='require' name='stibp'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <feature policy='require' name='ssbd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <feature policy='require' name='cmp_legacy'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <feature policy='require' name='overflow-recov'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <feature policy='require' name='succor'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <feature policy='require' name='ibrs'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <feature policy='require' name='amd-ssbd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <feature policy='require' name='virt-ssbd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <feature policy='require' name='lbrv'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <feature policy='require' name='tsc-scale'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <feature policy='require' name='vmcb-clean'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <feature policy='require' name='flushbyasid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <feature policy='require' name='pause-filter'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <feature policy='require' name='pfthreshold'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <feature policy='require' name='svme-addr-chk'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <feature policy='require' name='lfence-always-serializing'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <feature policy='disable' name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    </mode>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    <mode name='custom' supported='yes'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Broadwell'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Broadwell-IBRS'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Broadwell-noTSX'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Broadwell-noTSX-IBRS'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Broadwell-v1'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Broadwell-v2'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Broadwell-v3'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Broadwell-v4'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Cascadelake-Server'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Cascadelake-Server-noTSX'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Cascadelake-Server-v1'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Cascadelake-Server-v2'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Cascadelake-Server-v3'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Cascadelake-Server-v4'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Cascadelake-Server-v5'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='ClearwaterForest'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-ifma'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-ne-convert'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-vnni-int16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-vnni-int8'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='bhi-ctrl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='bhi-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='bus-lock-detect'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='cldemote'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='cmpccxadd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ddpd-u'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fbsdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrs'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:57 np0005604790 python3.9[252578]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='intel-psfd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ipred-ctrl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='lam'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='mcdt-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='movdir64b'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='movdiri'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pbrsb-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='prefetchiti'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='psdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rrsba-ctrl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='sbdr-ssdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='serialize'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='sha512'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='sm3'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='sm4'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ss'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='ClearwaterForest-v1'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-ifma'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-ne-convert'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-vnni-int16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-vnni-int8'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='bhi-ctrl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='bhi-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='bus-lock-detect'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='cldemote'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='cmpccxadd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ddpd-u'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fbsdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrs'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='intel-psfd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ipred-ctrl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='lam'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='mcdt-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='movdir64b'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='movdiri'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pbrsb-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='prefetchiti'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='psdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rrsba-ctrl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='sbdr-ssdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='serialize'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='sha512'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='sm3'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='sm4'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ss'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Cooperlake'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-bf16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='taa-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Cooperlake-v1'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-bf16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='taa-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Cooperlake-v2'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-bf16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='taa-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Denverton'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='mpx'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Denverton-v1'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='mpx'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Denverton-v2'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Denverton-v3'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Dhyana-v2'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='EPYC-Genoa'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amd-psfd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='auto-ibrs'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-bf16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bitalg'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512ifma'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi2'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='la57'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='no-nested-data-bp'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='null-sel-clr-base'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='stibp-always-on'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='EPYC-Genoa-v1'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amd-psfd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='auto-ibrs'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-bf16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bitalg'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512ifma'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi2'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='la57'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='no-nested-data-bp'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='null-sel-clr-base'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='stibp-always-on'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='EPYC-Genoa-v2'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amd-psfd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='auto-ibrs'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-bf16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bitalg'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512ifma'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi2'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fs-gs-base-ns'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='la57'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='no-nested-data-bp'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='null-sel-clr-base'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='perfmon-v2'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='stibp-always-on'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='EPYC-Milan'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='EPYC-Milan-v1'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='EPYC-Milan-v2'>
Feb  2 04:57:57 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:57:57.059Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amd-psfd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='no-nested-data-bp'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='null-sel-clr-base'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='stibp-always-on'/>
Feb  2 04:57:57 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T09:57:57.059Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
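The two alertmanager lines interleaved here record the ceph-dashboard webhook receiver failing: Alertmanager POSTs each firing alert group as JSON to the configured webhook URLs and retries until the notify context deadline expires, which is why webhook[2] first logs "will retry later" and then "notify retry canceled". A rough sketch of one such delivery attempt, assuming Alertmanager's version-4 webhook payload shape with an illustrative alert; the endpoint is taken from the log, while the timeout value and alert labels are arbitrary:

    import json, urllib.request, urllib.error

    # Endpoint from the log; unreachable here, hence the i/o timeouts.
    url = 'http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver'

    # Minimal illustrative body in Alertmanager's webhook payload shape;
    # the real dispatcher fills these fields from the firing alert group.
    payload = {
        'version': '4',
        'status': 'firing',
        'receiver': 'ceph-dashboard',
        'alerts': [{'status': 'firing',
                    'labels': {'alertname': 'ExampleAlert'},
                    'annotations': {}}],
    }

    req = urllib.request.Request(
        url, data=json.dumps(payload).encode(),
        headers={'Content-Type': 'application/json'})
    try:
        # Each attempt is time-bounded; a dead endpoint surfaces as a
        # timeout, and retries stop once the notify deadline passes.
        urllib.request.urlopen(req, timeout=10)
    except urllib.error.URLError as exc:
        print('notify attempt failed:', exc)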
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='EPYC-Milan-v3'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amd-psfd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='no-nested-data-bp'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='null-sel-clr-base'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='stibp-always-on'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='EPYC-Rome'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='EPYC-Rome-v1'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='EPYC-Rome-v2'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='EPYC-Rome-v3'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='EPYC-Turin'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amd-psfd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='auto-ibrs'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-bf16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-vp2intersect'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bitalg'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512ifma'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi2'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fs-gs-base-ns'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ibpb-brtype'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='la57'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='movdir64b'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='movdiri'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='no-nested-data-bp'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='null-sel-clr-base'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='perfmon-v2'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='prefetchi'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='sbpb'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='srso-user-kernel-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='stibp-always-on'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='EPYC-Turin-v1'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amd-psfd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='auto-ibrs'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-bf16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-vp2intersect'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bitalg'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512ifma'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi2'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fs-gs-base-ns'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ibpb-brtype'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='la57'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='movdir64b'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='movdiri'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='no-nested-data-bp'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='null-sel-clr-base'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='perfmon-v2'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='prefetchi'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='sbpb'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='srso-user-kernel-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='stibp-always-on'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='EPYC-v3'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='EPYC-v4'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='EPYC-v5'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='GraniteRapids'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amx-bf16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amx-fp16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amx-int8'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amx-tile'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-bf16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-fp16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bitalg'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512ifma'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi2'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='bus-lock-detect'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fbsdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrc'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrs'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fzrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='la57'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='mcdt-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pbrsb-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='prefetchiti'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='psdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='sbdr-ssdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='serialize'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='taa-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='tsx-ldtrk'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xfd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='GraniteRapids-v1'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amx-bf16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amx-fp16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amx-int8'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amx-tile'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-bf16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-fp16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bitalg'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512ifma'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi2'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='bus-lock-detect'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fbsdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrc'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrs'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fzrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='la57'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='mcdt-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pbrsb-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='prefetchiti'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='psdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='sbdr-ssdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='serialize'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='taa-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='tsx-ldtrk'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xfd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='GraniteRapids-v2'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amx-bf16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amx-fp16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amx-int8'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amx-tile'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx10'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx10-128'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx10-256'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx10-512'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-bf16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-fp16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bitalg'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512ifma'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi2'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='bus-lock-detect'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='cldemote'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fbsdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrc'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrs'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fzrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='la57'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='mcdt-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='movdir64b'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='movdiri'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pbrsb-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='prefetchiti'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='psdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='sbdr-ssdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='serialize'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ss'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='taa-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='tsx-ldtrk'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xfd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='GraniteRapids-v3'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amx-bf16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amx-fp16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amx-int8'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amx-tile'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx10'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx10-128'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx10-256'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx10-512'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-bf16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-fp16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bitalg'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512ifma'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi2'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='bus-lock-detect'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='cldemote'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fbsdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrc'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrs'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fzrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='la57'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='mcdt-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='movdir64b'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='movdiri'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pbrsb-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='prefetchiti'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='psdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='sbdr-ssdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='serialize'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ss'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='taa-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='tsx-ldtrk'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xfd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Haswell'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Haswell-IBRS'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Haswell-noTSX'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Haswell-noTSX-IBRS'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Haswell-v1'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Haswell-v2'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Haswell-v3'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Haswell-v4'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Icelake-Server'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bitalg'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi2'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='la57'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Icelake-Server-noTSX'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bitalg'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi2'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='la57'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Icelake-Server-v1'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bitalg'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi2'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='la57'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Icelake-Server-v2'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bitalg'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi2'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='la57'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Icelake-Server-v3'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bitalg'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi2'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='la57'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='taa-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Icelake-Server-v4'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bitalg'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512ifma'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi2'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:57 np0005604790 systemd[1]: Stopping nova_compute container...
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='la57'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='taa-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Icelake-Server-v5'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bitalg'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512ifma'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi2'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='la57'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='taa-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Icelake-Server-v6'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bitalg'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512ifma'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi2'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='la57'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='taa-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Icelake-Server-v7'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bitalg'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512ifma'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi2'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='la57'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='taa-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='IvyBridge'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='IvyBridge-IBRS'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='IvyBridge-v1'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='IvyBridge-v2'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='KnightsMill'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-4fmaps'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-4vnniw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512er'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512pf'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ss'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='KnightsMill-v1'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-4fmaps'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-4vnniw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512er'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512pf'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ss'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Opteron_G4'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fma4'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xop'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Opteron_G4-v1'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fma4'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xop'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Opteron_G5'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fma4'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='tbm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xop'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Opteron_G5-v1'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fma4'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='tbm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xop'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='SapphireRapids'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amx-bf16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amx-int8'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amx-tile'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-bf16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-fp16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bitalg'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512ifma'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi2'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='bus-lock-detect'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrc'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrs'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fzrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='la57'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='serialize'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='taa-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='tsx-ldtrk'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xfd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='SapphireRapids-v1'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amx-bf16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amx-int8'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amx-tile'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-bf16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-fp16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bitalg'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512ifma'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi2'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='bus-lock-detect'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrc'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrs'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fzrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='la57'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='serialize'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='taa-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='tsx-ldtrk'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xfd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='SapphireRapids-v2'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amx-bf16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amx-int8'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amx-tile'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-bf16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-fp16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bitalg'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512ifma'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi2'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='bus-lock-detect'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fbsdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrc'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrs'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fzrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='la57'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='psdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='sbdr-ssdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='serialize'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='taa-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='tsx-ldtrk'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xfd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='SapphireRapids-v3'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amx-bf16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amx-int8'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amx-tile'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-bf16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-fp16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bitalg'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512ifma'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi2'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='bus-lock-detect'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='cldemote'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fbsdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrc'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrs'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fzrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='la57'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='movdir64b'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='movdiri'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='psdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='sbdr-ssdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='serialize'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ss'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='taa-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='tsx-ldtrk'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xfd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='SapphireRapids-v4'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amx-bf16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amx-int8'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amx-tile'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-bf16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-fp16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bitalg'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512ifma'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi2'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='bus-lock-detect'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='cldemote'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fbsdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrc'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrs'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fzrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='la57'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='movdir64b'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='movdiri'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='psdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='sbdr-ssdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='serialize'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ss'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='taa-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='tsx-ldtrk'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xfd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='SierraForest'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-ifma'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-ne-convert'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-vnni-int8'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='bus-lock-detect'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='cmpccxadd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fbsdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrs'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='mcdt-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pbrsb-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='psdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='sbdr-ssdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='serialize'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='SierraForest-v1'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-ifma'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-ne-convert'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-vnni-int8'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='bus-lock-detect'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='cmpccxadd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fbsdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrs'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='mcdt-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pbrsb-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='psdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='sbdr-ssdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='serialize'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='SierraForest-v2'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-ifma'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-ne-convert'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-vnni-int8'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='bhi-ctrl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='bus-lock-detect'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='cldemote'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='cmpccxadd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fbsdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrs'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='intel-psfd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ipred-ctrl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='lam'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='mcdt-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='movdir64b'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='movdiri'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pbrsb-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='psdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rrsba-ctrl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='sbdr-ssdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='serialize'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ss'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='SierraForest-v3'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-ifma'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-ne-convert'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-vnni-int8'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='bhi-ctrl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='bus-lock-detect'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='cldemote'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='cmpccxadd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fbsdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrs'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='intel-psfd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ipred-ctrl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='lam'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='mcdt-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='movdir64b'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='movdiri'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pbrsb-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='psdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rrsba-ctrl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='sbdr-ssdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='serialize'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ss'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Skylake-Client'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Skylake-Client-IBRS'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Skylake-Client-v1'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Skylake-Client-v2'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Skylake-Client-v3'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Skylake-Client-v4'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Skylake-Server'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Skylake-Server-IBRS'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Skylake-Server-v1'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Skylake-Server-v2'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Skylake-Server-v3'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Skylake-Server-v4'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Skylake-Server-v5'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Snowridge'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='cldemote'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='core-capability'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='movdir64b'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='movdiri'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='mpx'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='split-lock-detect'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Snowridge-v1'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='cldemote'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='core-capability'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='movdir64b'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='movdiri'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='mpx'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='split-lock-detect'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Snowridge-v2'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='cldemote'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='core-capability'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='movdir64b'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='movdiri'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='split-lock-detect'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Snowridge-v3'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='cldemote'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='core-capability'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='movdir64b'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='movdiri'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='split-lock-detect'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Snowridge-v4'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='cldemote'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='movdir64b'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='movdiri'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='athlon'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='3dnow'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='3dnowext'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='athlon-v1'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='3dnow'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='3dnowext'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='core2duo'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ss'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='core2duo-v1'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ss'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='coreduo'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ss'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='coreduo-v1'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ss'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='n270'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ss'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='n270-v1'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ss'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='phenom'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='3dnow'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='3dnowext'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='phenom-v1'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='3dnow'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='3dnowext'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    </mode>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:  </cpu>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:  <memoryBacking supported='yes'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    <enum name='sourceType'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <value>file</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <value>anonymous</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <value>memfd</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    </enum>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:  </memoryBacking>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:  <devices>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    <disk supported='yes'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <enum name='diskDevice'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>disk</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>cdrom</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>floppy</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>lun</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <enum name='bus'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>ide</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>fdc</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>scsi</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>virtio</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>usb</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>sata</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <enum name='model'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>virtio</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>virtio-transitional</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>virtio-non-transitional</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    </disk>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    <graphics supported='yes'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <enum name='type'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>vnc</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>egl-headless</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>dbus</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    </graphics>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    <video supported='yes'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <enum name='modelType'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>vga</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>cirrus</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>virtio</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>none</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>bochs</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>ramfb</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    </video>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    <hostdev supported='yes'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <enum name='mode'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>subsystem</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <enum name='startupPolicy'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>default</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>mandatory</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>requisite</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>optional</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <enum name='subsysType'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>usb</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>pci</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>scsi</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <enum name='capsType'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <enum name='pciBackend'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    </hostdev>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    <rng supported='yes'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <enum name='model'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>virtio</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>virtio-transitional</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>virtio-non-transitional</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <enum name='backendModel'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>random</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>egd</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>builtin</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    </rng>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    <filesystem supported='yes'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <enum name='driverType'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>path</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>handle</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>virtiofs</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    </filesystem>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    <tpm supported='yes'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <enum name='model'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>tpm-tis</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>tpm-crb</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <enum name='backendModel'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>emulator</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>external</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <enum name='backendVersion'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>2.0</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    </tpm>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    <redirdev supported='yes'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <enum name='bus'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>usb</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    </redirdev>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    <channel supported='yes'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <enum name='type'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>pty</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>unix</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    </channel>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    <crypto supported='yes'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <enum name='model'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <enum name='type'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>qemu</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <enum name='backendModel'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>builtin</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    </crypto>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    <interface supported='yes'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <enum name='backendType'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>default</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>passt</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    </interface>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    <panic supported='yes'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <enum name='model'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>isa</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>hyperv</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    </panic>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    <console supported='yes'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <enum name='type'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>null</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>vc</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>pty</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>dev</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>file</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>pipe</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>stdio</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>udp</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>tcp</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>unix</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>qemu-vdagent</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>dbus</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    </console>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:  </devices>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:  <features>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    <gic supported='no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    <vmcoreinfo supported='yes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    <genid supported='yes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    <backingStoreInput supported='yes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    <backup supported='yes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    <async-teardown supported='yes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    <s390-pv supported='no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    <ps2 supported='yes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    <tdx supported='no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    <sev supported='no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    <sgx supported='no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    <hyperv supported='yes'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <enum name='features'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>relaxed</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>vapic</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>spinlocks</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>vpindex</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>runtime</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>synic</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>stimer</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>reset</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>vendor_id</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>frequencies</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>reenlightenment</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>tlbflush</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>ipi</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>avic</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>emsr_bitmap</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>xmm_input</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <defaults>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <spinlocks>4095</spinlocks>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <stimer_direct>on</stimer_direct>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <tlbflush_direct>on</tlbflush_direct>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <tlbflush_extended>on</tlbflush_extended>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <vendor_id>Linux KVM Hv</vendor_id>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </defaults>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    </hyperv>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    <launchSecurity supported='no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:  </features>
Feb  2 04:57:57 np0005604790 nova_compute[251716]: </domainCapabilities>
Feb  2 04:57:57 np0005604790 nova_compute[251716]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
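[The <domainCapabilities> document above is what nova-compute's _get_domain_capabilities fetches and caches per (arch, machine_type) pair. A minimal libvirt-python sketch of the equivalent query, assuming a local qemu:///system connection and the /usr/libexec/qemu-kvm emulator path reported in the dump, might look like:

    # Sketch only: assumes libvirtd is reachable at qemu:///system and that
    # /usr/libexec/qemu-kvm is the emulator binary, as shown in the log above.
    import libvirt

    conn = libvirt.open("qemu:///system")
    # Same kind of query nova issues per architecture/machine-type combination.
    xml = conn.getDomainCapabilities(
        emulatorbin="/usr/libexec/qemu-kvm",
        arch="x86_64",
        machine="q35",
        virttype="kvm",
    )
    print(xml)  # prints a <domainCapabilities>...</domainCapabilities> document
    conn.close()

The next DEBUG line shows nova repeating exactly this lookup for machine_type=q35, which is why a second, near-identical capabilities dump follows.]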
Feb  2 04:57:57 np0005604790 nova_compute[251716]: 2026-02-02 09:57:57.049 251722 DEBUG nova.virt.libvirt.host [None req-b613abe0-5893-4762-aca0-a82d1e952914 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Feb  2 04:57:57 np0005604790 nova_compute[251716]: <domainCapabilities>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:  <path>/usr/libexec/qemu-kvm</path>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:  <domain>kvm</domain>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:  <machine>pc-q35-rhel9.8.0</machine>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:  <arch>x86_64</arch>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:  <vcpu max='4096'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:  <iothreads supported='yes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:  <os supported='yes'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    <enum name='firmware'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <value>efi</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    </enum>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    <loader supported='yes'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <enum name='type'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>rom</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>pflash</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <enum name='readonly'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>yes</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>no</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <enum name='secure'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>yes</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>no</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    </loader>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:  </os>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:  <cpu>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    <mode name='host-passthrough' supported='yes'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <enum name='hostPassthroughMigratable'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>on</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>off</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    </mode>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    <mode name='maximum' supported='yes'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <enum name='maximumMigratable'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>on</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>off</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    </mode>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    <mode name='host-model' supported='yes'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model fallback='forbid'>EPYC-Rome</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <vendor>AMD</vendor>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <maxphysaddr mode='passthrough' limit='40'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <feature policy='require' name='x2apic'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <feature policy='require' name='tsc-deadline'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <feature policy='require' name='hypervisor'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <feature policy='require' name='tsc_adjust'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <feature policy='require' name='spec-ctrl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <feature policy='require' name='stibp'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <feature policy='require' name='ssbd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <feature policy='require' name='cmp_legacy'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <feature policy='require' name='overflow-recov'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <feature policy='require' name='succor'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <feature policy='require' name='ibrs'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <feature policy='require' name='amd-ssbd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <feature policy='require' name='virt-ssbd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <feature policy='require' name='lbrv'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <feature policy='require' name='tsc-scale'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <feature policy='require' name='vmcb-clean'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <feature policy='require' name='flushbyasid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <feature policy='require' name='pause-filter'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <feature policy='require' name='pfthreshold'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <feature policy='require' name='svme-addr-chk'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <feature policy='require' name='lfence-always-serializing'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <feature policy='disable' name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    </mode>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    <mode name='custom' supported='yes'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Broadwell'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Broadwell-IBRS'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Broadwell-noTSX'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Broadwell-noTSX-IBRS'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Broadwell-v1'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Broadwell-v2'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Broadwell-v3'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Broadwell-v4'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Cascadelake-Server'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Cascadelake-Server-noTSX'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Cascadelake-Server-v1'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Cascadelake-Server-v2'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Cascadelake-Server-v3'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Cascadelake-Server-v4'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Cascadelake-Server-v5'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='ClearwaterForest'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-ifma'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-ne-convert'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-vnni-int16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-vnni-int8'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='bhi-ctrl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='bhi-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='bus-lock-detect'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='cldemote'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='cmpccxadd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ddpd-u'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fbsdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrs'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='intel-psfd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ipred-ctrl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='lam'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='mcdt-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='movdir64b'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='movdiri'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pbrsb-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='prefetchiti'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='psdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rrsba-ctrl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='sbdr-ssdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='serialize'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='sha512'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='sm3'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='sm4'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ss'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='ClearwaterForest-v1'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-ifma'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-ne-convert'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-vnni-int16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-vnni-int8'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='bhi-ctrl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='bhi-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='bus-lock-detect'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='cldemote'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='cmpccxadd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ddpd-u'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fbsdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrs'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='intel-psfd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ipred-ctrl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='lam'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='mcdt-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='movdir64b'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='movdiri'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pbrsb-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='prefetchiti'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='psdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rrsba-ctrl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='sbdr-ssdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='serialize'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='sha512'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='sm3'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='sm4'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ss'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Cooperlake'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-bf16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='taa-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Cooperlake-v1'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-bf16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='taa-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Cooperlake-v2'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-bf16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='taa-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Denverton'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='mpx'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Denverton-v1'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='mpx'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Denverton-v2'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Denverton-v3'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Dhyana-v2'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='EPYC-Genoa'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amd-psfd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='auto-ibrs'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-bf16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bitalg'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512ifma'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi2'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='la57'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='no-nested-data-bp'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='null-sel-clr-base'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='stibp-always-on'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='EPYC-Genoa-v1'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amd-psfd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='auto-ibrs'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-bf16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bitalg'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512ifma'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi2'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='la57'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='no-nested-data-bp'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='null-sel-clr-base'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='stibp-always-on'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='EPYC-Genoa-v2'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amd-psfd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='auto-ibrs'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-bf16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bitalg'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512ifma'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi2'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fs-gs-base-ns'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='la57'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='no-nested-data-bp'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='null-sel-clr-base'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='perfmon-v2'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='stibp-always-on'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='EPYC-Milan'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='EPYC-Milan-v1'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='EPYC-Milan-v2'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amd-psfd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='no-nested-data-bp'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='null-sel-clr-base'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='stibp-always-on'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='EPYC-Milan-v3'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amd-psfd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='no-nested-data-bp'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='null-sel-clr-base'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='stibp-always-on'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='EPYC-Rome'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='EPYC-Rome-v1'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='EPYC-Rome-v2'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='EPYC-Rome-v3'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='EPYC-Turin'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amd-psfd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='auto-ibrs'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-bf16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-vp2intersect'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bitalg'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512ifma'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi2'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fs-gs-base-ns'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ibpb-brtype'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='la57'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='movdir64b'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='movdiri'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='no-nested-data-bp'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='null-sel-clr-base'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='perfmon-v2'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='prefetchi'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='sbpb'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='srso-user-kernel-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='stibp-always-on'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='EPYC-Turin-v1'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amd-psfd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='auto-ibrs'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-bf16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-vp2intersect'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bitalg'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512ifma'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi2'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fs-gs-base-ns'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ibpb-brtype'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='la57'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='movdir64b'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='movdiri'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='no-nested-data-bp'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='null-sel-clr-base'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='perfmon-v2'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='prefetchi'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='sbpb'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='srso-user-kernel-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='stibp-always-on'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='EPYC-v3'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='EPYC-v4'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='EPYC-v5'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='GraniteRapids'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amx-bf16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amx-fp16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amx-int8'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amx-tile'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-bf16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-fp16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bitalg'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512ifma'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi2'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='bus-lock-detect'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fbsdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrc'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrs'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fzrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='la57'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='mcdt-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pbrsb-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='prefetchiti'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='psdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='sbdr-ssdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='serialize'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='taa-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='tsx-ldtrk'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xfd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='GraniteRapids-v1'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amx-bf16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amx-fp16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amx-int8'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amx-tile'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-bf16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-fp16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bitalg'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512ifma'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi2'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='bus-lock-detect'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fbsdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrc'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrs'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fzrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='la57'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='mcdt-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pbrsb-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='prefetchiti'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='psdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='sbdr-ssdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='serialize'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='taa-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='tsx-ldtrk'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xfd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='GraniteRapids-v2'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amx-bf16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amx-fp16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amx-int8'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amx-tile'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx10'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx10-128'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx10-256'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx10-512'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-bf16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-fp16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bitalg'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512ifma'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi2'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='bus-lock-detect'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='cldemote'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fbsdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrc'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrs'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fzrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='la57'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='mcdt-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='movdir64b'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='movdiri'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pbrsb-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='prefetchiti'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='psdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='sbdr-ssdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='serialize'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ss'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='taa-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='tsx-ldtrk'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xfd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='GraniteRapids-v3'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amx-bf16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amx-fp16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amx-int8'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amx-tile'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx10'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx10-128'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx10-256'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx10-512'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-bf16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-fp16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bitalg'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512ifma'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi2'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='bus-lock-detect'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='cldemote'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fbsdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrc'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrs'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fzrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='la57'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='mcdt-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='movdir64b'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='movdiri'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pbrsb-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='prefetchiti'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='psdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='sbdr-ssdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='serialize'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ss'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='taa-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='tsx-ldtrk'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xfd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Haswell'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Haswell-IBRS'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Haswell-noTSX'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Haswell-noTSX-IBRS'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Haswell-v1'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Haswell-v2'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Haswell-v3'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Haswell-v4'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Icelake-Server'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bitalg'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi2'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='la57'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Icelake-Server-noTSX'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bitalg'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi2'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='la57'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Icelake-Server-v1'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bitalg'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi2'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='la57'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Icelake-Server-v2'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bitalg'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi2'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='la57'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Icelake-Server-v3'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bitalg'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi2'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='la57'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='taa-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Icelake-Server-v4'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bitalg'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512ifma'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi2'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='la57'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='taa-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Icelake-Server-v5'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bitalg'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512ifma'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi2'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='la57'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='taa-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Icelake-Server-v6'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bitalg'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512ifma'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi2'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='la57'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='taa-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Icelake-Server-v7'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bitalg'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512ifma'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi2'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='la57'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='taa-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='IvyBridge'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='IvyBridge-IBRS'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='IvyBridge-v1'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='IvyBridge-v2'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='KnightsMill'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-4fmaps'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-4vnniw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512er'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512pf'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ss'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='KnightsMill-v1'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-4fmaps'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-4vnniw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512er'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512pf'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ss'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Opteron_G4'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fma4'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xop'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Opteron_G4-v1'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fma4'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xop'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Opteron_G5'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fma4'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='tbm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xop'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Opteron_G5-v1'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fma4'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='tbm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xop'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='SapphireRapids'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amx-bf16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amx-int8'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amx-tile'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-bf16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-fp16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bitalg'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512ifma'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi2'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='bus-lock-detect'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrc'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrs'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fzrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='la57'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='serialize'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='taa-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='tsx-ldtrk'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xfd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='SapphireRapids-v1'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amx-bf16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amx-int8'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amx-tile'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-bf16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-fp16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bitalg'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512ifma'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi2'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='bus-lock-detect'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrc'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrs'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fzrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='la57'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='serialize'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='taa-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='tsx-ldtrk'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xfd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='SapphireRapids-v2'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amx-bf16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amx-int8'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amx-tile'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-bf16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-fp16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bitalg'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512ifma'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi2'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='bus-lock-detect'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fbsdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrc'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrs'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fzrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='la57'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='psdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='sbdr-ssdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='serialize'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='taa-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='tsx-ldtrk'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xfd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='SapphireRapids-v3'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amx-bf16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amx-int8'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amx-tile'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-bf16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-fp16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bitalg'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512ifma'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi2'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='bus-lock-detect'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='cldemote'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fbsdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrc'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrs'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fzrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='la57'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='movdir64b'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='movdiri'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='psdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='sbdr-ssdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='serialize'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ss'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='taa-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='tsx-ldtrk'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xfd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='SapphireRapids-v4'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amx-bf16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amx-int8'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='amx-tile'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-bf16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-fp16'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bitalg'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512ifma'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vbmi2'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='bus-lock-detect'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='cldemote'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fbsdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrc'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrs'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fzrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='la57'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='movdir64b'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='movdiri'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='psdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='sbdr-ssdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='serialize'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ss'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='taa-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='tsx-ldtrk'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xfd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='SierraForest'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-ifma'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-ne-convert'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-vnni-int8'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='bus-lock-detect'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='cmpccxadd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fbsdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrs'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='mcdt-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pbrsb-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='psdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='sbdr-ssdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='serialize'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='SierraForest-v1'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-ifma'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-ne-convert'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-vnni-int8'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='bus-lock-detect'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='cmpccxadd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fbsdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrs'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='mcdt-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pbrsb-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='psdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='sbdr-ssdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='serialize'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='SierraForest-v2'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-ifma'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-ne-convert'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-vnni-int8'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='bhi-ctrl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='bus-lock-detect'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='cldemote'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='cmpccxadd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fbsdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrs'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='intel-psfd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ipred-ctrl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='lam'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='mcdt-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='movdir64b'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='movdiri'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pbrsb-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='psdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rrsba-ctrl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='sbdr-ssdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='serialize'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ss'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='SierraForest-v3'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-ifma'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-ne-convert'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-vnni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx-vnni-int8'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='bhi-ctrl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='bus-lock-detect'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='cldemote'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='cmpccxadd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fbsdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='fsrs'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ibrs-all'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='intel-psfd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ipred-ctrl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='lam'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='mcdt-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='movdir64b'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='movdiri'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pbrsb-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='psdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rrsba-ctrl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='sbdr-ssdp-no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='serialize'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ss'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vaes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='vpclmulqdq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Skylake-Client'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Skylake-Client-IBRS'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Skylake-Client-v1'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Skylake-Client-v2'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Skylake-Client-v3'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Skylake-Client-v4'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Skylake-Server'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Skylake-Server-IBRS'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Skylake-Server-v1'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Skylake-Server-v2'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='hle'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='rtm'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Skylake-Server-v3'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Skylake-Server-v4'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Skylake-Server-v5'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512bw'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512cd'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512dq'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512f'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='avx512vl'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='invpcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pcid'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='pku'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Snowridge'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='cldemote'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='core-capability'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='movdir64b'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='movdiri'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='mpx'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='split-lock-detect'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Snowridge-v1'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='cldemote'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='core-capability'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='movdir64b'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='movdiri'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='mpx'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='split-lock-detect'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Snowridge-v2'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='cldemote'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='core-capability'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='movdir64b'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='movdiri'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='split-lock-detect'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Snowridge-v3'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='cldemote'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='core-capability'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='movdir64b'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='movdiri'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='split-lock-detect'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='Snowridge-v4'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='cldemote'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='erms'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='gfni'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='movdir64b'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='movdiri'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='xsaves'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='athlon'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='3dnow'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='3dnowext'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='athlon-v1'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='3dnow'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='3dnowext'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='core2duo'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ss'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='core2duo-v1'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ss'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='coreduo'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ss'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='coreduo-v1'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ss'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='n270'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ss'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='n270-v1'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='ss'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='phenom'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='3dnow'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='3dnowext'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <blockers model='phenom-v1'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='3dnow'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <feature name='3dnowext'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </blockers>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    </mode>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:  </cpu>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:  <memoryBacking supported='yes'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    <enum name='sourceType'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <value>file</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <value>anonymous</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <value>memfd</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    </enum>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:  </memoryBacking>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:  <devices>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    <disk supported='yes'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <enum name='diskDevice'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>disk</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>cdrom</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>floppy</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>lun</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <enum name='bus'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>fdc</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>scsi</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>virtio</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>usb</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>sata</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <enum name='model'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>virtio</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>virtio-transitional</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>virtio-non-transitional</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    </disk>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    <graphics supported='yes'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <enum name='type'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>vnc</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>egl-headless</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>dbus</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    </graphics>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    <video supported='yes'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <enum name='modelType'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>vga</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>cirrus</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>virtio</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>none</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>bochs</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>ramfb</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    </video>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    <hostdev supported='yes'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <enum name='mode'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>subsystem</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <enum name='startupPolicy'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>default</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>mandatory</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>requisite</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>optional</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <enum name='subsysType'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>usb</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>pci</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>scsi</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <enum name='capsType'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <enum name='pciBackend'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    </hostdev>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    <rng supported='yes'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <enum name='model'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>virtio</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>virtio-transitional</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>virtio-non-transitional</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <enum name='backendModel'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>random</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>egd</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>builtin</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    </rng>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    <filesystem supported='yes'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <enum name='driverType'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>path</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>handle</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>virtiofs</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    </filesystem>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    <tpm supported='yes'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <enum name='model'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>tpm-tis</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>tpm-crb</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <enum name='backendModel'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>emulator</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>external</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <enum name='backendVersion'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>2.0</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    </tpm>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    <redirdev supported='yes'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <enum name='bus'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>usb</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    </redirdev>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    <channel supported='yes'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <enum name='type'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>pty</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>unix</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    </channel>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    <crypto supported='yes'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <enum name='model'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <enum name='type'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>qemu</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <enum name='backendModel'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>builtin</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    </crypto>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    <interface supported='yes'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <enum name='backendType'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>default</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>passt</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    </interface>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    <panic supported='yes'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <enum name='model'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>isa</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>hyperv</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    </panic>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    <console supported='yes'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <enum name='type'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>null</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>vc</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>pty</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>dev</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>file</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>pipe</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>stdio</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>udp</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>tcp</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>unix</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>qemu-vdagent</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>dbus</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    </console>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:  </devices>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:  <features>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    <gic supported='no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    <vmcoreinfo supported='yes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    <genid supported='yes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    <backingStoreInput supported='yes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    <backup supported='yes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    <async-teardown supported='yes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    <s390-pv supported='no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    <ps2 supported='yes'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    <tdx supported='no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    <sev supported='no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    <sgx supported='no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    <hyperv supported='yes'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <enum name='features'>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>relaxed</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>vapic</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>spinlocks</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>vpindex</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>runtime</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>synic</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>stimer</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>reset</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>vendor_id</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>frequencies</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>reenlightenment</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>tlbflush</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>ipi</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>avic</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>emsr_bitmap</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <value>xmm_input</value>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </enum>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      <defaults>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <spinlocks>4095</spinlocks>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <stimer_direct>on</stimer_direct>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <tlbflush_direct>on</tlbflush_direct>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <tlbflush_extended>on</tlbflush_extended>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:        <vendor_id>Linux KVM Hv</vendor_id>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:      </defaults>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    </hyperv>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:    <launchSecurity supported='no'/>
Feb  2 04:57:57 np0005604790 nova_compute[251716]:  </features>
Feb  2 04:57:57 np0005604790 nova_compute[251716]: </domainCapabilities>
Feb  2 04:57:57 np0005604790 nova_compute[251716]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
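The <domainCapabilities> document that ends above is what nova's libvirt driver caches per host to decide which devices and features it may use. For reference, the same XML can be fetched outside nova with the libvirt Python bindings; a minimal sketch, assuming libvirt-python is installed and virtqemud answers on qemu:///system:

    import xml.etree.ElementTree as ET
    import libvirt  # libvirt-python, the same bindings nova's driver uses

    # Connect to the local libvirt daemon, as nova-compute does.
    conn = libvirt.open('qemu:///system')

    # Counterpart of the _get_domain_capabilities() call logged above;
    # None lets libvirt pick the default emulator and machine type.
    caps_xml = conn.getDomainCapabilities(None, 'x86_64', None, 'kvm', 0)
    root = ET.fromstring(caps_xml)

    # Inspect the <tpm> block, mirroring <tpm supported='yes'> above.
    tpm = root.find('./devices/tpm')
    print('tpm supported:', tpm.get('supported'))
    print('tpm models:',
          [v.text for v in tpm.findall("./enum[@name='model']/value")])
    conn.close()

The supports_secure_boot checks that follow consult the same cached capabilities document.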
Feb  2 04:57:57 np0005604790 nova_compute[251716]: 2026-02-02 09:57:57.159 251722 DEBUG nova.virt.libvirt.host [None req-b613abe0-5893-4762-aca0-a82d1e952914 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Feb  2 04:57:57 np0005604790 nova_compute[251716]: 2026-02-02 09:57:57.160 251722 DEBUG nova.virt.libvirt.host [None req-b613abe0-5893-4762-aca0-a82d1e952914 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Feb  2 04:57:57 np0005604790 nova_compute[251716]: 2026-02-02 09:57:57.160 251722 DEBUG nova.virt.libvirt.host [None req-b613abe0-5893-4762-aca0-a82d1e952914 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Feb  2 04:57:57 np0005604790 nova_compute[251716]: 2026-02-02 09:57:57.160 251722 INFO nova.virt.libvirt.host [None req-b613abe0-5893-4762-aca0-a82d1e952914 - - - - - -] Secure Boot support detected
Feb  2 04:57:57 np0005604790 nova_compute[251716]: 2026-02-02 09:57:57.163 251722 INFO nova.virt.libvirt.driver [None req-b613abe0-5893-4762-aca0-a82d1e952914 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Feb  2 04:57:57 np0005604790 nova_compute[251716]: 2026-02-02 09:57:57.164 251722 INFO nova.virt.libvirt.driver [None req-b613abe0-5893-4762-aca0-a82d1e952914 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Feb  2 04:57:57 np0005604790 nova_compute[251716]: 2026-02-02 09:57:57.173 251722 DEBUG oslo_concurrency.lockutils [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb  2 04:57:57 np0005604790 nova_compute[251716]: 2026-02-02 09:57:57.173 251722 DEBUG oslo_concurrency.lockutils [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb  2 04:57:57 np0005604790 nova_compute[251716]: 2026-02-02 09:57:57.174 251722 DEBUG oslo_concurrency.lockutils [None req-bdc547bf-d7c7-4bd4-aa35-ce189df30b13 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
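The Acquiring/Acquired/Releasing triplet above is oslo.concurrency's standard DEBUG output for a named lock; here it guards singleton setup inside oslo.service. A minimal sketch of the pattern that produces exactly these three lines:

    from oslo_concurrency import lockutils

    # lockutils.lock() logs "Acquiring"/"Acquired" on entry and
    # "Releasing" on exit, matching the lines seen above.
    with lockutils.lock('singleton_lock'):
        pass  # critical section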
Feb  2 04:57:57 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:57:57 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:57:57 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f209c004410 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:57:57 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:57:57 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:57:57.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:57:57 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:57:57 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2098003150 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:57:57 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:57:57 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:57:57 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:57:57.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:57:57 np0005604790 virtqemud[252362]: libvirt version: 11.10.0, package: 2.el9 (builder@centos.org, 2025-12-18-15:09:54, )
Feb  2 04:57:57 np0005604790 virtqemud[252362]: hostname: compute-0
Feb  2 04:57:57 np0005604790 virtqemud[252362]: End of file while reading data: Input/output error
Feb  2 04:57:57 np0005604790 systemd[1]: libpod-46ea9bbd396be6c9cf42e800f6ecffbc7ecf6b0a7dd6f731e2744db95bbe8ea3.scope: Deactivated successfully.
Feb  2 04:57:57 np0005604790 systemd[1]: libpod-46ea9bbd396be6c9cf42e800f6ecffbc7ecf6b0a7dd6f731e2744db95bbe8ea3.scope: Consumed 3.315s CPU time.
Feb  2 04:57:57 np0005604790 podman[252611]: 2026-02-02 09:57:57.737764186 +0000 UTC m=+0.621523416 container died 46ea9bbd396be6c9cf42e800f6ecffbc7ecf6b0a7dd6f731e2744db95bbe8ea3 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:f96bd21c79ae0d7e8e17010c5e2573637d6c0f47f03e63134c477edd8ad73d83, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:f96bd21c79ae0d7e8e17010c5e2573637d6c0f47f03e63134c477edd8ad73d83', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Feb  2 04:57:58 np0005604790 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-46ea9bbd396be6c9cf42e800f6ecffbc7ecf6b0a7dd6f731e2744db95bbe8ea3-userdata-shm.mount: Deactivated successfully.
Feb  2 04:57:58 np0005604790 systemd[1]: var-lib-containers-storage-overlay-2ccbbff02638865fc0d6f0d473aea6c5e5e0b4b6746c9741adf356a029a3ea69-merged.mount: Deactivated successfully.
Feb  2 04:57:58 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v549: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:57:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:57:58 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2078003490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:57:58 np0005604790 podman[252611]: 2026-02-02 09:57:58.808716223 +0000 UTC m=+1.692475463 container cleanup 46ea9bbd396be6c9cf42e800f6ecffbc7ecf6b0a7dd6f731e2744db95bbe8ea3 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:f96bd21c79ae0d7e8e17010c5e2573637d6c0f47f03e63134c477edd8ad73d83, name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:f96bd21c79ae0d7e8e17010c5e2573637d6c0f47f03e63134c477edd8ad73d83', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm)
Feb  2 04:57:58 np0005604790 podman[252611]: nova_compute
Feb  2 04:57:58 np0005604790 podman[252644]: nova_compute
Feb  2 04:57:58 np0005604790 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Feb  2 04:57:58 np0005604790 systemd[1]: Stopped nova_compute container.
Feb  2 04:57:58 np0005604790 systemd[1]: Starting nova_compute container...
Feb  2 04:57:59 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:57:59 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ccbbff02638865fc0d6f0d473aea6c5e5e0b4b6746c9741adf356a029a3ea69/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Feb  2 04:57:59 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ccbbff02638865fc0d6f0d473aea6c5e5e0b4b6746c9741adf356a029a3ea69/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Feb  2 04:57:59 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ccbbff02638865fc0d6f0d473aea6c5e5e0b4b6746c9741adf356a029a3ea69/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Feb  2 04:57:59 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ccbbff02638865fc0d6f0d473aea6c5e5e0b4b6746c9741adf356a029a3ea69/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Feb  2 04:57:59 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ccbbff02638865fc0d6f0d473aea6c5e5e0b4b6746c9741adf356a029a3ea69/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Feb  2 04:57:59 np0005604790 podman[252657]: 2026-02-02 09:57:59.082202552 +0000 UTC m=+0.176921061 container init 46ea9bbd396be6c9cf42e800f6ecffbc7ecf6b0a7dd6f731e2744db95bbe8ea3 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:f96bd21c79ae0d7e8e17010c5e2573637d6c0f47f03e63134c477edd8ad73d83, name=nova_compute, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:f96bd21c79ae0d7e8e17010c5e2573637d6c0f47f03e63134c477edd8ad73d83', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Feb  2 04:57:59 np0005604790 podman[252657]: 2026-02-02 09:57:59.088284113 +0000 UTC m=+0.183002582 container start 46ea9bbd396be6c9cf42e800f6ecffbc7ecf6b0a7dd6f731e2744db95bbe8ea3 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:f96bd21c79ae0d7e8e17010c5e2573637d6c0f47f03e63134c477edd8ad73d83, name=nova_compute, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=edpm, container_name=nova_compute, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:f96bd21c79ae0d7e8e17010c5e2573637d6c0f47f03e63134c477edd8ad73d83', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team)
Feb  2 04:57:59 np0005604790 nova_compute[252672]: + sudo -E kolla_set_configs
Feb  2 04:57:59 np0005604790 podman[252657]: nova_compute
Feb  2 04:57:59 np0005604790 systemd[1]: Started nova_compute container.
Feb  2 04:57:59 np0005604790 nova_compute[252672]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Feb  2 04:57:59 np0005604790 nova_compute[252672]: INFO:__main__:Validating config file
Feb  2 04:57:59 np0005604790 nova_compute[252672]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Feb  2 04:57:59 np0005604790 nova_compute[252672]: INFO:__main__:Copying service configuration files
Feb  2 04:57:59 np0005604790 nova_compute[252672]: INFO:__main__:Deleting /etc/nova/nova.conf
Feb  2 04:57:59 np0005604790 nova_compute[252672]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Feb  2 04:57:59 np0005604790 nova_compute[252672]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Feb  2 04:57:59 np0005604790 nova_compute[252672]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Feb  2 04:57:59 np0005604790 nova_compute[252672]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Feb  2 04:57:59 np0005604790 nova_compute[252672]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Feb  2 04:57:59 np0005604790 nova_compute[252672]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Feb  2 04:57:59 np0005604790 nova_compute[252672]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Feb  2 04:57:59 np0005604790 nova_compute[252672]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Feb  2 04:57:59 np0005604790 nova_compute[252672]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Feb  2 04:57:59 np0005604790 nova_compute[252672]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Feb  2 04:57:59 np0005604790 nova_compute[252672]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Feb  2 04:57:59 np0005604790 nova_compute[252672]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Feb  2 04:57:59 np0005604790 nova_compute[252672]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Feb  2 04:57:59 np0005604790 nova_compute[252672]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Feb  2 04:57:59 np0005604790 nova_compute[252672]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Feb  2 04:57:59 np0005604790 nova_compute[252672]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Feb  2 04:57:59 np0005604790 nova_compute[252672]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Feb  2 04:57:59 np0005604790 nova_compute[252672]: INFO:__main__:Deleting /etc/ceph
Feb  2 04:57:59 np0005604790 nova_compute[252672]: INFO:__main__:Creating directory /etc/ceph
Feb  2 04:57:59 np0005604790 nova_compute[252672]: INFO:__main__:Setting permission for /etc/ceph
Feb  2 04:57:59 np0005604790 nova_compute[252672]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Feb  2 04:57:59 np0005604790 nova_compute[252672]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Feb  2 04:57:59 np0005604790 nova_compute[252672]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Feb  2 04:57:59 np0005604790 nova_compute[252672]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Feb  2 04:57:59 np0005604790 nova_compute[252672]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Feb  2 04:57:59 np0005604790 nova_compute[252672]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Feb  2 04:57:59 np0005604790 nova_compute[252672]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Feb  2 04:57:59 np0005604790 nova_compute[252672]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Feb  2 04:57:59 np0005604790 nova_compute[252672]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Feb  2 04:57:59 np0005604790 nova_compute[252672]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Feb  2 04:57:59 np0005604790 nova_compute[252672]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Feb  2 04:57:59 np0005604790 nova_compute[252672]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Feb  2 04:57:59 np0005604790 nova_compute[252672]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Feb  2 04:57:59 np0005604790 nova_compute[252672]: INFO:__main__:Writing out command to execute
Feb  2 04:57:59 np0005604790 nova_compute[252672]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Feb  2 04:57:59 np0005604790 nova_compute[252672]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Feb  2 04:57:59 np0005604790 nova_compute[252672]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Feb  2 04:57:59 np0005604790 nova_compute[252672]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Feb  2 04:57:59 np0005604790 nova_compute[252672]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Feb  2 04:57:59 np0005604790 nova_compute[252672]: ++ cat /run_command
Feb  2 04:57:59 np0005604790 nova_compute[252672]: + CMD=nova-compute
Feb  2 04:57:59 np0005604790 nova_compute[252672]: + ARGS=
Feb  2 04:57:59 np0005604790 nova_compute[252672]: + sudo kolla_copy_cacerts
Feb  2 04:57:59 np0005604790 nova_compute[252672]: + [[ ! -n '' ]]
Feb  2 04:57:59 np0005604790 nova_compute[252672]: + . kolla_extend_start
Feb  2 04:57:59 np0005604790 nova_compute[252672]: Running command: 'nova-compute'
Feb  2 04:57:59 np0005604790 nova_compute[252672]: + echo 'Running command: '\''nova-compute'\'''
Feb  2 04:57:59 np0005604790 nova_compute[252672]: + umask 0022
Feb  2 04:57:59 np0005604790 nova_compute[252672]: + exec nova-compute
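The trace above is kolla's standard container start: kolla_set_configs copies everything listed in /var/lib/kolla/config_files/config.json into place, then the wrapper execs the command stored in /run_command. A simplified, hypothetical sketch of the COPY_ALWAYS pass behind the INFO:__main__ lines (the real tool also applies the owner and perm fields that produce the "Setting permission for" messages, and handles directories and optional sources):

    import json
    import os
    import shutil

    with open('/var/lib/kolla/config_files/config.json') as f:
        cfg = json.load(f)

    # Delete-then-copy each managed file, as the log shows.
    for item in cfg.get('config_files', []):
        src, dest = item['source'], item['dest']
        if os.path.isfile(dest):
            print('Deleting %s' % dest)
            os.remove(dest)
        print('Copying %s to %s' % (src, dest))
        shutil.copy(src, dest)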
Feb  2 04:57:59 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:57:59 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:57:59 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:57:59.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:57:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:57:59 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f20a0002200 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:57:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:57:59 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f20a0002200 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:57:59 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:57:59 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 04:57:59 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:57:59.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 04:58:00 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v550: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 04:58:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:58:00 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f20a8009330 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:58:00 np0005604790 python3.9[252839]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Feb  2 04:58:00 np0005604790 systemd[1]: Started libpod-conmon-b5c2541b9463a2fa7ac5d2d1360f4b29001380a7fae5cc7984002bd2cc43ba26.scope.
Feb  2 04:58:00 np0005604790 systemd[1]: Started libcrun container.
Feb  2 04:58:00 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be741ff950d4db5803a147cee27d848111930e48730ccd4df7cc5557a49f7854/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Feb  2 04:58:00 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be741ff950d4db5803a147cee27d848111930e48730ccd4df7cc5557a49f7854/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Feb  2 04:58:00 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be741ff950d4db5803a147cee27d848111930e48730ccd4df7cc5557a49f7854/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Feb  2 04:58:01 np0005604790 podman[252864]: 2026-02-02 09:58:01.012335565 +0000 UTC m=+0.139664023 container init b5c2541b9463a2fa7ac5d2d1360f4b29001380a7fae5cc7984002bd2cc43ba26 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:f96bd21c79ae0d7e8e17010c5e2573637d6c0f47f03e63134c477edd8ad73d83, name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:f96bd21c79ae0d7e8e17010c5e2573637d6c0f47f03e63134c477edd8ad73d83', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Feb  2 04:58:01 np0005604790 podman[252864]: 2026-02-02 09:58:01.019908596 +0000 UTC m=+0.147237014 container start b5c2541b9463a2fa7ac5d2d1360f4b29001380a7fae5cc7984002bd2cc43ba26 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:f96bd21c79ae0d7e8e17010c5e2573637d6c0f47f03e63134c477edd8ad73d83, name=nova_compute_init, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=nova_compute_init, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:f96bd21c79ae0d7e8e17010c5e2573637d6c0f47f03e63134c477edd8ad73d83', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=edpm)
Feb  2 04:58:01 np0005604790 python3.9[252839]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.069 252676 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.072 252676 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.072 252676 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.072 252676 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
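The plugin lines above come from os_vif.initialize(), which uses stevedore to load every VIF plugin registered under the os_vif entry-point namespace; the summary INFO line lists whatever was found (here linux_bridge, noop and ovs). The call itself is a one-liner:

    import os_vif

    # Scans the 'os_vif' entry points and registers each plugin,
    # emitting the "Loaded VIF plugin class ..." DEBUG lines above.
    os_vif.initialize()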
Feb  2 04:58:01 np0005604790 nova_compute_init[252889]: INFO:nova_statedir:Applying nova statedir ownership
Feb  2 04:58:01 np0005604790 nova_compute_init[252889]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Feb  2 04:58:01 np0005604790 nova_compute_init[252889]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Feb  2 04:58:01 np0005604790 nova_compute_init[252889]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Feb  2 04:58:01 np0005604790 nova_compute_init[252889]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Feb  2 04:58:01 np0005604790 nova_compute_init[252889]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Feb  2 04:58:01 np0005604790 nova_compute_init[252889]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Feb  2 04:58:01 np0005604790 nova_compute_init[252889]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Feb  2 04:58:01 np0005604790 nova_compute_init[252889]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Feb  2 04:58:01 np0005604790 nova_compute_init[252889]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Feb  2 04:58:01 np0005604790 nova_compute_init[252889]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Feb  2 04:58:01 np0005604790 nova_compute_init[252889]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Feb  2 04:58:01 np0005604790 nova_compute_init[252889]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Feb  2 04:58:01 np0005604790 nova_compute_init[252889]: INFO:nova_statedir:Nova statedir ownership complete
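The nova_compute_init pass above re-owns the nova state directory for the container's nova user before nova-compute starts. A rough sketch inferred from the messages (42436:42436 and the skip path come from the container config logged earlier; the real nova_statedir_ownership.py also restores the SELinux context system_u:object_r:container_file_t:s0 that it logs):

    import os

    TARGET_UID = TARGET_GID = 42436    # kolla 'nova' uid/gid
    SKIP = '/var/lib/nova/compute_id'  # NOVA_STATEDIR_OWNERSHIP_SKIP

    for dirpath, dirnames, filenames in os.walk('/var/lib/nova'):
        for path in [dirpath] + [os.path.join(dirpath, f) for f in filenames]:
            if path == SKIP:
                continue
            st = os.lstat(path)
            print('Checking uid: %d gid: %d path: %s'
                  % (st.st_uid, st.st_gid, path))
            if (st.st_uid, st.st_gid) != (TARGET_UID, TARGET_GID):
                os.lchown(path, TARGET_UID, TARGET_GID)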
Feb  2 04:58:01 np0005604790 systemd[1]: libpod-b5c2541b9463a2fa7ac5d2d1360f4b29001380a7fae5cc7984002bd2cc43ba26.scope: Deactivated successfully.
Feb  2 04:58:01 np0005604790 podman[252904]: 2026-02-02 09:58:01.15360302 +0000 UTC m=+0.035534503 container died b5c2541b9463a2fa7ac5d2d1360f4b29001380a7fae5cc7984002bd2cc43ba26 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:f96bd21c79ae0d7e8e17010c5e2573637d6c0f47f03e63134c477edd8ad73d83, name=nova_compute_init, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:f96bd21c79ae0d7e8e17010c5e2573637d6c0f47f03e63134c477edd8ad73d83', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=edpm)
Feb  2 04:58:01 np0005604790 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b5c2541b9463a2fa7ac5d2d1360f4b29001380a7fae5cc7984002bd2cc43ba26-userdata-shm.mount: Deactivated successfully.
Feb  2 04:58:01 np0005604790 systemd[1]: var-lib-containers-storage-overlay-be741ff950d4db5803a147cee27d848111930e48730ccd4df7cc5557a49f7854-merged.mount: Deactivated successfully.
Feb  2 04:58:01 np0005604790 podman[252904]: 2026-02-02 09:58:01.20414153 +0000 UTC m=+0.086072993 container cleanup b5c2541b9463a2fa7ac5d2d1360f4b29001380a7fae5cc7984002bd2cc43ba26 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:f96bd21c79ae0d7e8e17010c5e2573637d6c0f47f03e63134c477edd8ad73d83, name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:f96bd21c79ae0d7e8e17010c5e2573637d6c0f47f03e63134c477edd8ad73d83', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Feb  2 04:58:01 np0005604790 systemd[1]: libpod-conmon-b5c2541b9463a2fa7ac5d2d1360f4b29001380a7fae5cc7984002bd2cc43ba26.scope: Deactivated successfully.
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.258 252676 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.273 252676 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.273 252676 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
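The grep above is a capability probe: the volume code greps the iscsiadm binary for the node.session.scan option string, and exit status 1 (no match) means the installed iscsiadm has no manual-scan support; the "failed. Not Retrying." line is processutils reporting the non-zero exit, which the caller catches. A hedged sketch of the same probe (check_exit_code=False is a simplification of the raise-and-catch seen in the log):

    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'grep', '-F', 'node.session.scan', '/sbin/iscsiadm',
        check_exit_code=False)
    supports_manual_scan = bool(out)  # empty output == grep exit 1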
Feb  2 04:58:01 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:58:01 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 04:58:01 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:09:58:01.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 04:58:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:58:01 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2078003490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:58:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 09:58:01 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2098003150 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 04:58:01 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 04:58:01 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 04:58:01 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:09:58:01.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 04:58:01 np0005604790 systemd[1]: session-54.scope: Deactivated successfully.
Feb  2 04:58:01 np0005604790 systemd[1]: session-54.scope: Consumed 2min 78ms CPU time.
Feb  2 04:58:01 np0005604790 systemd-logind[793]: Session 54 logged out. Waiting for processes to exit.
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.758 252676 INFO nova.virt.driver [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Feb  2 04:58:01 np0005604790 systemd-logind[793]: Removed session 54.
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.896 252676 INFO nova.compute.provider_config [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.918 252676 DEBUG oslo_concurrency.lockutils [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.919 252676 DEBUG oslo_concurrency.lockutils [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.919 252676 DEBUG oslo_concurrency.lockutils [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
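Everything below, from "Full set of CONF" onward, is oslo.config dumping every registered option because debug is enabled: a row of asterisks, the config sources, a row of equals signs, then one "name = value" line per option. The dump comes from ConfigOpts.log_opt_values(); a minimal reproduction, with illustrative option names:

    import logging

    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    CONF = cfg.CONF
    CONF.register_opts([
        cfg.BoolOpt('debug', default=True),
        cfg.IntOpt('live_migration_retry_count', default=30),
    ])

    # Prints the same banner-and-options shape seen below.
    CONF.log_opt_values(logging.getLogger(__name__), logging.DEBUG)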
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.919 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.920 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.920 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.920 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.920 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.920 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.921 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.921 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.921 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.921 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.921 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.922 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.922 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.922 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.922 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.922 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.923 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.923 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.923 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.923 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.923 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.924 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.924 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.924 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.924 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.924 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.925 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.925 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.925 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.925 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.925 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.926 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.926 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.926 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.926 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.927 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.927 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.927 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.927 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.927 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.928 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.928 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.928 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.928 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.929 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.929 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.929 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.929 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.929 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.930 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.930 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.930 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.930 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.931 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.931 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.931 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.931 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.931 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.932 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.932 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.932 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.932 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.932 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.932 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.933 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.933 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.933 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.933 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.933 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.934 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.934 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.934 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.934 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.934 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.935 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.935 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.935 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.935 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.935 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.936 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.936 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.936 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.936 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.936 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.937 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.937 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.937 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.937 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.937 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.938 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.938 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.938 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.938 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.938 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.939 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.939 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.939 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.939 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.939 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.939 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.940 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.940 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.940 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.940 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.940 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.941 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.941 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.941 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.941 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.941 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.942 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.942 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.942 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.942 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.942 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.943 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.943 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.943 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.944 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.944 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.944 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.944 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.944 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.944 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.945 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.945 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.945 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.945 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.945 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.946 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.946 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.946 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.946 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.946 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.947 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.947 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.947 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.947 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.947 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.948 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.948 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.948 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.948 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.948 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.949 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.949 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.949 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.949 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.949 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.950 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.950 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.950 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.950 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.950 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.950 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.951 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.951 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.951 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.951 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.951 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.951 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.951 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.952 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.952 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.952 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.952 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.952 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.952 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.952 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.953 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.953 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.953 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.953 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.953 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.953 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.953 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.954 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.954 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.954 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.954 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.954 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.954 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.954 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.955 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.955 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.955 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.955 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.955 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.955 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.955 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.955 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.956 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.956 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.956 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.956 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.956 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.956 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.956 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.957 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.957 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.957 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.957 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.957 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.957 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.957 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.958 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.958 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.958 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.958 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.958 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.958 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.958 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.958 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.959 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.959 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.959 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.959 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.959 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.959 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.959 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.960 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.960 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.960 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.960 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.960 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.960 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.960 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.961 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.961 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.961 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.961 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.961 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.961 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.961 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.961 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.962 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.962 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.962 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.962 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.962 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.962 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.962 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.963 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.963 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.963 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.963 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.963 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.963 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.963 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.964 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.964 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.964 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.964 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.964 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.964 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.964 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.964 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.965 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.965 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.965 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.965 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.965 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.965 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.965 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.966 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.966 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.966 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.966 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.966 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.966 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.966 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.967 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.967 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.967 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.967 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.967 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.967 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.967 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.968 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.968 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.968 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.968 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.968 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.968 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.968 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.969 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.969 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.969 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.969 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.969 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.969 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.969 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.969 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.970 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.970 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.970 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.970 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.970 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.970 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.971 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.971 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.971 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.971 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.971 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.971 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.971 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.971 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.972 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.972 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.972 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.972 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.972 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.972 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.972 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.973 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.973 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.973 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.973 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.973 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.973 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.973 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.973 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.974 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.974 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.974 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.974 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.974 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.974 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.974 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.974 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.975 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.975 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.975 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.975 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.975 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.975 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.975 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.976 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.976 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.976 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.976 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.976 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.976 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.976 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.976 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.977 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.977 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.977 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.977 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.977 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.977 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.977 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.977 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.978 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.978 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.978 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.978 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.978 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.978 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.979 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.979 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.979 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.979 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.979 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.979 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.979 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.979 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.980 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.980 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.980 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.980 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.980 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.980 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.980 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.980 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.981 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.981 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.981 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.981 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.981 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.981 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.981 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.981 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.982 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.982 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.982 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.982 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.982 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.982 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.982 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.983 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.983 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.983 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.983 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.983 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.983 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.983 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.983 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.984 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.984 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.984 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.984 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.984 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.984 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.984 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.985 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.985 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.985 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.985 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.985 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.985 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.985 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.985 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.986 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.986 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.986 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.986 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.986 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.986 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.986 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.986 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.987 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.987 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.987 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.987 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.987 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.987 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.987 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.987 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.988 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.988 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.988 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.988 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.988 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.988 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.988 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.988 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.989 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.989 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.989 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.989 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.989 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.989 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.989 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.990 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.990 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.990 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.990 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.990 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.990 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.990 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.991 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.991 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.991 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.991 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.991 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.991 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.991 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.992 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.992 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.992 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.992 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.992 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.992 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.992 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.993 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.993 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.993 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.993 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.993 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.993 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.993 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.994 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.994 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.994 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.994 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.994 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.994 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.994 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.994 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.995 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.995 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.995 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.995 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.995 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.995 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.995 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.996 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.996 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.996 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.996 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.996 252676 WARNING oslo_config.cfg [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Feb  2 04:58:01 np0005604790 nova_compute[252672]: live_migration_uri is deprecated for removal in favor of two other options that
Feb  2 04:58:01 np0005604790 nova_compute[252672]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Feb  2 04:58:01 np0005604790 nova_compute[252672]: and ``live_migration_inbound_addr`` respectively.
Feb  2 04:58:01 np0005604790 nova_compute[252672]: ).  Its value may be silently ignored in the future.#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.996 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
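
The WARNING interleaved above is oslo.config's standard deprecation machinery, triggered because the deployment still sets libvirt.live_migration_uri in nova.conf. A sketch of the kind of declaration that produces such a warning (the reason text here is illustrative, not nova's exact wording), plus the way the %s placeholder in the logged value is consumed:

    from oslo_config import cfg

    CONF = cfg.CONF
    CONF.register_group(cfg.OptGroup('libvirt'))
    CONF.register_opts(
        [
            cfg.StrOpt(
                'live_migration_uri',
                default='qemu+tls://%s/system',
                # Supplying this option in a config file logs:
                # Deprecated: Option "live_migration_uri" from group
                # "libvirt" is deprecated for removal (...).
                deprecated_for_removal=True,
                deprecated_reason='Use live_migration_scheme and '
                                  'live_migration_inbound_addr instead.',
            ),
        ],
        group='libvirt',
    )
    CONF([], project='demo')

    # Nova substitutes the migration target host into the %s placeholder,
    # roughly like this (the peer hostname is hypothetical):
    dest = 'np0005604791.example.com'
    uri = CONF.libvirt.live_migration_uri % dest   # qemu+tls://np0005604791.example.com/system
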
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.997 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.997 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.997 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.997 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.997 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.997 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.997 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.998 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.998 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.998 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.998 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.998 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.998 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.998 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.999 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.999 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.999 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.999 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] libvirt.rbd_secret_uuid        = d241d473-9fcb-5f74-b163-f1ca4454e7f1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.999 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:01 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.999 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:01.999 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.000 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.000 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.000 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.000 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.000 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.000 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.000 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.001 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.001 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.001 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.001 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.001 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.001 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.001 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.002 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.002 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.002 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.002 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.002 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.002 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.002 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.003 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.003 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.003 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.003 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.003 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.003 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.003 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.004 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.004 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
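
The [libvirt] values above say instance disks live in Ceph (images_type = rbd, pool vms, user openstack, /etc/ceph/ceph.conf). A hypothetical probe of that backend with the python-rados/python-rbd bindings, using only values visible in this dump; whether those bindings are present on this node is an assumption:

    import rados
    import rbd

    # conffile and rados_id mirror libvirt.images_rbd_ceph_conf and
    # libvirt.rbd_user above.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect(timeout=5)              # libvirt.rbd_connect_timeout = 5
    try:
        ioctx = cluster.open_ioctx('vms')   # libvirt.images_rbd_pool
        try:
            print(rbd.RBD().list(ioctx))    # names of instance disk images
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()
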
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.004 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.004 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.004 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.004 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.005 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.005 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.005 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.005 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.005 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.005 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.005 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.006 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.006 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.006 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.006 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.006 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.006 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.006 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.006 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.007 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.007 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.007 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.007 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.007 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.007 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.007 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.008 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.008 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
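
neutron.service_metadata_proxy = True means the neutron metadata proxy signs each request it forwards and nova-metadata verifies that signature before trusting the instance ID; the key is the metadata_proxy_shared_secret masked as **** above. A sketch of the signature scheme, with placeholder values standing in for the masked secret and a real instance UUID:

    import hashlib
    import hmac

    shared_secret = 'not-the-real-secret'   # masked as **** in the log
    instance_id = '3f2a9c6e-0000-0000-0000-000000000000'  # hypothetical UUID

    # The proxy sends this as X-Instance-ID-Signature; nova recomputes it
    # and compares with hmac.compare_digest() before trusting X-Instance-ID.
    signature = hmac.new(shared_secret.encode(),
                         instance_id.encode(),
                         hashlib.sha256).hexdigest()
    print(signature)
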
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.008 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.008 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.008 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.008 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.008 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.009 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.009 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.009 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.009 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.009 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.009 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.009 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.010 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.010 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.010 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.010 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.010 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.010 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.010 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.010 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.011 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.011 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.011 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.011 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.011 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.011 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.011 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.012 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.012 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.012 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.012 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.012 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.012 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.012 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.012 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.013 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.013 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.013 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.013 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.013 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.013 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.013 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.014 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.014 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.014 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
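
The [placement] block above is a complete keystoneauth password-auth profile (internal endpoint of regionOne, service project, user nova); only the password is masked. A roughly equivalent session built by hand with keystoneauth1, a placeholder standing in for the masked password:

    from keystoneauth1 import adapter, session
    from keystoneauth1.identity import v3

    auth = v3.Password(
        auth_url='https://keystone-internal.openstack.svc:5000',
        username='nova',
        password='REDACTED',          # logged as **** above
        project_name='service',
        user_domain_name='Default',
        project_domain_name='Default',
    )
    sess = session.Session(auth=auth)
    placement = adapter.Adapter(
        session=sess,
        service_type='placement',
        region_name='regionOne',
        interface='internal',         # placement.valid_interfaces
    )
    resp = placement.get('/resource_providers')   # e.g. list resource providers
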
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.014 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.014 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.015 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.015 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.015 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.015 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.015 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.015 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.016 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.016 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.016 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.016 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.016 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
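
A worked check against the quota values above: ten instances of a hypothetical 2-vCPU / 5120 MB flavor would exhaust the instances, cores, and ram limits simultaneously.

    # Quota limits taken from this dump; the flavor is hypothetical.
    quota = {'instances': 10, 'cores': 20, 'ram': 51200}
    flavor = {'vcpus': 2, 'ram': 5120}
    n = min(quota['instances'],
            quota['cores'] // flavor['vcpus'],
            quota['ram'] // flavor['ram'])
    print(n)   # -> 10
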
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.016 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.016 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.017 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.017 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.017 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.017 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.017 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.017 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.017 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.018 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.018 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.018 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.018 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.018 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.018 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.018 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.019 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.019 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.019 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.019 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.019 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.019 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.019 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.020 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.020 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.020 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.020 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.020 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.020 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.020 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.021 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.021 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.021 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.021 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.021 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.021 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.021 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
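
The filter_scheduler.* values above are the effective merge of nova's built-in defaults with the [filter_scheduler] section of its config files. A hedged round-trip sketch: the logged enabled_filters list and host_subset_size written out as a config file and read back through oslo.config; the temporary file and the two option registrations are illustrative stand-ins for /etc/nova/nova.conf and nova's own option definitions:

import tempfile
from oslo_config import cfg

# The list logged above, in the comma-separated form a config file uses.
SAMPLE_CONF = """\
[filter_scheduler]
enabled_filters = ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter
host_subset_size = 1
"""

conf = cfg.ConfigOpts()
conf.register_opts(
    [cfg.ListOpt('enabled_filters'),
     cfg.IntOpt('host_subset_size')],
    group='filter_scheduler')

with tempfile.NamedTemporaryFile('w', suffix='.conf') as f:
    f.write(SAMPLE_CONF)
    f.flush()
    conf([], default_config_files=[f.name])

assert conf.filter_scheduler.host_subset_size == 1
assert conf.filter_scheduler.enabled_filters[0] == 'ComputeFilter'
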
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.021 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.022 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.022 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.022 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.022 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.022 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.022 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.023 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.023 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.023 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.023 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.023 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.023 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.023 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.023 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.024 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.024 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.024 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.024 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.024 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.024 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.024 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.025 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.025 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.025 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.025 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.025 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.025 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.026 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.026 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.026 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.026 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.026 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.026 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.026 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.026 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.027 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.027 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.027 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.027 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.027 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.027 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.027 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.028 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.028 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.028 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.028 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.028 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.028 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.028 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.028 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.029 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.029 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.029 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.029 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.029 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.029 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.029 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.030 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.030 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.030 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.030 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.030 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.030 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.030 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.031 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.031 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.031 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.031 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.031 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.031 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.031 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.032 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.032 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.032 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.032 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.032 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.032 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.033 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.033 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.033 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.033 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.033 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.033 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.033 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.034 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.034 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.034 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.034 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.034 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.034 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.035 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.035 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.035 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.035 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.035 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.035 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.035 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.035 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.036 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.036 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.036 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.036 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.036 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.036 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.036 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.037 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.037 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.037 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.037 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.037 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.037 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.037 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.038 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.038 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.038 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.038 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.038 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.038 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.038 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.039 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.039 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.039 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.039 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.039 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.039 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.039 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.040 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.040 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.040 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.040 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.040 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.040 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.040 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.041 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.041 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.041 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.041 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.041 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.041 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.041 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.041 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.042 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.042 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.042 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.042 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.042 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.042 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.042 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.043 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.043 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.043 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.043 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.043 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.043 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.043 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.044 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.044 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.044 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.044 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.044 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.044 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.044 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.045 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.045 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.045 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.045 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.045 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.045 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.045 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.045 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.046 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.046 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.046 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.046 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.046 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.046 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.046 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.047 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.047 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.047 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.047 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.047 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.047 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.047 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.048 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.048 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.048 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.048 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.048 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.048 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.048 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.048 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.049 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.049 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.049 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.049 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.049 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.049 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.049 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.050 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.050 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.050 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.050 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.050 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.050 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.051 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.051 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.051 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.051 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.051 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.051 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.051 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.052 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.052 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.052 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.052 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.052 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.052 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.052 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.053 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.053 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.053 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.053 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.053 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.053 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.053 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.054 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.054 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.054 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.054 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.054 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.054 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.054 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.054 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.055 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.055 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.055 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.055 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.055 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.056 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.056 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.056 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.056 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.056 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.056 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.056 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.057 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.057 252676 DEBUG oslo_service.service [None req-6c9d1541-5445-429b-a126-54c7e449f8a0 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
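
The block ending at the asterisk separator above is oslo.config's standard startup dump: every registered option with its effective value, emitted at DEBUG by ConfigOpts.log_opt_values() (the method named at the end of each line, cfg.py:2609, with the closing separator coming from cfg.py:2613). A minimal sketch of how a service produces such a dump — the two options registered here are illustrative stand-ins, not nova's real option set:

    import logging
    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)
    CONF = cfg.CONF

    # Hypothetical options, registered under a group the same way the
    # oslo_messaging_rabbit.* and oslo_limit.* options above are.
    CONF.register_opts(
        [cfg.IntOpt('rabbit_retry_interval', default=1),
         cfg.BoolOpt('ssl', default=False)],
        group='oslo_messaging_rabbit')

    CONF([])  # parse an (empty) command line and any config files
    # Logs one "group.option = value" line per registered option,
    # framed by the "****..." separator lines seen in the log above.
    CONF.log_opt_values(LOG, logging.DEBUG)
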
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.058 252676 INFO nova.service [-] Starting compute node (version 27.5.2-0.20260127144738.eaa65f0.el9)#033[00m
Feb  2 04:58:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.073 252676 DEBUG nova.virt.libvirt.host [None req-e8d4f08b-73c0-49f9-aab1-c10f2abef40e - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.074 252676 DEBUG nova.virt.libvirt.host [None req-e8d4f08b-73c0-49f9-aab1-c10f2abef40e - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.074 252676 DEBUG nova.virt.libvirt.host [None req-e8d4f08b-73c0-49f9-aab1-c10f2abef40e - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.074 252676 DEBUG nova.virt.libvirt.host [None req-e8d4f08b-73c0-49f9-aab1-c10f2abef40e - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.089 252676 DEBUG nova.virt.libvirt.host [None req-e8d4f08b-73c0-49f9-aab1-c10f2abef40e - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f0a372570a0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.091 252676 DEBUG nova.virt.libvirt.host [None req-e8d4f08b-73c0-49f9-aab1-c10f2abef40e - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f0a372570a0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.092 252676 INFO nova.virt.libvirt.driver [None req-e8d4f08b-73c0-49f9-aab1-c10f2abef40e - - - - - -] Connection event '1' reason 'None'#033[00m
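
The 09:58:02.073–09:58:02.092 lines trace nova's libvirt Host object: it starts an event thread, opens qemu:///system, then registers lifecycle and connection callbacks. The same sequence in bare libvirt-python, as a minimal sketch (the callback body is illustrative):

    import libvirt

    # An event-loop implementation must be registered before any event
    # callbacks; nova's "native event thread" wraps exactly this loop.
    libvirt.virEventRegisterDefaultImpl()

    conn = libvirt.open('qemu:///system')

    def lifecycle_cb(conn, dom, event, detail, opaque):
        # Invoked on domain start/stop/suspend/resume events.
        print(dom.name(), event, detail)

    conn.domainEventRegisterAny(
        None,                                    # None = all domains
        libvirt.VIR_DOMAIN_EVENT_ID_LIFECYCLE,
        lifecycle_cb, None)

    while True:                                  # service the events
        libvirt.virEventRunDefaultImpl()
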
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.096 252676 INFO nova.virt.libvirt.host [None req-e8d4f08b-73c0-49f9-aab1-c10f2abef40e - - - - - -] Libvirt host capabilities <capabilities>
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 
Feb  2 04:58:02 np0005604790 nova_compute[252672]:  <host>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    <uuid>ef282098-beab-4a99-a713-4af58aea9f62</uuid>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    <cpu>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <arch>x86_64</arch>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model>EPYC-Rome-v4</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <vendor>AMD</vendor>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <microcode version='16777317'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <signature family='23' model='49' stepping='0'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <maxphysaddr mode='emulate' bits='40'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <feature name='x2apic'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <feature name='tsc-deadline'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <feature name='osxsave'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <feature name='hypervisor'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <feature name='tsc_adjust'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <feature name='spec-ctrl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <feature name='stibp'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <feature name='arch-capabilities'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <feature name='ssbd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <feature name='cmp_legacy'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <feature name='topoext'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <feature name='virt-ssbd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <feature name='lbrv'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <feature name='tsc-scale'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <feature name='vmcb-clean'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <feature name='pause-filter'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <feature name='pfthreshold'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <feature name='svme-addr-chk'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <feature name='rdctl-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <feature name='skip-l1dfl-vmentry'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <feature name='mds-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <feature name='pschange-mc-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <pages unit='KiB' size='4'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <pages unit='KiB' size='2048'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <pages unit='KiB' size='1048576'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    </cpu>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    <power_management>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <suspend_mem/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    </power_management>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    <iommu support='no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    <migration_features>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <live/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <uri_transports>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <uri_transport>tcp</uri_transport>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <uri_transport>rdma</uri_transport>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </uri_transports>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    </migration_features>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    <topology>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <cells num='1'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <cell id='0'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:          <memory unit='KiB'>7864292</memory>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:          <pages unit='KiB' size='4'>1966073</pages>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:          <pages unit='KiB' size='2048'>0</pages>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:          <pages unit='KiB' size='1048576'>0</pages>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:          <distances>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:            <sibling id='0' value='10'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:          </distances>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:          <cpus num='8'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:          </cpus>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        </cell>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </cells>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    </topology>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    <cache>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    </cache>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    <secmodel>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model>selinux</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <doi>0</doi>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    </secmodel>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    <secmodel>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model>dac</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <doi>0</doi>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <baselabel type='kvm'>+107:+107</baselabel>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <baselabel type='qemu'>+107:+107</baselabel>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    </secmodel>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:  </host>
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 
Feb  2 04:58:02 np0005604790 nova_compute[252672]:  <guest>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    <os_type>hvm</os_type>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    <arch name='i686'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <wordsize>32</wordsize>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <domain type='qemu'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <domain type='kvm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    </arch>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    <features>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <pae/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <nonpae/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <acpi default='on' toggle='yes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <apic default='on' toggle='no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <cpuselection/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <deviceboot/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <disksnapshot default='on' toggle='no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <externalSnapshot/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    </features>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:  </guest>
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 
Feb  2 04:58:02 np0005604790 nova_compute[252672]:  <guest>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    <os_type>hvm</os_type>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    <arch name='x86_64'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <wordsize>64</wordsize>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <domain type='qemu'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <domain type='kvm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    </arch>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    <features>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <acpi default='on' toggle='yes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <apic default='on' toggle='no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <cpuselection/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <deviceboot/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <disksnapshot default='on' toggle='no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <externalSnapshot/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    </features>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:  </guest>
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 
Feb  2 04:58:02 np0005604790 nova_compute[252672]: </capabilities>
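
The <capabilities> document above is the raw return value of virConnect.getCapabilities(); nova parses it to learn the host CPU model, NUMA cells, and supported page sizes. A minimal sketch of extracting those same fields, reusing the conn handle from the previous sketch:

    import xml.etree.ElementTree as ET

    caps = ET.fromstring(conn.getCapabilities())

    cpu = caps.find('./host/cpu')
    print('arch :', cpu.findtext('arch'))        # x86_64
    print('model:', cpu.findtext('model'))       # EPYC-Rome-v4

    for cell in caps.findall('./host/topology/cells/cell'):
        mem = cell.findtext('memory')            # KiB, e.g. 7864292
        ncpus = cell.find('cpus').get('num')     # e.g. 8
        print(f"cell {cell.get('id')}: {mem} KiB across {ncpus} cpus")
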
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.102 252676 DEBUG nova.virt.libvirt.host [None req-e8d4f08b-73c0-49f9-aab1-c10f2abef40e - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
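
Per-(arch, machine type) details such as firmware loaders and usable CPU models come from a separate call, virConnect.getDomainCapabilities(); the <domainCapabilities> documents logged below are its output for each machine type nova probes. A sketch of the call for the arch=i686, machine_type=pc case, with argument values taken from the dump that follows:

    xml_str = conn.getDomainCapabilities(
        '/usr/libexec/qemu-kvm',   # emulator binary
        'i686',                    # guest architecture
        'pc',                      # machine type
        'kvm',                     # virt type
        0)                         # flags
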
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.103 252676 WARNING nova.virt.libvirt.driver [None req-e8d4f08b-73c0-49f9-aab1-c10f2abef40e - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.103 252676 DEBUG nova.virt.libvirt.volume.mount [None req-e8d4f08b-73c0-49f9-aab1-c10f2abef40e - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.108 252676 DEBUG nova.virt.libvirt.host [None req-e8d4f08b-73c0-49f9-aab1-c10f2abef40e - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Feb  2 04:58:02 np0005604790 nova_compute[252672]: <domainCapabilities>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:  <path>/usr/libexec/qemu-kvm</path>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:  <domain>kvm</domain>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:  <machine>pc-i440fx-rhel7.6.0</machine>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:  <arch>i686</arch>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:  <vcpu max='240'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:  <iothreads supported='yes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:  <os supported='yes'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    <enum name='firmware'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    <loader supported='yes'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <enum name='type'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>rom</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>pflash</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </enum>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <enum name='readonly'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>yes</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>no</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </enum>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <enum name='secure'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>no</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </enum>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    </loader>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:  </os>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:  <cpu>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    <mode name='host-passthrough' supported='yes'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <enum name='hostPassthroughMigratable'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>on</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>off</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </enum>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    </mode>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    <mode name='maximum' supported='yes'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <enum name='maximumMigratable'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>on</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>off</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </enum>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    </mode>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    <mode name='host-model' supported='yes'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model fallback='forbid'>EPYC-Rome</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <vendor>AMD</vendor>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <maxphysaddr mode='passthrough' limit='40'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <feature policy='require' name='x2apic'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <feature policy='require' name='tsc-deadline'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <feature policy='require' name='hypervisor'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <feature policy='require' name='tsc_adjust'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <feature policy='require' name='spec-ctrl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <feature policy='require' name='stibp'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <feature policy='require' name='ssbd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <feature policy='require' name='cmp_legacy'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <feature policy='require' name='overflow-recov'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <feature policy='require' name='succor'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <feature policy='require' name='ibrs'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <feature policy='require' name='amd-ssbd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <feature policy='require' name='virt-ssbd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <feature policy='require' name='lbrv'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <feature policy='require' name='tsc-scale'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <feature policy='require' name='vmcb-clean'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <feature policy='require' name='flushbyasid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <feature policy='require' name='pause-filter'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <feature policy='require' name='pfthreshold'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <feature policy='require' name='svme-addr-chk'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <feature policy='require' name='lfence-always-serializing'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <feature policy='disable' name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    </mode>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    <mode name='custom' supported='yes'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Broadwell'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='hle'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rtm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Broadwell-IBRS'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='hle'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rtm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Broadwell-noTSX'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Broadwell-noTSX-IBRS'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Broadwell-v1'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='hle'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rtm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Broadwell-v2'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Broadwell-v3'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='hle'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rtm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Broadwell-v4'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Cascadelake-Server'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='hle'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rtm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Cascadelake-Server-noTSX'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ibrs-all'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Cascadelake-Server-v1'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='hle'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rtm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Cascadelake-Server-v2'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='hle'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ibrs-all'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rtm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Cascadelake-Server-v3'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ibrs-all'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Cascadelake-Server-v4'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ibrs-all'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Cascadelake-Server-v5'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ibrs-all'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='ClearwaterForest'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx-ifma'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx-ne-convert'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx-vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx-vnni-int16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx-vnni-int8'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='bhi-ctrl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='bhi-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='bus-lock-detect'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='cldemote'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='cmpccxadd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ddpd-u'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fbsdp-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrs'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='gfni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ibrs-all'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='intel-psfd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ipred-ctrl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='lam'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='mcdt-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='movdir64b'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='movdiri'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pbrsb-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='prefetchiti'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='psdp-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rrsba-ctrl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='sbdr-ssdp-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='serialize'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='sha512'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='sm3'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='sm4'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ss'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vaes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vpclmulqdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='ClearwaterForest-v1'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx-ifma'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx-ne-convert'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx-vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx-vnni-int16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx-vnni-int8'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='bhi-ctrl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='bhi-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='bus-lock-detect'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='cldemote'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='cmpccxadd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ddpd-u'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fbsdp-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrs'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='gfni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ibrs-all'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='intel-psfd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ipred-ctrl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='lam'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='mcdt-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='movdir64b'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='movdiri'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pbrsb-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='prefetchiti'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='psdp-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rrsba-ctrl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='sbdr-ssdp-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='serialize'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='sha512'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='sm3'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='sm4'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ss'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vaes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vpclmulqdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Cooperlake'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-bf16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='hle'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ibrs-all'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rtm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='taa-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Cooperlake-v1'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-bf16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='hle'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ibrs-all'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rtm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='taa-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Cooperlake-v2'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-bf16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='hle'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ibrs-all'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rtm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='taa-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Denverton'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='mpx'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Denverton-v1'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='mpx'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Denverton-v2'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Denverton-v3'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Dhyana-v2'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='EPYC-Genoa'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amd-psfd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='auto-ibrs'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-bf16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bitalg'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512ifma'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi2'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='gfni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='la57'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='no-nested-data-bp'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='null-sel-clr-base'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='stibp-always-on'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vaes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vpclmulqdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='EPYC-Genoa-v1'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amd-psfd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='auto-ibrs'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-bf16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bitalg'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512ifma'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi2'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='gfni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='la57'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='no-nested-data-bp'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='null-sel-clr-base'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='stibp-always-on'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vaes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vpclmulqdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='EPYC-Genoa-v2'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amd-psfd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='auto-ibrs'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-bf16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bitalg'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512ifma'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi2'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fs-gs-base-ns'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='gfni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='la57'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='no-nested-data-bp'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='null-sel-clr-base'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='perfmon-v2'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='stibp-always-on'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vaes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vpclmulqdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='EPYC-Milan'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='EPYC-Milan-v1'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='EPYC-Milan-v2'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amd-psfd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='no-nested-data-bp'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='null-sel-clr-base'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='stibp-always-on'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vaes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vpclmulqdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='EPYC-Milan-v3'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amd-psfd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='no-nested-data-bp'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='null-sel-clr-base'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='stibp-always-on'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vaes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vpclmulqdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='EPYC-Rome'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='EPYC-Rome-v1'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='EPYC-Rome-v2'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='EPYC-Rome-v3'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='EPYC-Turin'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amd-psfd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='auto-ibrs'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx-vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-bf16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-vp2intersect'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bitalg'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512ifma'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi2'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fs-gs-base-ns'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='gfni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ibpb-brtype'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='la57'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='movdir64b'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='movdiri'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='no-nested-data-bp'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='null-sel-clr-base'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='perfmon-v2'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='prefetchi'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='sbpb'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='srso-user-kernel-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='stibp-always-on'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vaes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vpclmulqdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='EPYC-Turin-v1'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amd-psfd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='auto-ibrs'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx-vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-bf16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-vp2intersect'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bitalg'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512ifma'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi2'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fs-gs-base-ns'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='gfni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ibpb-brtype'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='la57'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='movdir64b'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='movdiri'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='no-nested-data-bp'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='null-sel-clr-base'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='perfmon-v2'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='prefetchi'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='sbpb'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='srso-user-kernel-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='stibp-always-on'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vaes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vpclmulqdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='EPYC-v3'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='EPYC-v4'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='EPYC-v5'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='GraniteRapids'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amx-bf16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amx-fp16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amx-int8'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amx-tile'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx-vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-bf16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-fp16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bitalg'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512ifma'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi2'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='bus-lock-detect'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fbsdp-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrc'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrs'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fzrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='gfni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='hle'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ibrs-all'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='la57'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='mcdt-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pbrsb-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='prefetchiti'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='psdp-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rtm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='sbdr-ssdp-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='serialize'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='taa-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='tsx-ldtrk'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vaes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vpclmulqdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xfd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='GraniteRapids-v1'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amx-bf16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amx-fp16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amx-int8'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amx-tile'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx-vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-bf16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-fp16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bitalg'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512ifma'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi2'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='bus-lock-detect'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fbsdp-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrc'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrs'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fzrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='gfni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='hle'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ibrs-all'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='la57'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='mcdt-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pbrsb-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='prefetchiti'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='psdp-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rtm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='sbdr-ssdp-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='serialize'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='taa-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='tsx-ldtrk'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vaes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vpclmulqdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xfd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='GraniteRapids-v2'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amx-bf16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amx-fp16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amx-int8'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amx-tile'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx-vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx10'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx10-128'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx10-256'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx10-512'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-bf16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-fp16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bitalg'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512ifma'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi2'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='bus-lock-detect'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='cldemote'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fbsdp-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrc'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrs'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fzrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='gfni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='hle'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ibrs-all'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='la57'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='mcdt-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='movdir64b'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='movdiri'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pbrsb-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='prefetchiti'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='psdp-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rtm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='sbdr-ssdp-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='serialize'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ss'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='taa-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='tsx-ldtrk'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vaes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vpclmulqdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xfd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='GraniteRapids-v3'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amx-bf16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amx-fp16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amx-int8'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amx-tile'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx-vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx10'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx10-128'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx10-256'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx10-512'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-bf16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-fp16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bitalg'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512ifma'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi2'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='bus-lock-detect'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='cldemote'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fbsdp-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrc'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrs'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fzrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='gfni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='hle'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ibrs-all'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='la57'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='mcdt-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='movdir64b'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='movdiri'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pbrsb-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='prefetchiti'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='psdp-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rtm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='sbdr-ssdp-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='serialize'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ss'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='taa-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='tsx-ldtrk'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vaes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vpclmulqdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xfd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Haswell'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='hle'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rtm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Haswell-IBRS'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='hle'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rtm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Haswell-noTSX'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Haswell-noTSX-IBRS'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Haswell-v1'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='hle'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rtm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Haswell-v2'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Haswell-v3'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='hle'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rtm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Haswell-v4'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Icelake-Server'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bitalg'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi2'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='gfni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='hle'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='la57'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rtm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vaes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vpclmulqdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Icelake-Server-noTSX'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bitalg'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi2'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='gfni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='la57'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vaes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vpclmulqdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Icelake-Server-v1'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bitalg'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi2'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='gfni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='hle'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='la57'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rtm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vaes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vpclmulqdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Icelake-Server-v2'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bitalg'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi2'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='gfni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='la57'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vaes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vpclmulqdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Icelake-Server-v3'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bitalg'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi2'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='gfni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ibrs-all'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='la57'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='taa-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vaes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vpclmulqdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Icelake-Server-v4'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bitalg'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512ifma'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi2'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='gfni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ibrs-all'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='la57'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='taa-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vaes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vpclmulqdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Icelake-Server-v5'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bitalg'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512ifma'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi2'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='gfni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ibrs-all'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='la57'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='taa-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vaes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vpclmulqdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Icelake-Server-v6'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bitalg'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512ifma'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi2'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='gfni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ibrs-all'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='la57'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='taa-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vaes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vpclmulqdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Icelake-Server-v7'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bitalg'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512ifma'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi2'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='gfni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='hle'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ibrs-all'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='la57'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rtm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='taa-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vaes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vpclmulqdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='IvyBridge'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='IvyBridge-IBRS'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='IvyBridge-v1'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='IvyBridge-v2'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='KnightsMill'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-4fmaps'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-4vnniw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512er'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512pf'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ss'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='KnightsMill-v1'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-4fmaps'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-4vnniw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512er'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512pf'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ss'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Opteron_G4'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fma4'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xop'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Opteron_G4-v1'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fma4'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xop'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Opteron_G5'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fma4'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='tbm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xop'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Opteron_G5-v1'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fma4'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='tbm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xop'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='SapphireRapids'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amx-bf16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amx-int8'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amx-tile'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx-vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-bf16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-fp16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bitalg'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512ifma'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi2'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='bus-lock-detect'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrc'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrs'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fzrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='gfni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='hle'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ibrs-all'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='la57'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rtm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='serialize'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='taa-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='tsx-ldtrk'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vaes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vpclmulqdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xfd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='SapphireRapids-v1'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amx-bf16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amx-int8'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amx-tile'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx-vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-bf16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-fp16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bitalg'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512ifma'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi2'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='bus-lock-detect'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrc'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrs'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fzrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='gfni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='hle'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ibrs-all'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='la57'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rtm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='serialize'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='taa-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='tsx-ldtrk'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vaes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vpclmulqdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xfd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='SapphireRapids-v2'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amx-bf16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amx-int8'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amx-tile'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx-vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-bf16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-fp16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bitalg'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512ifma'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi2'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='bus-lock-detect'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fbsdp-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrc'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrs'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fzrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='gfni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='hle'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ibrs-all'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='la57'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='psdp-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rtm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='sbdr-ssdp-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='serialize'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='taa-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='tsx-ldtrk'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vaes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vpclmulqdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xfd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='SapphireRapids-v3'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amx-bf16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amx-int8'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amx-tile'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx-vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-bf16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-fp16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bitalg'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512ifma'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi2'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='bus-lock-detect'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='cldemote'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fbsdp-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrc'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrs'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fzrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='gfni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='hle'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ibrs-all'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='la57'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='movdir64b'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='movdiri'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='psdp-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rtm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='sbdr-ssdp-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='serialize'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ss'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='taa-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='tsx-ldtrk'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vaes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vpclmulqdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xfd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='SapphireRapids-v4'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amx-bf16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amx-int8'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amx-tile'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx-vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-bf16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-fp16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bitalg'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512ifma'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi2'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='bus-lock-detect'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='cldemote'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fbsdp-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrc'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrs'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fzrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='gfni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='hle'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ibrs-all'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='la57'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='movdir64b'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='movdiri'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='psdp-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rtm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='sbdr-ssdp-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='serialize'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ss'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='taa-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='tsx-ldtrk'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vaes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vpclmulqdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xfd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='SierraForest'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx-ifma'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx-ne-convert'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx-vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx-vnni-int8'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='bus-lock-detect'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='cmpccxadd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fbsdp-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrs'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='gfni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ibrs-all'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='mcdt-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pbrsb-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='psdp-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='sbdr-ssdp-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='serialize'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vaes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vpclmulqdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='SierraForest-v1'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx-ifma'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx-ne-convert'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx-vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx-vnni-int8'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='bus-lock-detect'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='cmpccxadd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fbsdp-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrs'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='gfni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ibrs-all'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='mcdt-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pbrsb-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='psdp-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='sbdr-ssdp-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='serialize'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vaes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vpclmulqdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='SierraForest-v2'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx-ifma'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx-ne-convert'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx-vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx-vnni-int8'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='bhi-ctrl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='bus-lock-detect'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='cldemote'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='cmpccxadd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fbsdp-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrs'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='gfni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ibrs-all'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='intel-psfd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ipred-ctrl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='lam'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='mcdt-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='movdir64b'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='movdiri'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pbrsb-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='psdp-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rrsba-ctrl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='sbdr-ssdp-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='serialize'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ss'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vaes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vpclmulqdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='SierraForest-v3'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx-ifma'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx-ne-convert'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx-vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx-vnni-int8'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='bhi-ctrl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='bus-lock-detect'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='cldemote'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='cmpccxadd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fbsdp-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrs'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='gfni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ibrs-all'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='intel-psfd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ipred-ctrl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='lam'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='mcdt-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='movdir64b'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='movdiri'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pbrsb-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='psdp-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rrsba-ctrl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='sbdr-ssdp-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='serialize'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ss'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vaes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vpclmulqdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Skylake-Client'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='hle'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rtm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Skylake-Client-IBRS'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='hle'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rtm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Skylake-Client-v1'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='hle'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rtm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Skylake-Client-v2'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='hle'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rtm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Skylake-Client-v3'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Skylake-Client-v4'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Skylake-Server'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='hle'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rtm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Skylake-Server-IBRS'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='hle'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rtm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Skylake-Server-v1'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='hle'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rtm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Skylake-Server-v2'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='hle'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rtm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Skylake-Server-v3'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Skylake-Server-v4'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Skylake-Server-v5'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Snowridge'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='cldemote'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='core-capability'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='gfni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='movdir64b'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='movdiri'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='mpx'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='split-lock-detect'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Snowridge-v1'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='cldemote'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='core-capability'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='gfni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='movdir64b'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='movdiri'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='mpx'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='split-lock-detect'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Snowridge-v2'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='cldemote'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='core-capability'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='gfni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='movdir64b'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='movdiri'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='split-lock-detect'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Snowridge-v3'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='cldemote'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='core-capability'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='gfni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='movdir64b'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='movdiri'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='split-lock-detect'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Snowridge-v4'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='cldemote'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='gfni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='movdir64b'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='movdiri'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='athlon'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='3dnow'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='3dnowext'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='athlon-v1'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='3dnow'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='3dnowext'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='core2duo'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ss'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='core2duo-v1'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ss'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='coreduo'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ss'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='coreduo-v1'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ss'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='n270'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ss'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='n270-v1'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ss'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='phenom'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='3dnow'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='3dnowext'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='phenom-v1'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='3dnow'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='3dnowext'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    </mode>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:  </cpu>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:  <memoryBacking supported='yes'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    <enum name='sourceType'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <value>file</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <value>anonymous</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <value>memfd</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    </enum>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:  </memoryBacking>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:  <devices>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    <disk supported='yes'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <enum name='diskDevice'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>disk</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>cdrom</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>floppy</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>lun</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </enum>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <enum name='bus'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>ide</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>fdc</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>scsi</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>virtio</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>usb</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>sata</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </enum>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <enum name='model'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>virtio</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>virtio-transitional</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>virtio-non-transitional</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </enum>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    </disk>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    <graphics supported='yes'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <enum name='type'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>vnc</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>egl-headless</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>dbus</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </enum>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    </graphics>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    <video supported='yes'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <enum name='modelType'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>vga</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>cirrus</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>virtio</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>none</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>bochs</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>ramfb</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </enum>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    </video>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    <hostdev supported='yes'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <enum name='mode'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>subsystem</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </enum>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <enum name='startupPolicy'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>default</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>mandatory</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>requisite</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>optional</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </enum>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <enum name='subsysType'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>usb</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>pci</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>scsi</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </enum>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <enum name='capsType'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <enum name='pciBackend'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    </hostdev>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    <rng supported='yes'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <enum name='model'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>virtio</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>virtio-transitional</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>virtio-non-transitional</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </enum>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <enum name='backendModel'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>random</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>egd</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>builtin</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </enum>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    </rng>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    <filesystem supported='yes'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <enum name='driverType'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>path</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>handle</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>virtiofs</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </enum>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    </filesystem>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    <tpm supported='yes'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <enum name='model'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>tpm-tis</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>tpm-crb</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </enum>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <enum name='backendModel'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>emulator</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>external</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </enum>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <enum name='backendVersion'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>2.0</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </enum>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    </tpm>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    <redirdev supported='yes'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <enum name='bus'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>usb</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </enum>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    </redirdev>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    <channel supported='yes'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <enum name='type'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>pty</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>unix</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </enum>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    </channel>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    <crypto supported='yes'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <enum name='model'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <enum name='type'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>qemu</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </enum>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <enum name='backendModel'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>builtin</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </enum>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    </crypto>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    <interface supported='yes'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <enum name='backendType'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>default</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>passt</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </enum>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    </interface>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    <panic supported='yes'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <enum name='model'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>isa</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>hyperv</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </enum>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    </panic>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    <console supported='yes'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <enum name='type'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>null</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>vc</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>pty</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>dev</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>file</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>pipe</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>stdio</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>udp</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>tcp</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>unix</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>qemu-vdagent</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>dbus</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </enum>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    </console>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:  </devices>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:  <features>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    <gic supported='no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    <vmcoreinfo supported='yes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    <genid supported='yes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    <backingStoreInput supported='yes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    <backup supported='yes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    <async-teardown supported='yes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    <s390-pv supported='no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    <ps2 supported='yes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    <tdx supported='no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    <sev supported='no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    <sgx supported='no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    <hyperv supported='yes'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <enum name='features'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>relaxed</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>vapic</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>spinlocks</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>vpindex</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>runtime</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>synic</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>stimer</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>reset</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>vendor_id</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>frequencies</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>reenlightenment</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>tlbflush</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>ipi</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>avic</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>emsr_bitmap</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>xmm_input</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </enum>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <defaults>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <spinlocks>4095</spinlocks>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <stimer_direct>on</stimer_direct>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <tlbflush_direct>on</tlbflush_direct>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <tlbflush_extended>on</tlbflush_extended>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <vendor_id>Linux KVM Hv</vendor_id>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </defaults>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    </hyperv>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    <launchSecurity supported='no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:  </features>
Feb  2 04:58:02 np0005604790 nova_compute[252672]: </domainCapabilities>
Feb  2 04:58:02 np0005604790 nova_compute[252672]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.113 252676 DEBUG nova.virt.libvirt.host [None req-e8d4f08b-73c0-49f9-aab1-c10f2abef40e - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Feb  2 04:58:02 np0005604790 nova_compute[252672]: <domainCapabilities>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:  <path>/usr/libexec/qemu-kvm</path>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:  <domain>kvm</domain>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:  <machine>pc-q35-rhel9.8.0</machine>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:  <arch>i686</arch>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:  <vcpu max='4096'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:  <iothreads supported='yes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:  <os supported='yes'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    <enum name='firmware'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    <loader supported='yes'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <enum name='type'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>rom</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>pflash</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </enum>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <enum name='readonly'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>yes</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>no</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </enum>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <enum name='secure'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>no</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </enum>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    </loader>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:  </os>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:  <cpu>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    <mode name='host-passthrough' supported='yes'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <enum name='hostPassthroughMigratable'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>on</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>off</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </enum>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    </mode>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    <mode name='maximum' supported='yes'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <enum name='maximumMigratable'>
Feb  2 04:58:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 04:58:02 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>on</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>off</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </enum>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    </mode>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    <mode name='host-model' supported='yes'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model fallback='forbid'>EPYC-Rome</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <vendor>AMD</vendor>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <maxphysaddr mode='passthrough' limit='40'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <feature policy='require' name='x2apic'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <feature policy='require' name='tsc-deadline'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <feature policy='require' name='hypervisor'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <feature policy='require' name='tsc_adjust'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <feature policy='require' name='spec-ctrl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <feature policy='require' name='stibp'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <feature policy='require' name='ssbd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <feature policy='require' name='cmp_legacy'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <feature policy='require' name='overflow-recov'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <feature policy='require' name='succor'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <feature policy='require' name='ibrs'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <feature policy='require' name='amd-ssbd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <feature policy='require' name='virt-ssbd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <feature policy='require' name='lbrv'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <feature policy='require' name='tsc-scale'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <feature policy='require' name='vmcb-clean'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <feature policy='require' name='flushbyasid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <feature policy='require' name='pause-filter'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <feature policy='require' name='pfthreshold'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <feature policy='require' name='svme-addr-chk'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <feature policy='require' name='lfence-always-serializing'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <feature policy='disable' name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    </mode>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    <mode name='custom' supported='yes'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Broadwell'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='hle'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rtm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Broadwell-IBRS'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='hle'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rtm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Broadwell-noTSX'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Broadwell-noTSX-IBRS'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Broadwell-v1'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='hle'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rtm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Broadwell-v2'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Broadwell-v3'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='hle'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rtm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Broadwell-v4'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Cascadelake-Server'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='hle'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rtm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Cascadelake-Server-noTSX'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ibrs-all'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Cascadelake-Server-v1'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='hle'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rtm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Cascadelake-Server-v2'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='hle'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ibrs-all'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rtm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Cascadelake-Server-v3'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ibrs-all'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Cascadelake-Server-v4'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ibrs-all'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Cascadelake-Server-v5'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ibrs-all'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='ClearwaterForest'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx-ifma'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx-ne-convert'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx-vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx-vnni-int16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx-vnni-int8'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='bhi-ctrl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='bhi-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='bus-lock-detect'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='cldemote'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='cmpccxadd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ddpd-u'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fbsdp-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrs'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='gfni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ibrs-all'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='intel-psfd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ipred-ctrl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='lam'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='mcdt-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='movdir64b'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='movdiri'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pbrsb-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='prefetchiti'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='psdp-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rrsba-ctrl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='sbdr-ssdp-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='serialize'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='sha512'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='sm3'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='sm4'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ss'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vaes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vpclmulqdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='ClearwaterForest-v1'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx-ifma'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx-ne-convert'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx-vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx-vnni-int16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx-vnni-int8'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='bhi-ctrl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='bhi-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='bus-lock-detect'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='cldemote'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='cmpccxadd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ddpd-u'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fbsdp-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrs'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='gfni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ibrs-all'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='intel-psfd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ipred-ctrl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='lam'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='mcdt-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='movdir64b'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='movdiri'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pbrsb-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='prefetchiti'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='psdp-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rrsba-ctrl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='sbdr-ssdp-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='serialize'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='sha512'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='sm3'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='sm4'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ss'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vaes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vpclmulqdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Cooperlake'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-bf16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='hle'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ibrs-all'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rtm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='taa-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Cooperlake-v1'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-bf16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='hle'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ibrs-all'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rtm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='taa-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Cooperlake-v2'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-bf16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='hle'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ibrs-all'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rtm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='taa-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Denverton'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='mpx'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Denverton-v1'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='mpx'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Denverton-v2'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Denverton-v3'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Dhyana-v2'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='EPYC-Genoa'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amd-psfd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='auto-ibrs'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-bf16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bitalg'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512ifma'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi2'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='gfni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='la57'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='no-nested-data-bp'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='null-sel-clr-base'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='stibp-always-on'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vaes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vpclmulqdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='EPYC-Genoa-v1'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amd-psfd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='auto-ibrs'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-bf16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bitalg'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512ifma'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi2'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='gfni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='la57'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='no-nested-data-bp'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='null-sel-clr-base'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='stibp-always-on'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vaes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vpclmulqdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='EPYC-Genoa-v2'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amd-psfd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='auto-ibrs'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-bf16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bitalg'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512ifma'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi2'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fs-gs-base-ns'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='gfni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='la57'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='no-nested-data-bp'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='null-sel-clr-base'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='perfmon-v2'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='stibp-always-on'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vaes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vpclmulqdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='EPYC-Milan'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='EPYC-Milan-v1'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='EPYC-Milan-v2'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amd-psfd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='no-nested-data-bp'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='null-sel-clr-base'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='stibp-always-on'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vaes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vpclmulqdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='EPYC-Milan-v3'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amd-psfd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='no-nested-data-bp'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='null-sel-clr-base'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='stibp-always-on'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vaes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vpclmulqdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='EPYC-Rome'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='EPYC-Rome-v1'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='EPYC-Rome-v2'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='EPYC-Rome-v3'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='EPYC-Turin'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amd-psfd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='auto-ibrs'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx-vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-bf16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-vp2intersect'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bitalg'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512ifma'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi2'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fs-gs-base-ns'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='gfni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ibpb-brtype'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='la57'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='movdir64b'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='movdiri'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='no-nested-data-bp'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='null-sel-clr-base'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='perfmon-v2'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='prefetchi'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='sbpb'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='srso-user-kernel-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='stibp-always-on'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vaes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vpclmulqdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='EPYC-Turin-v1'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amd-psfd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='auto-ibrs'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx-vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-bf16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-vp2intersect'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bitalg'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512ifma'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi2'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fs-gs-base-ns'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='gfni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ibpb-brtype'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='la57'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='movdir64b'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='movdiri'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='no-nested-data-bp'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='null-sel-clr-base'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='perfmon-v2'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='prefetchi'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='sbpb'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='srso-user-kernel-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='stibp-always-on'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vaes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vpclmulqdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='EPYC-v3'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='EPYC-v4'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='EPYC-v5'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='GraniteRapids'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amx-bf16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amx-fp16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amx-int8'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amx-tile'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx-vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-bf16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-fp16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bitalg'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512ifma'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi2'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='bus-lock-detect'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fbsdp-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrc'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrs'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fzrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='gfni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='hle'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ibrs-all'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='la57'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='mcdt-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pbrsb-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='prefetchiti'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='psdp-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rtm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='sbdr-ssdp-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='serialize'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='taa-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='tsx-ldtrk'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vaes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vpclmulqdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xfd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='GraniteRapids-v1'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amx-bf16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amx-fp16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amx-int8'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amx-tile'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx-vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-bf16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-fp16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bitalg'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512ifma'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi2'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='bus-lock-detect'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fbsdp-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrc'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrs'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fzrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='gfni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='hle'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ibrs-all'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='la57'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='mcdt-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pbrsb-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='prefetchiti'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='psdp-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rtm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='sbdr-ssdp-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='serialize'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='taa-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='tsx-ldtrk'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vaes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vpclmulqdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xfd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='GraniteRapids-v2'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amx-bf16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amx-fp16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amx-int8'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amx-tile'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx-vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx10'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx10-128'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx10-256'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx10-512'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-bf16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-fp16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bitalg'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512ifma'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi2'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='bus-lock-detect'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='cldemote'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fbsdp-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrc'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrs'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fzrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='gfni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='hle'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ibrs-all'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='la57'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='mcdt-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='movdir64b'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='movdiri'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pbrsb-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='prefetchiti'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='psdp-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rtm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='sbdr-ssdp-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='serialize'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ss'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='taa-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='tsx-ldtrk'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vaes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vpclmulqdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xfd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='GraniteRapids-v3'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amx-bf16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amx-fp16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amx-int8'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amx-tile'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx-vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx10'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx10-128'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx10-256'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx10-512'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-bf16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-fp16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bitalg'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512ifma'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi2'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='bus-lock-detect'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='cldemote'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fbsdp-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrc'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrs'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fzrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='gfni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='hle'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ibrs-all'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='la57'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='mcdt-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='movdir64b'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='movdiri'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pbrsb-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='prefetchiti'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='psdp-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rtm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='sbdr-ssdp-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='serialize'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ss'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='taa-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='tsx-ldtrk'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vaes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vpclmulqdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xfd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Haswell'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='hle'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rtm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Haswell-IBRS'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='hle'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rtm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Haswell-noTSX'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Haswell-noTSX-IBRS'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Haswell-v1'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='hle'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rtm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Haswell-v2'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Haswell-v3'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='hle'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rtm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Haswell-v4'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Icelake-Server'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bitalg'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi2'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='gfni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='hle'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='la57'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rtm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vaes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vpclmulqdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Icelake-Server-noTSX'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bitalg'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi2'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='gfni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='la57'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vaes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vpclmulqdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Icelake-Server-v1'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bitalg'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi2'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='gfni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='hle'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='la57'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rtm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vaes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vpclmulqdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Icelake-Server-v2'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bitalg'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi2'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='gfni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='la57'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vaes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vpclmulqdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Icelake-Server-v3'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bitalg'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi2'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='gfni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ibrs-all'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='la57'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='taa-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vaes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vpclmulqdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Icelake-Server-v4'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bitalg'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512ifma'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi2'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='gfni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ibrs-all'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='la57'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='taa-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vaes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vpclmulqdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Icelake-Server-v5'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bitalg'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512ifma'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi2'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='gfni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ibrs-all'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='la57'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='taa-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vaes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vpclmulqdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Icelake-Server-v6'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bitalg'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512ifma'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi2'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='gfni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ibrs-all'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='la57'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='taa-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vaes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vpclmulqdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Icelake-Server-v7'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bitalg'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512ifma'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi2'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='gfni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='hle'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ibrs-all'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='la57'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rtm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='taa-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vaes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vpclmulqdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='IvyBridge'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='IvyBridge-IBRS'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='IvyBridge-v1'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='IvyBridge-v2'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='KnightsMill'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-4fmaps'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-4vnniw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512er'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512pf'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ss'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='KnightsMill-v1'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-4fmaps'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-4vnniw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512er'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512pf'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ss'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Opteron_G4'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fma4'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xop'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Opteron_G4-v1'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fma4'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xop'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Opteron_G5'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fma4'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='tbm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xop'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Opteron_G5-v1'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fma4'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='tbm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xop'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='SapphireRapids'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amx-bf16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amx-int8'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amx-tile'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx-vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-bf16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-fp16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bitalg'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512ifma'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi2'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='bus-lock-detect'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrc'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrs'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fzrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='gfni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='hle'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ibrs-all'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='la57'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rtm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='serialize'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='taa-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='tsx-ldtrk'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vaes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vpclmulqdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xfd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='SapphireRapids-v1'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amx-bf16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amx-int8'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amx-tile'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx-vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-bf16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-fp16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bitalg'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512ifma'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi2'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='bus-lock-detect'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrc'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrs'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fzrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='gfni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='hle'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ibrs-all'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='la57'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rtm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='serialize'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='taa-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='tsx-ldtrk'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vaes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vpclmulqdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xfd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='SapphireRapids-v2'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amx-bf16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amx-int8'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amx-tile'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx-vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-bf16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-fp16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bitalg'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512ifma'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi2'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='bus-lock-detect'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fbsdp-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrc'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrs'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fzrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='gfni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='hle'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ibrs-all'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='la57'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='psdp-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rtm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='sbdr-ssdp-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='serialize'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='taa-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='tsx-ldtrk'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vaes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vpclmulqdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xfd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='SapphireRapids-v3'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amx-bf16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amx-int8'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amx-tile'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx-vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-bf16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-fp16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bitalg'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512ifma'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi2'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='bus-lock-detect'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='cldemote'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fbsdp-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrc'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrs'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fzrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='gfni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='hle'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ibrs-all'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='la57'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='movdir64b'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='movdiri'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='psdp-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rtm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='sbdr-ssdp-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='serialize'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ss'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='taa-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='tsx-ldtrk'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vaes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vpclmulqdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xfd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='SapphireRapids-v4'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amx-bf16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amx-int8'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amx-tile'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx-vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-bf16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-fp16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bitalg'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512ifma'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi2'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='bus-lock-detect'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='cldemote'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fbsdp-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrc'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrs'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fzrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='gfni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='hle'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ibrs-all'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='la57'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='movdir64b'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='movdiri'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='psdp-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rtm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='sbdr-ssdp-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='serialize'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ss'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='taa-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='tsx-ldtrk'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vaes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vpclmulqdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xfd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='SierraForest'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx-ifma'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx-ne-convert'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx-vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx-vnni-int8'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='bus-lock-detect'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='cmpccxadd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fbsdp-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrs'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='gfni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ibrs-all'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='mcdt-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pbrsb-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='psdp-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='sbdr-ssdp-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='serialize'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vaes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vpclmulqdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='SierraForest-v1'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx-ifma'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx-ne-convert'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx-vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx-vnni-int8'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='bus-lock-detect'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='cmpccxadd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fbsdp-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrs'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='gfni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ibrs-all'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='mcdt-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pbrsb-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='psdp-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='sbdr-ssdp-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='serialize'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vaes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vpclmulqdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='SierraForest-v2'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx-ifma'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx-ne-convert'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx-vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx-vnni-int8'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='bhi-ctrl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='bus-lock-detect'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='cldemote'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='cmpccxadd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fbsdp-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrs'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='gfni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ibrs-all'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='intel-psfd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ipred-ctrl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='lam'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='mcdt-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='movdir64b'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='movdiri'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pbrsb-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='psdp-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rrsba-ctrl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='sbdr-ssdp-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='serialize'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ss'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vaes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vpclmulqdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='SierraForest-v3'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx-ifma'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx-ne-convert'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx-vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx-vnni-int8'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='bhi-ctrl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='bus-lock-detect'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='cldemote'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='cmpccxadd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fbsdp-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrs'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='gfni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ibrs-all'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='intel-psfd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ipred-ctrl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='lam'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='mcdt-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='movdir64b'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='movdiri'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pbrsb-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='psdp-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rrsba-ctrl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='sbdr-ssdp-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='serialize'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ss'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vaes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vpclmulqdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Skylake-Client'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='hle'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rtm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Skylake-Client-IBRS'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='hle'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rtm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Skylake-Client-v1'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='hle'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rtm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Skylake-Client-v2'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='hle'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rtm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Skylake-Client-v3'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Skylake-Client-v4'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Skylake-Server'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='hle'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rtm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Skylake-Server-IBRS'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='hle'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rtm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Skylake-Server-v1'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='hle'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rtm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Skylake-Server-v2'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='hle'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rtm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Skylake-Server-v3'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Skylake-Server-v4'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Skylake-Server-v5'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Snowridge'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='cldemote'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='core-capability'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='gfni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='movdir64b'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='movdiri'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='mpx'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='split-lock-detect'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Snowridge-v1'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='cldemote'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='core-capability'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='gfni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='movdir64b'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='movdiri'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='mpx'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='split-lock-detect'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Snowridge-v2'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='cldemote'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='core-capability'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='gfni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='movdir64b'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='movdiri'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='split-lock-detect'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Snowridge-v3'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='cldemote'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='core-capability'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='gfni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='movdir64b'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='movdiri'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='split-lock-detect'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Snowridge-v4'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='cldemote'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='gfni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='movdir64b'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='movdiri'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='athlon'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='3dnow'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='3dnowext'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='athlon-v1'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='3dnow'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='3dnowext'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='core2duo'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ss'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='core2duo-v1'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ss'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='coreduo'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ss'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='coreduo-v1'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ss'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='n270'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ss'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='n270-v1'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ss'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='phenom'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='3dnow'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='3dnowext'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='phenom-v1'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='3dnow'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='3dnowext'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    </mode>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:  </cpu>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:  <memoryBacking supported='yes'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    <enum name='sourceType'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <value>file</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <value>anonymous</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <value>memfd</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    </enum>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:  </memoryBacking>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:  <devices>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    <disk supported='yes'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <enum name='diskDevice'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>disk</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>cdrom</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>floppy</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>lun</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </enum>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <enum name='bus'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>fdc</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>scsi</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>virtio</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>usb</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>sata</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </enum>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <enum name='model'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>virtio</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>virtio-transitional</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>virtio-non-transitional</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </enum>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    </disk>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    <graphics supported='yes'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <enum name='type'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>vnc</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>egl-headless</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>dbus</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </enum>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    </graphics>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    <video supported='yes'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <enum name='modelType'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>vga</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>cirrus</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>virtio</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>none</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>bochs</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>ramfb</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </enum>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    </video>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    <hostdev supported='yes'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <enum name='mode'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>subsystem</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </enum>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <enum name='startupPolicy'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>default</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>mandatory</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>requisite</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>optional</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </enum>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <enum name='subsysType'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>usb</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>pci</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>scsi</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </enum>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <enum name='capsType'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <enum name='pciBackend'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    </hostdev>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    <rng supported='yes'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <enum name='model'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>virtio</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>virtio-transitional</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>virtio-non-transitional</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </enum>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <enum name='backendModel'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>random</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>egd</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>builtin</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </enum>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    </rng>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    <filesystem supported='yes'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <enum name='driverType'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>path</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>handle</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>virtiofs</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </enum>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    </filesystem>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    <tpm supported='yes'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <enum name='model'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>tpm-tis</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>tpm-crb</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </enum>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <enum name='backendModel'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>emulator</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>external</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </enum>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <enum name='backendVersion'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>2.0</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </enum>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    </tpm>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    <redirdev supported='yes'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <enum name='bus'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>usb</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </enum>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    </redirdev>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    <channel supported='yes'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <enum name='type'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>pty</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>unix</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </enum>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    </channel>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    <crypto supported='yes'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <enum name='model'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <enum name='type'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>qemu</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </enum>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <enum name='backendModel'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>builtin</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </enum>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    </crypto>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    <interface supported='yes'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <enum name='backendType'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>default</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>passt</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </enum>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    </interface>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    <panic supported='yes'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <enum name='model'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>isa</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>hyperv</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </enum>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    </panic>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    <console supported='yes'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <enum name='type'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>null</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>vc</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>pty</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>dev</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>file</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>pipe</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>stdio</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>udp</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>tcp</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>unix</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>qemu-vdagent</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>dbus</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </enum>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    </console>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:  </devices>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:  <features>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    <gic supported='no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    <vmcoreinfo supported='yes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    <genid supported='yes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    <backingStoreInput supported='yes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    <backup supported='yes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    <async-teardown supported='yes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    <s390-pv supported='no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    <ps2 supported='yes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    <tdx supported='no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    <sev supported='no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    <sgx supported='no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    <hyperv supported='yes'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <enum name='features'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>relaxed</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>vapic</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>spinlocks</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>vpindex</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>runtime</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>synic</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>stimer</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>reset</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>vendor_id</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>frequencies</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>reenlightenment</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>tlbflush</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>ipi</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>avic</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>emsr_bitmap</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>xmm_input</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </enum>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <defaults>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <spinlocks>4095</spinlocks>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <stimer_direct>on</stimer_direct>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <tlbflush_direct>on</tlbflush_direct>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <tlbflush_extended>on</tlbflush_extended>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <vendor_id>Linux KVM Hv</vendor_id>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </defaults>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    </hyperv>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    <launchSecurity supported='no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:  </features>
Feb  2 04:58:02 np0005604790 nova_compute[252672]: </domainCapabilities>
Feb  2 04:58:02 np0005604790 nova_compute[252672]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.174 252676 DEBUG nova.virt.libvirt.host [None req-e8d4f08b-73c0-49f9-aab1-c10f2abef40e - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Feb  2 04:58:02 np0005604790 nova_compute[252672]: 2026-02-02 09:58:02.179 252676 DEBUG nova.virt.libvirt.host [None req-e8d4f08b-73c0-49f9-aab1-c10f2abef40e - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Feb  2 04:58:02 np0005604790 nova_compute[252672]: <domainCapabilities>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:  <path>/usr/libexec/qemu-kvm</path>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:  <domain>kvm</domain>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:  <machine>pc-i440fx-rhel7.6.0</machine>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:  <arch>x86_64</arch>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:  <vcpu max='240'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:  <iothreads supported='yes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:  <os supported='yes'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    <enum name='firmware'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    <loader supported='yes'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <enum name='type'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>rom</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>pflash</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </enum>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <enum name='readonly'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>yes</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>no</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </enum>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <enum name='secure'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>no</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </enum>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    </loader>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:  </os>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:  <cpu>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    <mode name='host-passthrough' supported='yes'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <enum name='hostPassthroughMigratable'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>on</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>off</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </enum>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    </mode>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    <mode name='maximum' supported='yes'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <enum name='maximumMigratable'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>on</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <value>off</value>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </enum>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    </mode>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    <mode name='host-model' supported='yes'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model fallback='forbid'>EPYC-Rome</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <vendor>AMD</vendor>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <maxphysaddr mode='passthrough' limit='40'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <feature policy='require' name='x2apic'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <feature policy='require' name='tsc-deadline'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <feature policy='require' name='hypervisor'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <feature policy='require' name='tsc_adjust'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <feature policy='require' name='spec-ctrl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <feature policy='require' name='stibp'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <feature policy='require' name='ssbd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <feature policy='require' name='cmp_legacy'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <feature policy='require' name='overflow-recov'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <feature policy='require' name='succor'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <feature policy='require' name='ibrs'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <feature policy='require' name='amd-ssbd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <feature policy='require' name='virt-ssbd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <feature policy='require' name='lbrv'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <feature policy='require' name='tsc-scale'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <feature policy='require' name='vmcb-clean'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <feature policy='require' name='flushbyasid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <feature policy='require' name='pause-filter'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <feature policy='require' name='pfthreshold'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <feature policy='require' name='svme-addr-chk'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <feature policy='require' name='lfence-always-serializing'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <feature policy='disable' name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    </mode>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:    <mode name='custom' supported='yes'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Broadwell'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='hle'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rtm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Broadwell-IBRS'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='hle'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rtm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Broadwell-noTSX'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Broadwell-noTSX-IBRS'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Broadwell-v1'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='hle'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rtm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Broadwell-v2'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Broadwell-v3'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='hle'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rtm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Broadwell-v4'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Cascadelake-Server'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='hle'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rtm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Cascadelake-Server-noTSX'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ibrs-all'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Cascadelake-Server-v1'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='hle'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rtm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Cascadelake-Server-v2'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='hle'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ibrs-all'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rtm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Cascadelake-Server-v3'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ibrs-all'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Cascadelake-Server-v4'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ibrs-all'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Cascadelake-Server-v5'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ibrs-all'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='ClearwaterForest'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx-ifma'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx-ne-convert'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx-vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx-vnni-int16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx-vnni-int8'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='bhi-ctrl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='bhi-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='bus-lock-detect'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='cldemote'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='cmpccxadd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ddpd-u'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fbsdp-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrs'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='gfni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ibrs-all'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='intel-psfd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ipred-ctrl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='lam'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='mcdt-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='movdir64b'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='movdiri'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pbrsb-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='prefetchiti'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='psdp-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rrsba-ctrl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='sbdr-ssdp-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='serialize'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='sha512'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='sm3'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='sm4'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ss'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vaes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vpclmulqdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='ClearwaterForest-v1'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx-ifma'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx-ne-convert'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx-vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx-vnni-int16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx-vnni-int8'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='bhi-ctrl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='bhi-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='bus-lock-detect'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='cldemote'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='cmpccxadd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ddpd-u'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fbsdp-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrs'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='gfni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ibrs-all'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='intel-psfd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ipred-ctrl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='lam'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='mcdt-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='movdir64b'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='movdiri'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pbrsb-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='prefetchiti'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='psdp-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rrsba-ctrl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='sbdr-ssdp-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='serialize'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='sha512'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='sm3'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='sm4'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ss'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vaes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vpclmulqdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Cooperlake'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-bf16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='hle'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ibrs-all'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rtm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='taa-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Cooperlake-v1'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-bf16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='hle'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ibrs-all'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rtm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='taa-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Cooperlake-v2'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-bf16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='hle'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ibrs-all'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rtm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='taa-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Denverton'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='mpx'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Denverton-v1'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='mpx'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Denverton-v2'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Denverton-v3'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Dhyana-v2'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='EPYC-Genoa'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amd-psfd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='auto-ibrs'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-bf16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bitalg'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512ifma'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi2'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='gfni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='la57'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='no-nested-data-bp'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='null-sel-clr-base'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='stibp-always-on'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vaes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vpclmulqdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='EPYC-Genoa-v1'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amd-psfd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='auto-ibrs'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-bf16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bitalg'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512ifma'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi2'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='gfni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='la57'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='no-nested-data-bp'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='null-sel-clr-base'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='stibp-always-on'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vaes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vpclmulqdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='EPYC-Genoa-v2'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amd-psfd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='auto-ibrs'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-bf16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bitalg'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512ifma'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi2'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fs-gs-base-ns'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='gfni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='la57'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='no-nested-data-bp'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='null-sel-clr-base'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='perfmon-v2'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='stibp-always-on'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vaes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vpclmulqdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='EPYC-Milan'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='EPYC-Milan-v1'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='EPYC-Milan-v2'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amd-psfd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='no-nested-data-bp'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='null-sel-clr-base'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='stibp-always-on'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vaes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vpclmulqdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='EPYC-Milan-v3'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amd-psfd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='no-nested-data-bp'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='null-sel-clr-base'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='stibp-always-on'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vaes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vpclmulqdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='EPYC-Rome'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='EPYC-Rome-v1'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='EPYC-Rome-v2'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='EPYC-Rome-v3'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='EPYC-Turin'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amd-psfd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='auto-ibrs'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx-vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-bf16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-vp2intersect'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bitalg'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512ifma'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi2'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fs-gs-base-ns'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='gfni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ibpb-brtype'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='la57'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='movdir64b'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='movdiri'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='no-nested-data-bp'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='null-sel-clr-base'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='perfmon-v2'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='prefetchi'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='sbpb'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='srso-user-kernel-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='stibp-always-on'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vaes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vpclmulqdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='EPYC-Turin-v1'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amd-psfd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='auto-ibrs'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx-vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-bf16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-vp2intersect'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bitalg'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512ifma'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi2'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fs-gs-base-ns'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='gfni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ibpb-brtype'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='la57'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='movdir64b'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='movdiri'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='no-nested-data-bp'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='null-sel-clr-base'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='perfmon-v2'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='prefetchi'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='sbpb'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='srso-user-kernel-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='stibp-always-on'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vaes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vpclmulqdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='EPYC-v3'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='EPYC-v4'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='EPYC-v5'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='GraniteRapids'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amx-bf16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amx-fp16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amx-int8'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amx-tile'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx-vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-bf16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-fp16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bitalg'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512ifma'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi2'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='bus-lock-detect'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fbsdp-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrc'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrs'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fzrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='gfni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='hle'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ibrs-all'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='la57'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='mcdt-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pbrsb-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='prefetchiti'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='psdp-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rtm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='sbdr-ssdp-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='serialize'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='taa-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='tsx-ldtrk'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vaes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vpclmulqdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xfd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='GraniteRapids-v1'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amx-bf16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amx-fp16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amx-int8'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amx-tile'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx-vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-bf16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-fp16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bitalg'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512ifma'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi2'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='bus-lock-detect'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fbsdp-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrc'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrs'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fzrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='gfni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='hle'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ibrs-all'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='la57'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='mcdt-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pbrsb-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='prefetchiti'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='psdp-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rtm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='sbdr-ssdp-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='serialize'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='taa-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='tsx-ldtrk'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vaes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vpclmulqdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xfd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='GraniteRapids-v2'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amx-bf16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amx-fp16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amx-int8'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amx-tile'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx-vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx10'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx10-128'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx10-256'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx10-512'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-bf16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-fp16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bitalg'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512ifma'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi2'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='bus-lock-detect'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='cldemote'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fbsdp-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrc'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrs'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fzrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='gfni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='hle'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ibrs-all'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='la57'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='mcdt-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='movdir64b'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='movdiri'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pbrsb-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='prefetchiti'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='psdp-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rtm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='sbdr-ssdp-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='serialize'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ss'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='taa-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='tsx-ldtrk'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vaes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vpclmulqdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xfd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='GraniteRapids-v3'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amx-bf16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amx-fp16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amx-int8'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amx-tile'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx-vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx10'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx10-128'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx10-256'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx10-512'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-bf16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-fp16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bitalg'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512ifma'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi2'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='bus-lock-detect'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='cldemote'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fbsdp-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrc'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrs'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fzrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='gfni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='hle'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ibrs-all'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='la57'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='mcdt-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='movdir64b'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='movdiri'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pbrsb-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='prefetchiti'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='psdp-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rtm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='sbdr-ssdp-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='serialize'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ss'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='taa-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='tsx-ldtrk'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vaes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vpclmulqdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xfd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Haswell'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='hle'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rtm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Haswell-IBRS'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='hle'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rtm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Haswell-noTSX'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Haswell-noTSX-IBRS'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Haswell-v1'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='hle'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rtm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Haswell-v2'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Haswell-v3'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='hle'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rtm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Haswell-v4'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Icelake-Server'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bitalg'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi2'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='gfni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='hle'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='la57'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rtm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vaes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vpclmulqdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Icelake-Server-noTSX'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bitalg'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi2'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='gfni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='la57'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vaes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vpclmulqdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Icelake-Server-v1'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bitalg'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi2'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='gfni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='hle'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='la57'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rtm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vaes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vpclmulqdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Icelake-Server-v2'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bitalg'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi2'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='gfni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='la57'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vaes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vpclmulqdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Icelake-Server-v3'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bitalg'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi2'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='gfni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ibrs-all'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='la57'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='taa-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vaes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vpclmulqdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Icelake-Server-v4'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bitalg'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512ifma'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi2'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='gfni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ibrs-all'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='la57'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='taa-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vaes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vpclmulqdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Icelake-Server-v5'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bitalg'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512ifma'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi2'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='gfni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ibrs-all'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='la57'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='taa-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vaes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vpclmulqdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Icelake-Server-v6'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bitalg'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512ifma'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi2'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='gfni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ibrs-all'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='la57'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='taa-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vaes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vpclmulqdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Icelake-Server-v7'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bitalg'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512ifma'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi2'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='gfni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='hle'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ibrs-all'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='la57'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rtm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='taa-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vaes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vpclmulqdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='IvyBridge'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='IvyBridge-IBRS'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='IvyBridge-v1'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='IvyBridge-v2'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='KnightsMill'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-4fmaps'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-4vnniw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512er'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512pf'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ss'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='KnightsMill-v1'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-4fmaps'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-4vnniw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512er'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512pf'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ss'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Opteron_G4'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fma4'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xop'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Opteron_G4-v1'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fma4'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xop'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Opteron_G5'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fma4'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='tbm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xop'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='Opteron_G5-v1'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fma4'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='tbm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xop'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='SapphireRapids'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amx-bf16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amx-int8'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amx-tile'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx-vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-bf16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-fp16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bitalg'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512ifma'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi2'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='bus-lock-detect'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrc'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrs'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fzrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='gfni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='hle'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ibrs-all'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='la57'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rtm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='serialize'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='taa-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='tsx-ldtrk'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vaes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vpclmulqdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xfd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='SapphireRapids-v1'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amx-bf16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amx-int8'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amx-tile'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx-vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-bf16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-fp16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bitalg'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512ifma'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi2'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='bus-lock-detect'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrc'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fsrs'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='fzrm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='gfni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='hle'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='ibrs-all'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='invpcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='la57'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pcid'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='pku'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='rtm'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='serialize'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='taa-no'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='tsx-ldtrk'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vaes'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='vpclmulqdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xfd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='xsaves'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      </blockers>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:      <blockers model='SapphireRapids-v2'>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amx-bf16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amx-int8'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='amx-tile'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx-vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-bf16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-fp16'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512-vpopcntdq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bitalg'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512bw'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512cd'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512dq'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512f'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512ifma'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vbmi2'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vl'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='avx512vnni'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='bus-lock-detect'/>
Feb  2 04:58:02 np0005604790 nova_compute[252672]:        <feature name='erms'/>
Feb  2 05:03:29 np0005604790 nova_compute[252672]: 2026-02-02 10:03:29.548 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:03:29 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 10:03:29 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f20a0004970 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:03:29 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 10:03:29 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f20a0004970 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:03:29 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:03:29 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:03:29 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:03:29.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:03:30 np0005604790 rsyslogd[1005]: imjournal: 7085 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Feb  2 05:03:30 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:03:30 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:03:30 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:03:30.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:03:30 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v723: 353 pgs: 353 active+clean; 167 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Feb  2 05:03:30 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 10:03:30 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f20a800b0d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:03:31 np0005604790 nova_compute[252672]: 2026-02-02 10:03:31.599 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:03:31 np0005604790 nova_compute[252672]: 2026-02-02 10:03:31.611 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:03:31 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 10:03:31 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f209c0047b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:03:31 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 10:03:31 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f209c0047b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:03:31 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:03:31 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:03:31 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:03:31.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:03:32 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 05:03:32 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 05:03:32 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:03:32 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:03:32 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:03:32.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:03:32 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:03:32 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v724: 353 pgs: 353 active+clean; 167 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Feb  2 05:03:32 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 10:03:32 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f20a0004970 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:03:33 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 10:03:33 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f20a800b0d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:03:33 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 10:03:33 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f209c0047b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:03:33 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:03:33 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:03:33 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:03:33.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:03:34 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:03:34 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:03:34 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:03:34.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:03:34 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v725: 353 pgs: 353 active+clean; 167 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 74 op/s
Feb  2 05:03:34 np0005604790 nova_compute[252672]: 2026-02-02 10:03:34.589 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:03:34 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 10:03:34 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2074004a10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:03:34 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:03:34] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Feb  2 05:03:34 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:03:34] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Feb  2 05:03:35 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 10:03:35 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f20a0004970 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:03:35 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 10:03:35 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2078002e30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:03:35 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:03:35 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:03:35 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:03:35.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:03:36 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:03:36 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:03:36 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:03:36.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:03:36 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v726: 353 pgs: 353 active+clean; 167 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1023 B/s wr, 69 op/s
Feb  2 05:03:36 np0005604790 nova_compute[252672]: 2026-02-02 10:03:36.612 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:03:36 np0005604790 kernel: ganesha.nfsd[257021]: segfault at 50 ip 00007f212ace632e sp 00007f20b2ffc210 error 4 in libntirpc.so.5.8[7f212accb000+2c000] likely on CPU 3 (core 0, socket 3)
Feb  2 05:03:36 np0005604790 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Feb  2 05:03:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[234598]: 02/02/2026 10:03:36 : epoch 698074b8 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f20a800b0d0 fd 39 proxy ignored for local
Feb  2 05:03:36 np0005604790 systemd[1]: Started Process Core Dump (PID 258625/UID 0).
Feb  2 05:03:37 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:03:37.101Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:03:37 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:03:37 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:03:37 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:03:37 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:03:37.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:03:38 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:03:38 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:03:38 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:03:38.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:03:38 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v727: 353 pgs: 353 active+clean; 200 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 134 op/s
Feb  2 05:03:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-nfs-cephfs-compute-0-ooxkuo[97796]: [WARNING] 032/100338 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Feb  2 05:03:39 np0005604790 systemd-coredump[258626]: Process 234639 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 75:#012#0  0x00007f212ace632e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Feb  2 05:03:39 np0005604790 systemd[1]: systemd-coredump@9-258625-0.service: Deactivated successfully.
Feb  2 05:03:39 np0005604790 podman[258658]: 2026-02-02 10:03:39.300286546 +0000 UTC m=+0.027680764 container died 587c17f4f1c5b287f0ff9440e588fe352fd975d6a0d71a6cc630ef29690c2453 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 05:03:39 np0005604790 systemd[1]: var-lib-containers-storage-overlay-5fca62032290309c8223854e77c9a339381b88ee9901a6a74fc3b5ad9f2bcb2a-merged.mount: Deactivated successfully.
Feb  2 05:03:39 np0005604790 podman[258658]: 2026-02-02 10:03:39.349931899 +0000 UTC m=+0.077326127 container remove 587c17f4f1c5b287f0ff9440e588fe352fd975d6a0d71a6cc630ef29690c2453 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 05:03:39 np0005604790 systemd[1]: ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1@nfs.cephfs.2.0.compute-0.fdwwab.service: Main process exited, code=exited, status=139/n/a
Feb  2 05:03:39 np0005604790 systemd[1]: ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1@nfs.cephfs.2.0.compute-0.fdwwab.service: Failed with result 'exit-code'.
Feb  2 05:03:39 np0005604790 systemd[1]: ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1@nfs.cephfs.2.0.compute-0.fdwwab.service: Consumed 1.994s CPU time.
Feb  2 05:03:39 np0005604790 nova_compute[252672]: 2026-02-02 10:03:39.629 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:03:39 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:03:39 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:03:39 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:03:39.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:03:40 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:03:40 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:03:40 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:03:40.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:03:40 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v728: 353 pgs: 353 active+clean; 200 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Feb  2 05:03:41 np0005604790 nova_compute[252672]: 2026-02-02 10:03:41.615 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:03:41 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:03:41 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:03:41 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:03:41.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:03:42 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:03:42 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:03:42 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:03:42.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:03:42 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:03:42 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v729: 353 pgs: 353 active+clean; 200 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Feb  2 05:03:43 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:03:43 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:03:43 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:03:43.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:03:44 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:03:44 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:03:44 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:03:44.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:03:44 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v730: 353 pgs: 353 active+clean; 200 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Feb  2 05:03:44 np0005604790 nova_compute[252672]: 2026-02-02 10:03:44.631 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:03:44 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-nfs-cephfs-compute-0-ooxkuo[97796]: [WARNING] 032/100344 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Feb  2 05:03:44 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:03:44] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Feb  2 05:03:44 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:03:44] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Feb  2 05:03:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:03:45.375 165364 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 05:03:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:03:45.376 165364 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 05:03:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:03:45.377 165364 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 05:03:45 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:03:45 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:03:45 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:03:45.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:03:46 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:03:46 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:03:46 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:03:46.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:03:46 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v731: 353 pgs: 353 active+clean; 200 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Feb  2 05:03:46 np0005604790 nova_compute[252672]: 2026-02-02 10:03:46.617 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:03:47 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:03:47.103Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:03:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 05:03:47 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 05:03:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:03:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:03:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:03:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:03:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:03:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:03:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:03:47 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:03:47 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:03:47 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:03:47.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:03:48 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:03:48 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:03:48 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:03:48.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:03:48 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v732: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 346 KiB/s rd, 2.1 MiB/s wr, 95 op/s
Feb  2 05:03:49 np0005604790 nova_compute[252672]: 2026-02-02 10:03:49.632 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:03:49 np0005604790 systemd[1]: ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1@nfs.cephfs.2.0.compute-0.fdwwab.service: Scheduled restart job, restart counter is at 10.
Feb  2 05:03:49 np0005604790 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.fdwwab for d241d473-9fcb-5f74-b163-f1ca4454e7f1.
Feb  2 05:03:49 np0005604790 systemd[1]: ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1@nfs.cephfs.2.0.compute-0.fdwwab.service: Consumed 1.994s CPU time.
Feb  2 05:03:49 np0005604790 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.fdwwab for d241d473-9fcb-5f74-b163-f1ca4454e7f1...
Feb  2 05:03:49 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:03:49 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:03:49 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:03:49.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:03:49 np0005604790 podman[258768]: 2026-02-02 10:03:49.984187112 +0000 UTC m=+0.059840078 container create 1c569cbf4b74162de6c410b203691b4cf42aecba100e806c72612685086acc60 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 05:03:50 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63ec35c1dc6a6ee8e6f1a26dcba05125a53afd12fe829746c09a5c6e5a4157fb/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Feb  2 05:03:50 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63ec35c1dc6a6ee8e6f1a26dcba05125a53afd12fe829746c09a5c6e5a4157fb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 05:03:50 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63ec35c1dc6a6ee8e6f1a26dcba05125a53afd12fe829746c09a5c6e5a4157fb/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 05:03:50 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63ec35c1dc6a6ee8e6f1a26dcba05125a53afd12fe829746c09a5c6e5a4157fb/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.fdwwab-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 05:03:50 np0005604790 podman[258768]: 2026-02-02 10:03:49.957866446 +0000 UTC m=+0.033519422 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:03:50 np0005604790 podman[258768]: 2026-02-02 10:03:50.063298627 +0000 UTC m=+0.138951633 container init 1c569cbf4b74162de6c410b203691b4cf42aecba100e806c72612685086acc60 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1)
Feb  2 05:03:50 np0005604790 podman[258768]: 2026-02-02 10:03:50.079541943 +0000 UTC m=+0.155194919 container start 1c569cbf4b74162de6c410b203691b4cf42aecba100e806c72612685086acc60 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 05:03:50 np0005604790 bash[258768]: 1c569cbf4b74162de6c410b203691b4cf42aecba100e806c72612685086acc60
Feb  2 05:03:50 np0005604790 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.fdwwab for d241d473-9fcb-5f74-b163-f1ca4454e7f1.
Feb  2 05:03:50 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[258783]: 02/02/2026 10:03:50 : epoch 69807686 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Feb  2 05:03:50 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[258783]: 02/02/2026 10:03:50 : epoch 69807686 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Feb  2 05:03:50 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[258783]: 02/02/2026 10:03:50 : epoch 69807686 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Feb  2 05:03:50 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[258783]: 02/02/2026 10:03:50 : epoch 69807686 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Feb  2 05:03:50 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[258783]: 02/02/2026 10:03:50 : epoch 69807686 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Feb  2 05:03:50 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[258783]: 02/02/2026 10:03:50 : epoch 69807686 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Feb  2 05:03:50 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[258783]: 02/02/2026 10:03:50 : epoch 69807686 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Feb  2 05:03:50 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[258783]: 02/02/2026 10:03:50 : epoch 69807686 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:03:50 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:03:50 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:03:50 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:03:50.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:03:50 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v733: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 16 KiB/s wr, 29 op/s
Feb  2 05:03:50 np0005604790 ovn_controller[154631]: 2026-02-02T10:03:50Z|00034|binding|INFO|Releasing lport 55246443-5941-490c-8eaa-13ee90fff1fa from this chassis (sb_readonly=0)
Feb  2 05:03:50 np0005604790 nova_compute[252672]: 2026-02-02 10:03:50.968 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:03:51 np0005604790 nova_compute[252672]: 2026-02-02 10:03:51.618 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:03:51 np0005604790 nova_compute[252672]: 2026-02-02 10:03:51.695 252676 DEBUG nova.compute.manager [req-e55aa974-2c31-42bc-8900-e10fd5a88729 req-c5e3a78f-2d43-49c9-92da-f89623075986 b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: e2df6534-c4fd-40a5-80d0-0fc8f93c2bdc] Received event network-changed-d75e8807-c1e9-4436-a9cd-81e1fa00d62f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 05:03:51 np0005604790 nova_compute[252672]: 2026-02-02 10:03:51.695 252676 DEBUG nova.compute.manager [req-e55aa974-2c31-42bc-8900-e10fd5a88729 req-c5e3a78f-2d43-49c9-92da-f89623075986 b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: e2df6534-c4fd-40a5-80d0-0fc8f93c2bdc] Refreshing instance network info cache due to event network-changed-d75e8807-c1e9-4436-a9cd-81e1fa00d62f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 05:03:51 np0005604790 nova_compute[252672]: 2026-02-02 10:03:51.696 252676 DEBUG oslo_concurrency.lockutils [req-e55aa974-2c31-42bc-8900-e10fd5a88729 req-c5e3a78f-2d43-49c9-92da-f89623075986 b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Acquiring lock "refresh_cache-e2df6534-c4fd-40a5-80d0-0fc8f93c2bdc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 05:03:51 np0005604790 nova_compute[252672]: 2026-02-02 10:03:51.696 252676 DEBUG oslo_concurrency.lockutils [req-e55aa974-2c31-42bc-8900-e10fd5a88729 req-c5e3a78f-2d43-49c9-92da-f89623075986 b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Acquired lock "refresh_cache-e2df6534-c4fd-40a5-80d0-0fc8f93c2bdc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 05:03:51 np0005604790 nova_compute[252672]: 2026-02-02 10:03:51.696 252676 DEBUG nova.network.neutron [req-e55aa974-2c31-42bc-8900-e10fd5a88729 req-c5e3a78f-2d43-49c9-92da-f89623075986 b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: e2df6534-c4fd-40a5-80d0-0fc8f93c2bdc] Refreshing network info cache for port d75e8807-c1e9-4436-a9cd-81e1fa00d62f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 05:03:51 np0005604790 nova_compute[252672]: 2026-02-02 10:03:51.807 252676 DEBUG oslo_concurrency.lockutils [None req-9c47ea61-c65d-4804-8f2f-c08b2a1b0e82 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Acquiring lock "e2df6534-c4fd-40a5-80d0-0fc8f93c2bdc" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 05:03:51 np0005604790 nova_compute[252672]: 2026-02-02 10:03:51.808 252676 DEBUG oslo_concurrency.lockutils [None req-9c47ea61-c65d-4804-8f2f-c08b2a1b0e82 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Lock "e2df6534-c4fd-40a5-80d0-0fc8f93c2bdc" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 05:03:51 np0005604790 nova_compute[252672]: 2026-02-02 10:03:51.809 252676 DEBUG oslo_concurrency.lockutils [None req-9c47ea61-c65d-4804-8f2f-c08b2a1b0e82 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Acquiring lock "e2df6534-c4fd-40a5-80d0-0fc8f93c2bdc-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 05:03:51 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:03:51 np0005604790 nova_compute[252672]: 2026-02-02 10:03:51.810 252676 DEBUG oslo_concurrency.lockutils [None req-9c47ea61-c65d-4804-8f2f-c08b2a1b0e82 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Lock "e2df6534-c4fd-40a5-80d0-0fc8f93c2bdc-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 05:03:51 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:03:51 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:03:51.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:03:51 np0005604790 nova_compute[252672]: 2026-02-02 10:03:51.810 252676 DEBUG oslo_concurrency.lockutils [None req-9c47ea61-c65d-4804-8f2f-c08b2a1b0e82 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Lock "e2df6534-c4fd-40a5-80d0-0fc8f93c2bdc-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 05:03:51 np0005604790 nova_compute[252672]: 2026-02-02 10:03:51.813 252676 INFO nova.compute.manager [None req-9c47ea61-c65d-4804-8f2f-c08b2a1b0e82 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: e2df6534-c4fd-40a5-80d0-0fc8f93c2bdc] Terminating instance#033[00m
Feb  2 05:03:51 np0005604790 nova_compute[252672]: 2026-02-02 10:03:51.815 252676 DEBUG nova.compute.manager [None req-9c47ea61-c65d-4804-8f2f-c08b2a1b0e82 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: e2df6534-c4fd-40a5-80d0-0fc8f93c2bdc] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb  2 05:03:51 np0005604790 kernel: tapd75e8807-c1 (unregistering): left promiscuous mode
Feb  2 05:03:51 np0005604790 NetworkManager[49024]: <info>  [1770026631.8774] device (tapd75e8807-c1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb  2 05:03:51 np0005604790 ovn_controller[154631]: 2026-02-02T10:03:51Z|00035|binding|INFO|Releasing lport d75e8807-c1e9-4436-a9cd-81e1fa00d62f from this chassis (sb_readonly=0)
Feb  2 05:03:51 np0005604790 ovn_controller[154631]: 2026-02-02T10:03:51Z|00036|binding|INFO|Setting lport d75e8807-c1e9-4436-a9cd-81e1fa00d62f down in Southbound
Feb  2 05:03:51 np0005604790 ovn_controller[154631]: 2026-02-02T10:03:51Z|00037|binding|INFO|Removing iface tapd75e8807-c1 ovn-installed in OVS
Feb  2 05:03:51 np0005604790 nova_compute[252672]: 2026-02-02 10:03:51.886 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:03:51 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:03:51.899 165364 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:39:7e 10.100.0.5'], port_security=['fa:16:3e:9d:39:7e 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'e2df6534-c4fd-40a5-80d0-0fc8f93c2bdc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-eb8bd8fb-92fd-4de7-b952-292440020c50', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'efbfe697ca674d72b47da5adf3e42c0c', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'baf7632c-7d82-457e-9682-75b5e4b39eef', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a9141eba-ba20-424e-a668-681533049857, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa8c6e46640>], logical_port=d75e8807-c1e9-4436-a9cd-81e1fa00d62f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fa8c6e46640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 05:03:51 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:03:51.901 165364 INFO neutron.agent.ovn.metadata.agent [-] Port d75e8807-c1e9-4436-a9cd-81e1fa00d62f in datapath eb8bd8fb-92fd-4de7-b952-292440020c50 unbound from our chassis#033[00m
Feb  2 05:03:51 np0005604790 nova_compute[252672]: 2026-02-02 10:03:51.902 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:03:51 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:03:51.905 165364 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network eb8bd8fb-92fd-4de7-b952-292440020c50, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb  2 05:03:51 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:03:51.907 257524 DEBUG oslo.privsep.daemon [-] privsep: reply[0364639a-b936-44b8-b6f3-a60bf6c5e523]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:03:51 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:03:51.908 165364 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-eb8bd8fb-92fd-4de7-b952-292440020c50 namespace which is not needed anymore#033[00m
Feb  2 05:03:51 np0005604790 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Deactivated successfully.
Feb  2 05:03:51 np0005604790 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Consumed 16.841s CPU time.
Feb  2 05:03:51 np0005604790 systemd-machined[219024]: Machine qemu-1-instance-00000001 terminated.
Feb  2 05:03:52 np0005604790 podman[258828]: 2026-02-02 10:03:52.001452615 +0000 UTC m=+0.103120570 container health_status e39122d9482da8df802204ab6c35fb7c982874580968140e6f06bdfc8eefae36 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb  2 05:03:52 np0005604790 nova_compute[252672]: 2026-02-02 10:03:52.042 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:03:52 np0005604790 nova_compute[252672]: 2026-02-02 10:03:52.047 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:03:52 np0005604790 nova_compute[252672]: 2026-02-02 10:03:52.058 252676 INFO nova.virt.libvirt.driver [-] [instance: e2df6534-c4fd-40a5-80d0-0fc8f93c2bdc] Instance destroyed successfully.#033[00m
Feb  2 05:03:52 np0005604790 nova_compute[252672]: 2026-02-02 10:03:52.059 252676 DEBUG nova.objects.instance [None req-9c47ea61-c65d-4804-8f2f-c08b2a1b0e82 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Lazy-loading 'resources' on Instance uuid e2df6534-c4fd-40a5-80d0-0fc8f93c2bdc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 05:03:52 np0005604790 neutron-haproxy-ovnmeta-eb8bd8fb-92fd-4de7-b952-292440020c50[257666]: [NOTICE]   (257670) : haproxy version is 2.8.14-c23fe91
Feb  2 05:03:52 np0005604790 neutron-haproxy-ovnmeta-eb8bd8fb-92fd-4de7-b952-292440020c50[257666]: [NOTICE]   (257670) : path to executable is /usr/sbin/haproxy
Feb  2 05:03:52 np0005604790 neutron-haproxy-ovnmeta-eb8bd8fb-92fd-4de7-b952-292440020c50[257666]: [WARNING]  (257670) : Exiting Master process...
Feb  2 05:03:52 np0005604790 neutron-haproxy-ovnmeta-eb8bd8fb-92fd-4de7-b952-292440020c50[257666]: [ALERT]    (257670) : Current worker (257672) exited with code 143 (Terminated)
Feb  2 05:03:52 np0005604790 neutron-haproxy-ovnmeta-eb8bd8fb-92fd-4de7-b952-292440020c50[257666]: [WARNING]  (257670) : All workers exited. Exiting... (0)
Feb  2 05:03:52 np0005604790 systemd[1]: libpod-1c02694760691d48aac8b6b1e4787c5c9caeb146b35bc9cc9160accf74fc73a7.scope: Deactivated successfully.
Feb  2 05:03:52 np0005604790 podman[258876]: 2026-02-02 10:03:52.071583448 +0000 UTC m=+0.057988708 container died 1c02694760691d48aac8b6b1e4787c5c9caeb146b35bc9cc9160accf74fc73a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc, name=neutron-haproxy-ovnmeta-eb8bd8fb-92fd-4de7-b952-292440020c50, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 05:03:52 np0005604790 nova_compute[252672]: 2026-02-02 10:03:52.074 252676 DEBUG nova.virt.libvirt.vif [None req-9c47ea61-c65d-4804-8f2f-c08b2a1b0e82 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T10:02:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-191604576',display_name='tempest-TestNetworkBasicOps-server-191604576',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-191604576',id=1,image_ref='d5e062d7-95ef-409c-9ad0-60f7cf6f44ce',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOSHONiU0WnnLshqVd1wONqJYYI51OKun3wrB9aSveRrz1nNE3R8THWmAIkbGNyVJ/TbeNmQXuiltG2m8+Vfm6PXPo8aHyJNbEYEAbv5cxtQn1lPOXMM6EKXYHc3DrQf2g==',key_name='tempest-TestNetworkBasicOps-1299746369',keypairs=<?>,launch_index=0,launched_at=2026-02-02T10:02:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='efbfe697ca674d72b47da5adf3e42c0c',ramdisk_id='',reservation_id='r-06lz8kq0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='d5e062d7-95ef-409c-9ad0-60f7cf6f44ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-793549693',owner_user_name='tempest-TestNetworkBasicOps-793549693-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T10:02:39Z,user_data=None,user_id='1b1695a2a70d4aa0aa350ba17d8f6d5e',uuid=e2df6534-c4fd-40a5-80d0-0fc8f93c2bdc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d75e8807-c1e9-4436-a9cd-81e1fa00d62f", "address": "fa:16:3e:9d:39:7e", "network": {"id": "eb8bd8fb-92fd-4de7-b952-292440020c50", "bridge": "br-int", "label": "tempest-network-smoke--1133993584", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "efbfe697ca674d72b47da5adf3e42c0c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd75e8807-c1", "ovs_interfaceid": "d75e8807-c1e9-4436-a9cd-81e1fa00d62f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb  2 05:03:52 np0005604790 nova_compute[252672]: 2026-02-02 10:03:52.074 252676 DEBUG nova.network.os_vif_util [None req-9c47ea61-c65d-4804-8f2f-c08b2a1b0e82 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Converting VIF {"id": "d75e8807-c1e9-4436-a9cd-81e1fa00d62f", "address": "fa:16:3e:9d:39:7e", "network": {"id": "eb8bd8fb-92fd-4de7-b952-292440020c50", "bridge": "br-int", "label": "tempest-network-smoke--1133993584", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "efbfe697ca674d72b47da5adf3e42c0c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd75e8807-c1", "ovs_interfaceid": "d75e8807-c1e9-4436-a9cd-81e1fa00d62f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 05:03:52 np0005604790 nova_compute[252672]: 2026-02-02 10:03:52.075 252676 DEBUG nova.network.os_vif_util [None req-9c47ea61-c65d-4804-8f2f-c08b2a1b0e82 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9d:39:7e,bridge_name='br-int',has_traffic_filtering=True,id=d75e8807-c1e9-4436-a9cd-81e1fa00d62f,network=Network(eb8bd8fb-92fd-4de7-b952-292440020c50),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd75e8807-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 05:03:52 np0005604790 nova_compute[252672]: 2026-02-02 10:03:52.075 252676 DEBUG os_vif [None req-9c47ea61-c65d-4804-8f2f-c08b2a1b0e82 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9d:39:7e,bridge_name='br-int',has_traffic_filtering=True,id=d75e8807-c1e9-4436-a9cd-81e1fa00d62f,network=Network(eb8bd8fb-92fd-4de7-b952-292440020c50),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd75e8807-c1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb  2 05:03:52 np0005604790 nova_compute[252672]: 2026-02-02 10:03:52.078 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:03:52 np0005604790 nova_compute[252672]: 2026-02-02 10:03:52.078 252676 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd75e8807-c1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 05:03:52 np0005604790 nova_compute[252672]: 2026-02-02 10:03:52.080 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:03:52 np0005604790 nova_compute[252672]: 2026-02-02 10:03:52.083 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 05:03:52 np0005604790 nova_compute[252672]: 2026-02-02 10:03:52.085 252676 INFO os_vif [None req-9c47ea61-c65d-4804-8f2f-c08b2a1b0e82 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9d:39:7e,bridge_name='br-int',has_traffic_filtering=True,id=d75e8807-c1e9-4436-a9cd-81e1fa00d62f,network=Network(eb8bd8fb-92fd-4de7-b952-292440020c50),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd75e8807-c1')#033[00m
Feb  2 05:03:52 np0005604790 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1c02694760691d48aac8b6b1e4787c5c9caeb146b35bc9cc9160accf74fc73a7-userdata-shm.mount: Deactivated successfully.
Feb  2 05:03:52 np0005604790 systemd[1]: var-lib-containers-storage-overlay-78b3697034207283c923eb701954d7c0ed5190ac0c45401b352f295dd009fce1-merged.mount: Deactivated successfully.
Feb  2 05:03:52 np0005604790 podman[258876]: 2026-02-02 10:03:52.11930915 +0000 UTC m=+0.105714380 container cleanup 1c02694760691d48aac8b6b1e4787c5c9caeb146b35bc9cc9160accf74fc73a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc, name=neutron-haproxy-ovnmeta-eb8bd8fb-92fd-4de7-b952-292440020c50, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 05:03:52 np0005604790 systemd[1]: libpod-conmon-1c02694760691d48aac8b6b1e4787c5c9caeb146b35bc9cc9160accf74fc73a7.scope: Deactivated successfully.
Feb  2 05:03:52 np0005604790 nova_compute[252672]: 2026-02-02 10:03:52.176 252676 DEBUG nova.compute.manager [req-ad3003bf-4769-471d-89d8-31c31f2224da req-b7864e90-a920-4418-b751-43ff20f3f712 b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: e2df6534-c4fd-40a5-80d0-0fc8f93c2bdc] Received event network-vif-unplugged-d75e8807-c1e9-4436-a9cd-81e1fa00d62f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 05:03:52 np0005604790 nova_compute[252672]: 2026-02-02 10:03:52.177 252676 DEBUG oslo_concurrency.lockutils [req-ad3003bf-4769-471d-89d8-31c31f2224da req-b7864e90-a920-4418-b751-43ff20f3f712 b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Acquiring lock "e2df6534-c4fd-40a5-80d0-0fc8f93c2bdc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 05:03:52 np0005604790 nova_compute[252672]: 2026-02-02 10:03:52.177 252676 DEBUG oslo_concurrency.lockutils [req-ad3003bf-4769-471d-89d8-31c31f2224da req-b7864e90-a920-4418-b751-43ff20f3f712 b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Lock "e2df6534-c4fd-40a5-80d0-0fc8f93c2bdc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 05:03:52 np0005604790 nova_compute[252672]: 2026-02-02 10:03:52.177 252676 DEBUG oslo_concurrency.lockutils [req-ad3003bf-4769-471d-89d8-31c31f2224da req-b7864e90-a920-4418-b751-43ff20f3f712 b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Lock "e2df6534-c4fd-40a5-80d0-0fc8f93c2bdc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 05:03:52 np0005604790 nova_compute[252672]: 2026-02-02 10:03:52.178 252676 DEBUG nova.compute.manager [req-ad3003bf-4769-471d-89d8-31c31f2224da req-b7864e90-a920-4418-b751-43ff20f3f712 b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: e2df6534-c4fd-40a5-80d0-0fc8f93c2bdc] No waiting events found dispatching network-vif-unplugged-d75e8807-c1e9-4436-a9cd-81e1fa00d62f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 05:03:52 np0005604790 nova_compute[252672]: 2026-02-02 10:03:52.178 252676 DEBUG nova.compute.manager [req-ad3003bf-4769-471d-89d8-31c31f2224da req-b7864e90-a920-4418-b751-43ff20f3f712 b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: e2df6534-c4fd-40a5-80d0-0fc8f93c2bdc] Received event network-vif-unplugged-d75e8807-c1e9-4436-a9cd-81e1fa00d62f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb  2 05:03:52 np0005604790 podman[258935]: 2026-02-02 10:03:52.191419806 +0000 UTC m=+0.049892080 container remove 1c02694760691d48aac8b6b1e4787c5c9caeb146b35bc9cc9160accf74fc73a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc, name=neutron-haproxy-ovnmeta-eb8bd8fb-92fd-4de7-b952-292440020c50, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb  2 05:03:52 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:03:52.195 257524 DEBUG oslo.privsep.daemon [-] privsep: reply[074c6dd2-78fb-49a8-978c-3adb46a817ff]: (4, ('Mon Feb  2 10:03:52 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-eb8bd8fb-92fd-4de7-b952-292440020c50 (1c02694760691d48aac8b6b1e4787c5c9caeb146b35bc9cc9160accf74fc73a7)\n1c02694760691d48aac8b6b1e4787c5c9caeb146b35bc9cc9160accf74fc73a7\nMon Feb  2 10:03:52 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-eb8bd8fb-92fd-4de7-b952-292440020c50 (1c02694760691d48aac8b6b1e4787c5c9caeb146b35bc9cc9160accf74fc73a7)\n1c02694760691d48aac8b6b1e4787c5c9caeb146b35bc9cc9160accf74fc73a7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:03:52 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:03:52.198 257524 DEBUG oslo.privsep.daemon [-] privsep: reply[e2efde40-abfa-452a-9b3c-38f69f62d43a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:03:52 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:03:52.199 165364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeb8bd8fb-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 05:03:52 np0005604790 kernel: tapeb8bd8fb-90: left promiscuous mode
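The DelPortCommand transaction above removes the metadata tap from OVS, and the kernel line confirms the device dropping out of promiscuous mode. For reference, the same operation from the shell, sketched via subprocess (the agent itself goes through ovsdbapp's OVSDB IDL, not the CLI):

    import subprocess

    # CLI equivalent of DelPortCommand(port=..., if_exists=True) as logged
    # above; --if-exists makes the delete a no-op if the port is already gone.
    subprocess.run(['ovs-vsctl', '--if-exists', 'del-port', 'tapeb8bd8fb-90'],
                   check=True)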
Feb  2 05:03:52 np0005604790 nova_compute[252672]: 2026-02-02 10:03:52.202 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:03:52 np0005604790 nova_compute[252672]: 2026-02-02 10:03:52.207 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:03:52 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:03:52.210 257524 DEBUG oslo.privsep.daemon [-] privsep: reply[87e1a0dc-b3ca-4e41-bcf3-1112bbca4820]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:03:52 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:03:52.229 257524 DEBUG oslo.privsep.daemon [-] privsep: reply[78d1c6ae-31ec-46d1-a37e-1b5d126c07d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:03:52 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:03:52.230 257524 DEBUG oslo.privsep.daemon [-] privsep: reply[7cf9f0d5-ce57-4aa2-ac4e-59c4d83b423d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:03:52 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:03:52 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:03:52 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:03:52.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
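The beast access-log lines repeat a fixed layout: request pointer, client address, user, timestamp, request line, status, byte count, latency. A rough parser fitted to the samples in this log (the regex is an assumption matched to these lines, not a documented radosgw format guarantee):

    import re

    BEAST_RE = re.compile(
        r'beast: (?P<ptr>0x[0-9a-f]+): (?P<client>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<request>[^"]+)" (?P<status>\d+) '
        r'(?P<bytes>\d+) .*latency=(?P<latency>[0-9.]+)s'
    )

    line = ('beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous '
            '[02/Feb/2026:10:03:52.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.000000000s')
    m = BEAST_RE.search(line)
    print(m.group('client'), m.group('status'), m.group('latency'))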
Feb  2 05:03:52 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:03:52.246 257524 DEBUG oslo.privsep.daemon [-] privsep: reply[ac285189-ed3a-45bf-a46a-566412d9e948]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 372122, 'reachable_time': 29108, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 258953, 'error': None, 'target': 'ovnmeta-eb8bd8fb-92fd-4de7-b952-292440020c50', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:03:52 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:03:52.253 166028 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-eb8bd8fb-92fd-4de7-b952-292440020c50 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb  2 05:03:52 np0005604790 systemd[1]: run-netns-ovnmeta\x2deb8bd8fb\x2d92fd\x2d4de7\x2db952\x2d292440020c50.mount: Deactivated successfully.
Feb  2 05:03:52 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:03:52.254 166028 DEBUG oslo.privsep.daemon [-] privsep: reply[75a87b09-7329-4f50-b429-a15f444262a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
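With the haproxy sidecar gone and the tap port deleted, the agent's privsep daemon removes the ovnmeta- namespace itself (neutron's ip_lib does this through pyroute2 inside the privileged process). An out-of-band equivalent with plain iproute2, for illustration only:

    import subprocess

    # Same effect as the privileged remove_netns() call logged above.
    ns = 'ovnmeta-eb8bd8fb-92fd-4de7-b952-292440020c50'
    subprocess.run(['ip', 'netns', 'delete', ns], check=True)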
Feb  2 05:03:52 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:03:52 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v734: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 16 KiB/s wr, 29 op/s
Feb  2 05:03:52 np0005604790 nova_compute[252672]: 2026-02-02 10:03:52.665 252676 INFO nova.virt.libvirt.driver [None req-9c47ea61-c65d-4804-8f2f-c08b2a1b0e82 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: e2df6534-c4fd-40a5-80d0-0fc8f93c2bdc] Deleting instance files /var/lib/nova/instances/e2df6534-c4fd-40a5-80d0-0fc8f93c2bdc_del#033[00m
Feb  2 05:03:52 np0005604790 nova_compute[252672]: 2026-02-02 10:03:52.667 252676 INFO nova.virt.libvirt.driver [None req-9c47ea61-c65d-4804-8f2f-c08b2a1b0e82 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: e2df6534-c4fd-40a5-80d0-0fc8f93c2bdc] Deletion of /var/lib/nova/instances/e2df6534-c4fd-40a5-80d0-0fc8f93c2bdc_del complete#033[00m
Feb  2 05:03:52 np0005604790 nova_compute[252672]: 2026-02-02 10:03:52.746 252676 DEBUG nova.virt.libvirt.host [None req-9c47ea61-c65d-4804-8f2f-c08b2a1b0e82 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754#033[00m
Feb  2 05:03:52 np0005604790 nova_compute[252672]: 2026-02-02 10:03:52.747 252676 INFO nova.virt.libvirt.host [None req-9c47ea61-c65d-4804-8f2f-c08b2a1b0e82 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] UEFI support detected#033[00m
Feb  2 05:03:52 np0005604790 nova_compute[252672]: 2026-02-02 10:03:52.750 252676 INFO nova.compute.manager [None req-9c47ea61-c65d-4804-8f2f-c08b2a1b0e82 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: e2df6534-c4fd-40a5-80d0-0fc8f93c2bdc] Took 0.93 seconds to destroy the instance on the hypervisor.#033[00m
Feb  2 05:03:52 np0005604790 nova_compute[252672]: 2026-02-02 10:03:52.751 252676 DEBUG oslo.service.loopingcall [None req-9c47ea61-c65d-4804-8f2f-c08b2a1b0e82 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb  2 05:03:52 np0005604790 nova_compute[252672]: 2026-02-02 10:03:52.752 252676 DEBUG nova.compute.manager [-] [instance: e2df6534-c4fd-40a5-80d0-0fc8f93c2bdc] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb  2 05:03:52 np0005604790 nova_compute[252672]: 2026-02-02 10:03:52.752 252676 DEBUG nova.network.neutron [-] [instance: e2df6534-c4fd-40a5-80d0-0fc8f93c2bdc] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb  2 05:03:52 np0005604790 nova_compute[252672]: 2026-02-02 10:03:52.997 252676 DEBUG nova.network.neutron [req-e55aa974-2c31-42bc-8900-e10fd5a88729 req-c5e3a78f-2d43-49c9-92da-f89623075986 b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: e2df6534-c4fd-40a5-80d0-0fc8f93c2bdc] Updated VIF entry in instance network info cache for port d75e8807-c1e9-4436-a9cd-81e1fa00d62f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 05:03:52 np0005604790 nova_compute[252672]: 2026-02-02 10:03:52.999 252676 DEBUG nova.network.neutron [req-e55aa974-2c31-42bc-8900-e10fd5a88729 req-c5e3a78f-2d43-49c9-92da-f89623075986 b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: e2df6534-c4fd-40a5-80d0-0fc8f93c2bdc] Updating instance_info_cache with network_info: [{"id": "d75e8807-c1e9-4436-a9cd-81e1fa00d62f", "address": "fa:16:3e:9d:39:7e", "network": {"id": "eb8bd8fb-92fd-4de7-b952-292440020c50", "bridge": "br-int", "label": "tempest-network-smoke--1133993584", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "efbfe697ca674d72b47da5adf3e42c0c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd75e8807-c1", "ovs_interfaceid": "d75e8807-c1e9-4436-a9cd-81e1fa00d62f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
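The network_info blob cached above is plain JSON, so extracting addressing for an instance is a matter of walking vif -> network -> subnets -> ips. A small helper matching the structure shown in that line (structure as observed here; other deployments may carry extra fields):

    import json

    def fixed_ips(network_info_json):
        """Yield (devname, address) pairs for fixed IPs in a network_info blob."""
        for vif in json.loads(network_info_json):
            for subnet in vif['network']['subnets']:
                for ip in subnet['ips']:
                    if ip['type'] == 'fixed':
                        yield vif['devname'], ip['address']

    # For the entry above this yields ('tapd75e8807-c1', '10.100.0.5').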
Feb  2 05:03:53 np0005604790 nova_compute[252672]: 2026-02-02 10:03:53.019 252676 DEBUG oslo_concurrency.lockutils [req-e55aa974-2c31-42bc-8900-e10fd5a88729 req-c5e3a78f-2d43-49c9-92da-f89623075986 b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Releasing lock "refresh_cache-e2df6534-c4fd-40a5-80d0-0fc8f93c2bdc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 05:03:53 np0005604790 nova_compute[252672]: 2026-02-02 10:03:53.303 252676 DEBUG nova.network.neutron [-] [instance: e2df6534-c4fd-40a5-80d0-0fc8f93c2bdc] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 05:03:53 np0005604790 nova_compute[252672]: 2026-02-02 10:03:53.326 252676 INFO nova.compute.manager [-] [instance: e2df6534-c4fd-40a5-80d0-0fc8f93c2bdc] Took 0.57 seconds to deallocate network for instance.#033[00m
Feb  2 05:03:53 np0005604790 nova_compute[252672]: 2026-02-02 10:03:53.367 252676 DEBUG oslo_concurrency.lockutils [None req-9c47ea61-c65d-4804-8f2f-c08b2a1b0e82 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 05:03:53 np0005604790 nova_compute[252672]: 2026-02-02 10:03:53.368 252676 DEBUG oslo_concurrency.lockutils [None req-9c47ea61-c65d-4804-8f2f-c08b2a1b0e82 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 05:03:53 np0005604790 nova_compute[252672]: 2026-02-02 10:03:53.414 252676 DEBUG oslo_concurrency.processutils [None req-9c47ea61-c65d-4804-8f2f-c08b2a1b0e82 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 05:03:53 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:03:53 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:03:53 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:03:53.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:03:53 np0005604790 nova_compute[252672]: 2026-02-02 10:03:53.897 252676 DEBUG oslo_concurrency.processutils [None req-9c47ea61-c65d-4804-8f2f-c08b2a1b0e82 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
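The 0.5 s between those two processutils lines is nova shelling out to Ceph to size the shared RBD pool during resource-tracker accounting. The same probe, sketched below; the JSON key names are assumptions that match recent Ceph releases and may differ across versions:

    import json
    import subprocess

    out = subprocess.run(
        ['ceph', 'df', '--format=json', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        check=True, capture_output=True, text=True).stdout
    stats = json.loads(out)['stats']
    print('free: %.1f GiB' % (stats['total_avail_bytes'] / 1024 ** 3))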
Feb  2 05:03:53 np0005604790 nova_compute[252672]: 2026-02-02 10:03:53.903 252676 DEBUG nova.compute.provider_tree [None req-9c47ea61-c65d-4804-8f2f-c08b2a1b0e82 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Inventory has not changed in ProviderTree for provider: 9e3db6fc-2145-4f13-bc7c-f7ae57d4e004 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 05:03:53 np0005604790 nova_compute[252672]: 2026-02-02 10:03:53.919 252676 DEBUG nova.scheduler.client.report [None req-9c47ea61-c65d-4804-8f2f-c08b2a1b0e82 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Inventory has not changed for provider 9e3db6fc-2145-4f13-bc7c-f7ae57d4e004 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
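The inventory record above fixes the host's schedulable capacity: placement computes (total - reserved) * allocation_ratio per resource class, so this node advertises 7167 MB of RAM, 32 VCPUs, and 52.2 GB of disk. Reproduced from the logged figures:

    inventory = {
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'VCPU': {'total': 8, 'reserved': 0, 'allocation_ratio': 4.0},
        'DISK_GB': {'total': 59, 'reserved': 1, 'allocation_ratio': 0.9},
    }

    # Schedulable capacity per resource class: (total - reserved) * ratio.
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # MEMORY_MB 7167.0, VCPU 32.0, DISK_GB 52.2 (within float rounding)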
Feb  2 05:03:53 np0005604790 nova_compute[252672]: 2026-02-02 10:03:53.947 252676 DEBUG oslo_concurrency.lockutils [None req-9c47ea61-c65d-4804-8f2f-c08b2a1b0e82 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.580s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 05:03:53 np0005604790 nova_compute[252672]: 2026-02-02 10:03:53.985 252676 INFO nova.scheduler.client.report [None req-9c47ea61-c65d-4804-8f2f-c08b2a1b0e82 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Deleted allocations for instance e2df6534-c4fd-40a5-80d0-0fc8f93c2bdc#033[00m
Feb  2 05:03:54 np0005604790 nova_compute[252672]: 2026-02-02 10:03:54.047 252676 DEBUG oslo_concurrency.lockutils [None req-9c47ea61-c65d-4804-8f2f-c08b2a1b0e82 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Lock "e2df6534-c4fd-40a5-80d0-0fc8f93c2bdc" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.239s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 05:03:54 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:03:54 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:03:54 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:03:54.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:03:54 np0005604790 nova_compute[252672]: 2026-02-02 10:03:54.408 252676 DEBUG nova.compute.manager [req-54b59b03-0efb-4ade-a3ad-524023d643d1 req-10a9b546-0620-4491-a7c4-007dd60da15a b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: e2df6534-c4fd-40a5-80d0-0fc8f93c2bdc] Received event network-vif-plugged-d75e8807-c1e9-4436-a9cd-81e1fa00d62f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 05:03:54 np0005604790 nova_compute[252672]: 2026-02-02 10:03:54.409 252676 DEBUG oslo_concurrency.lockutils [req-54b59b03-0efb-4ade-a3ad-524023d643d1 req-10a9b546-0620-4491-a7c4-007dd60da15a b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Acquiring lock "e2df6534-c4fd-40a5-80d0-0fc8f93c2bdc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 05:03:54 np0005604790 nova_compute[252672]: 2026-02-02 10:03:54.410 252676 DEBUG oslo_concurrency.lockutils [req-54b59b03-0efb-4ade-a3ad-524023d643d1 req-10a9b546-0620-4491-a7c4-007dd60da15a b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Lock "e2df6534-c4fd-40a5-80d0-0fc8f93c2bdc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 05:03:54 np0005604790 nova_compute[252672]: 2026-02-02 10:03:54.410 252676 DEBUG oslo_concurrency.lockutils [req-54b59b03-0efb-4ade-a3ad-524023d643d1 req-10a9b546-0620-4491-a7c4-007dd60da15a b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Lock "e2df6534-c4fd-40a5-80d0-0fc8f93c2bdc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 05:03:54 np0005604790 nova_compute[252672]: 2026-02-02 10:03:54.411 252676 DEBUG nova.compute.manager [req-54b59b03-0efb-4ade-a3ad-524023d643d1 req-10a9b546-0620-4491-a7c4-007dd60da15a b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: e2df6534-c4fd-40a5-80d0-0fc8f93c2bdc] No waiting events found dispatching network-vif-plugged-d75e8807-c1e9-4436-a9cd-81e1fa00d62f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 05:03:54 np0005604790 nova_compute[252672]: 2026-02-02 10:03:54.411 252676 WARNING nova.compute.manager [req-54b59b03-0efb-4ade-a3ad-524023d643d1 req-10a9b546-0620-4491-a7c4-007dd60da15a b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: e2df6534-c4fd-40a5-80d0-0fc8f93c2bdc] Received unexpected event network-vif-plugged-d75e8807-c1e9-4436-a9cd-81e1fa00d62f for instance with vm_state deleted and task_state None.#033[00m
Feb  2 05:03:54 np0005604790 nova_compute[252672]: 2026-02-02 10:03:54.412 252676 DEBUG nova.compute.manager [req-54b59b03-0efb-4ade-a3ad-524023d643d1 req-10a9b546-0620-4491-a7c4-007dd60da15a b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: e2df6534-c4fd-40a5-80d0-0fc8f93c2bdc] Received event network-vif-deleted-d75e8807-c1e9-4436-a9cd-81e1fa00d62f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 05:03:54 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v735: 353 pgs: 353 active+clean; 41 MiB data, 244 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 18 KiB/s wr, 60 op/s
Feb  2 05:03:54 np0005604790 nova_compute[252672]: 2026-02-02 10:03:54.636 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:03:54 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:03:54] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Feb  2 05:03:54 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:03:54] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Feb  2 05:03:55 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:03:55 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:03:55 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:03:55.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:03:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[258783]: 02/02/2026 10:03:56 : epoch 69807686 : compute-0 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to lst kv ret=-2
Feb  2 05:03:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[258783]: 02/02/2026 10:03:56 : epoch 69807686 : compute-0 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
Feb  2 05:03:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[258783]: 02/02/2026 10:03:56 : epoch 69807686 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:03:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[258783]: 02/02/2026 10:03:56 : epoch 69807686 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:03:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[258783]: 02/02/2026 10:03:56 : epoch 69807686 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:03:56 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:03:56 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:03:56 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:03:56.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:03:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[258783]: 02/02/2026 10:03:56 : epoch 69807686 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:03:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[258783]: 02/02/2026 10:03:56 : epoch 69807686 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:03:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[258783]: 02/02/2026 10:03:56 : epoch 69807686 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:03:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[258783]: 02/02/2026 10:03:56 : epoch 69807686 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:03:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[258783]: 02/02/2026 10:03:56 : epoch 69807686 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:03:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[258783]: 02/02/2026 10:03:56 : epoch 69807686 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:03:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[258783]: 02/02/2026 10:03:56 : epoch 69807686 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
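The repeated ganesha block above is a restart working through its grace protocol: the recovery object is missing (ret=-2 is ENOENT, hence the failed traverse), so the server enters a 90-second grace window with zero recorded clients, and each "check grace" probe reports reclaim complete(0) / clid count(0). A heavily simplified reading of the lift condition, not ganesha's actual code (the real check also coordinates enforcement across cluster nodes, which is what the ret=-45 probes are about):

    def grace_can_lift(clid_count, reclaim_complete):
        # With no clients recorded in the recovery DB there is no state
        # to reclaim, so grace may be lifted before the 90 s expire.
        # Simplified sketch of the probes logged above.
        return clid_count == 0 or reclaim_complete >= clid_count

    print(grace_can_lift(clid_count=0, reclaim_complete=0))  # True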
Feb  2 05:03:56 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v736: 353 pgs: 353 active+clean; 41 MiB data, 244 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 4.3 KiB/s wr, 59 op/s
Feb  2 05:03:57 np0005604790 nova_compute[252672]: 2026-02-02 10:03:57.102 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:03:57 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:03:57.103Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:03:57 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:03:57 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:03:57 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:03:57 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:03:57.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:03:58 np0005604790 nova_compute[252672]: 2026-02-02 10:03:58.048 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:03:58 np0005604790 nova_compute[252672]: 2026-02-02 10:03:58.095 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:03:58 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:03:58 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:03:58 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:03:58.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:03:58 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v737: 353 pgs: 353 active+clean; 41 MiB data, 238 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 4.8 KiB/s wr, 61 op/s
Feb  2 05:03:58 np0005604790 podman[259010]: 2026-02-02 10:03:58.663404521 +0000 UTC m=+0.067051942 container health_status 29bea7eb4976451e3dfdb1e9a3aba30b10e64cbce131bf8d09548b4a224f8ee8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127)
Feb  2 05:03:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-nfs-cephfs-compute-0-ooxkuo[97796]: [WARNING] 032/100358 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Feb  2 05:03:59 np0005604790 nova_compute[252672]: 2026-02-02 10:03:59.682 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:03:59 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:03:59 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:03:59 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:03:59.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:04:00 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:04:00 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:04:00 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:04:00.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:04:00 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v738: 353 pgs: 353 active+clean; 41 MiB data, 238 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 2.2 KiB/s wr, 31 op/s
Feb  2 05:04:01 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:04:01 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:04:01 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:04:01.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:04:02 np0005604790 nova_compute[252672]: 2026-02-02 10:04:02.105 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:04:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 05:04:02 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
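Those audit entries are the mgr's cephadm module polling the OSD blocklist through a mon command. The equivalent query from the CLI, sketched with the same JSON prefix the audit log records:

    import json
    import subprocess

    # Same request as the logged mon_command:
    # {"prefix": "osd blocklist ls", "format": "json"}
    out = subprocess.run(['ceph', 'osd', 'blocklist', 'ls', '--format', 'json'],
                         check=True, capture_output=True, text=True).stdout
    print(json.loads(out))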
Feb  2 05:04:02 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:04:02 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:04:02 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:04:02.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:04:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:04:02 np0005604790 ceph-mon[74489]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Feb  2 05:04:02 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:04:02.280915) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb  2 05:04:02 np0005604790 ceph-mon[74489]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Feb  2 05:04:02 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770026642281033, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1020, "num_deletes": 250, "total_data_size": 1656756, "memory_usage": 1682632, "flush_reason": "Manual Compaction"}
Feb  2 05:04:02 np0005604790 ceph-mon[74489]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Feb  2 05:04:02 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770026642290577, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 1022451, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 22154, "largest_seqno": 23173, "table_properties": {"data_size": 1018432, "index_size": 1607, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10580, "raw_average_key_size": 20, "raw_value_size": 1009718, "raw_average_value_size": 1949, "num_data_blocks": 71, "num_entries": 518, "num_filter_entries": 518, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770026553, "oldest_key_time": 1770026553, "file_creation_time": 1770026642, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "07840aea-639a-4cd3-a598-1774a042b57b", "db_session_id": "W2PO4QU95YGVZQBG6TZ2", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Feb  2 05:04:02 np0005604790 ceph-mon[74489]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 9744 microseconds, and 4717 cpu microseconds.
Feb  2 05:04:02 np0005604790 ceph-mon[74489]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 05:04:02 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:04:02.290648) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 1022451 bytes OK
Feb  2 05:04:02 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:04:02.290682) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Feb  2 05:04:02 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:04:02.296972) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Feb  2 05:04:02 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:04:02.297000) EVENT_LOG_v1 {"time_micros": 1770026642296992, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb  2 05:04:02 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:04:02.297035) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb  2 05:04:02 np0005604790 ceph-mon[74489]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 1652066, prev total WAL file size 1652066, number of live WAL files 2.
Feb  2 05:04:02 np0005604790 ceph-mon[74489]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 05:04:02 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:04:02.297807) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353030' seq:72057594037927935, type:22 .. '6D67727374617400373531' seq:0, type:0; will stop at (end)
Feb  2 05:04:02 np0005604790 ceph-mon[74489]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb  2 05:04:02 np0005604790 ceph-mon[74489]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(998KB)], [47(14MB)]
Feb  2 05:04:02 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770026642297867, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 15733380, "oldest_snapshot_seqno": -1}
Feb  2 05:04:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[258783]: 02/02/2026 10:04:02 : epoch 69807686 : compute-0 : ganesha.nfsd-2[main] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-0000000000000026:nfs.cephfs.2: -2
Feb  2 05:04:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[258783]: 02/02/2026 10:04:02 : epoch 69807686 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Feb  2 05:04:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[258783]: 02/02/2026 10:04:02 : epoch 69807686 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Feb  2 05:04:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[258783]: 02/02/2026 10:04:02 : epoch 69807686 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Feb  2 05:04:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[258783]: 02/02/2026 10:04:02 : epoch 69807686 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Feb  2 05:04:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[258783]: 02/02/2026 10:04:02 : epoch 69807686 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Feb  2 05:04:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[258783]: 02/02/2026 10:04:02 : epoch 69807686 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Feb  2 05:04:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[258783]: 02/02/2026 10:04:02 : epoch 69807686 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Feb  2 05:04:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[258783]: 02/02/2026 10:04:02 : epoch 69807686 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Feb  2 05:04:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[258783]: 02/02/2026 10:04:02 : epoch 69807686 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Feb  2 05:04:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[258783]: 02/02/2026 10:04:02 : epoch 69807686 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Feb  2 05:04:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[258783]: 02/02/2026 10:04:02 : epoch 69807686 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Feb  2 05:04:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[258783]: 02/02/2026 10:04:02 : epoch 69807686 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Feb  2 05:04:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[258783]: 02/02/2026 10:04:02 : epoch 69807686 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Feb  2 05:04:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[258783]: 02/02/2026 10:04:02 : epoch 69807686 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Feb  2 05:04:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[258783]: 02/02/2026 10:04:02 : epoch 69807686 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Feb  2 05:04:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[258783]: 02/02/2026 10:04:02 : epoch 69807686 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Feb  2 05:04:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[258783]: 02/02/2026 10:04:02 : epoch 69807686 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Feb  2 05:04:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[258783]: 02/02/2026 10:04:02 : epoch 69807686 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Feb  2 05:04:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[258783]: 02/02/2026 10:04:02 : epoch 69807686 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Feb  2 05:04:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[258783]: 02/02/2026 10:04:02 : epoch 69807686 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Feb  2 05:04:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[258783]: 02/02/2026 10:04:02 : epoch 69807686 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Feb  2 05:04:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[258783]: 02/02/2026 10:04:02 : epoch 69807686 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Feb  2 05:04:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[258783]: 02/02/2026 10:04:02 : epoch 69807686 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Feb  2 05:04:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[258783]: 02/02/2026 10:04:02 : epoch 69807686 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Feb  2 05:04:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[258783]: 02/02/2026 10:04:02 : epoch 69807686 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Feb  2 05:04:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[258783]: 02/02/2026 10:04:02 : epoch 69807686 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Feb  2 05:04:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[258783]: 02/02/2026 10:04:02 : epoch 69807686 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Feb  2 05:04:02 np0005604790 ceph-mon[74489]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 5500 keys, 12269673 bytes, temperature: kUnknown
Feb  2 05:04:02 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770026642461739, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 12269673, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12234007, "index_size": 20836, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13765, "raw_key_size": 139039, "raw_average_key_size": 25, "raw_value_size": 12135659, "raw_average_value_size": 2206, "num_data_blocks": 850, "num_entries": 5500, "num_filter_entries": 5500, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770025063, "oldest_key_time": 0, "file_creation_time": 1770026642, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "07840aea-639a-4cd3-a598-1774a042b57b", "db_session_id": "W2PO4QU95YGVZQBG6TZ2", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Feb  2 05:04:02 np0005604790 ceph-mon[74489]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 05:04:02 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:04:02.462268) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 12269673 bytes
Feb  2 05:04:02 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:04:02.464509) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 95.8 rd, 74.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 14.0 +0.0 blob) out(11.7 +0.0 blob), read-write-amplify(27.4) write-amplify(12.0) OK, records in: 5980, records dropped: 480 output_compression: NoCompression
Feb  2 05:04:02 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:04:02.464539) EVENT_LOG_v1 {"time_micros": 1770026642464525, "job": 24, "event": "compaction_finished", "compaction_time_micros": 164163, "compaction_time_cpu_micros": 43053, "output_level": 6, "num_output_files": 1, "total_output_size": 12269673, "num_input_records": 5980, "num_output_records": 5500, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
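The amplification figures in that compaction summary can be reproduced from the event payloads: write-amplify is bytes written over the L0 input, and read-write-amplify adds everything read. With the numbers logged for job 24:

    l0_input = 1022451        # table #49, the freshly flushed L0 file
    total_read = 15733380     # input_data_size from compaction_started
    total_written = 12269673  # total_output_size from compaction_finished

    print(round(total_written / l0_input, 1))                 # 12.0
    print(round((total_read + total_written) / l0_input, 1))  # 27.4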
Feb  2 05:04:02 np0005604790 ceph-mon[74489]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 05:04:02 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770026642464806, "job": 24, "event": "table_file_deletion", "file_number": 49}
Feb  2 05:04:02 np0005604790 ceph-mon[74489]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 05:04:02 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770026642466728, "job": 24, "event": "table_file_deletion", "file_number": 47}
Feb  2 05:04:02 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:04:02.297699) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 05:04:02 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:04:02.466997) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 05:04:02 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:04:02.467004) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 05:04:02 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:04:02.467006) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 05:04:02 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:04:02.467008) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 05:04:02 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:04:02.467011) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 05:04:02 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v739: 353 pgs: 353 active+clean; 41 MiB data, 238 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 2.2 KiB/s wr, 31 op/s
Feb  2 05:04:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Feb  2 05:04:02 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:04:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Feb  2 05:04:02 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:04:02 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:04:02 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:04:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[258783]: 02/02/2026 10:04:02 : epoch 69807686 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5824000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:04:03 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Feb  2 05:04:03 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:04:03 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Feb  2 05:04:03 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:04:03 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Feb  2 05:04:03 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Feb  2 05:04:03 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:04:03 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:04:03 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
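The mon_command payloads ceph-mon logs here are the same JSON documents the ceph CLI builds, so the dispatch above is equivalent to `ceph config rm osd/host:compute-1 osd_memory_target`. For reference, a sketch of issuing the identical call through the librados Python binding; the conffile path and the availability of admin credentials are assumptions:

    import json
    import rados

    # Connect using the node's ceph.conf; assumes a usable admin keyring.
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()

    # Same payload as the handle_command line above.
    cmd = {"prefix": "config rm",
           "who": "osd/host:compute-1",
           "name": "osd_memory_target"}
    ret, outbuf, outs = cluster.mon_command(json.dumps(cmd), b"")
    print(ret, outs)
    cluster.shutdown()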
Feb  2 05:04:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[258783]: 02/02/2026 10:04:03 : epoch 69807686 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f58200016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:04:03 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Feb  2 05:04:03 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Feb  2 05:04:03 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 05:04:03 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 05:04:03 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 05:04:03 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb  2 05:04:03 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 05:04:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[258783]: 02/02/2026 10:04:03 : epoch 69807686 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5810000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:04:03 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:04:03 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Feb  2 05:04:03 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:04:03 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 05:04:03 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb  2 05:04:03 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 05:04:03 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb  2 05:04:03 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 05:04:03 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 05:04:03 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:04:03 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:04:03 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:04:03.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:04:04 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:04:04 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:04:04 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:04:04.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
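Each radosgw request in this window produces the same trio of lines: a request start, a request done, and a beast access record carrying the client IP, HTTP status, and latency. A rough parser for that access-record layout, inferred purely from the samples in this log rather than from any documented format:

    import re

    # Field layout guessed from the beast access lines above.
    BEAST_RE = re.compile(
        r'beast: 0x[0-9a-f]+: (?P<ip>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<req>[^"]+)" (?P<status>\d+) '
        r'(?P<nbytes>\d+) .* latency=(?P<latency>[\d.]+)s'
    )

    line = ('beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous '
            '[02/Feb/2026:10:04:04.256 +0000] "HEAD / HTTP/1.0" 200 0 '
            '- - - latency=0.000000000s')
    m = BEAST_RE.search(line)
    print(m.group("ip"), m.group("status"), m.group("latency"))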
Feb  2 05:04:04 np0005604790 nova_compute[252672]: 2026-02-02 10:04:04.282 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 05:04:04 np0005604790 nova_compute[252672]: 2026-02-02 10:04:04.311 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 05:04:04 np0005604790 nova_compute[252672]: 2026-02-02 10:04:04.341 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 05:04:04 np0005604790 nova_compute[252672]: 2026-02-02 10:04:04.342 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 05:04:04 np0005604790 nova_compute[252672]: 2026-02-02 10:04:04.342 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 05:04:04 np0005604790 nova_compute[252672]: 2026-02-02 10:04:04.343 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb  2 05:04:04 np0005604790 nova_compute[252672]: 2026-02-02 10:04:04.344 252676 DEBUG oslo_concurrency.processutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
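nova-compute's resource audit shells out to exactly the command shown in the line above to size its Ceph-backed storage. The same probe run standalone; the "stats" field names are my assumption about the ceph df JSON schema, so verify them against your Ceph release:

    import json
    import subprocess

    # The same command nova-compute runs above.
    cmd = ["ceph", "df", "--format=json",
           "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
    res = subprocess.run(cmd, check=True, capture_output=True, text=True)
    df = json.loads(res.stdout)

    # Assumed field names; confirm with `ceph df --format=json` locally.
    stats = df["stats"]
    print("total:", stats["total_bytes"], "avail:", stats["total_avail_bytes"])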
Feb  2 05:04:04 np0005604790 podman[259223]: 2026-02-02 10:04:04.434289586 +0000 UTC m=+0.062115099 container create 1aabffa6f4ae8473760794d21b3d2e746606cbc3c00e82f05e7f502d3234d759 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_spence, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb  2 05:04:04 np0005604790 systemd[1]: Started libpod-conmon-1aabffa6f4ae8473760794d21b3d2e746606cbc3c00e82f05e7f502d3234d759.scope.
Feb  2 05:04:04 np0005604790 podman[259223]: 2026-02-02 10:04:04.408102473 +0000 UTC m=+0.035928046 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:04:04 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:04:04 np0005604790 podman[259223]: 2026-02-02 10:04:04.531592339 +0000 UTC m=+0.159417902 container init 1aabffa6f4ae8473760794d21b3d2e746606cbc3c00e82f05e7f502d3234d759 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_spence, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 05:04:04 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v740: 353 pgs: 353 active+clean; 41 MiB data, 238 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 2.5 KiB/s wr, 33 op/s
Feb  2 05:04:04 np0005604790 podman[259223]: 2026-02-02 10:04:04.542177513 +0000 UTC m=+0.170003006 container start 1aabffa6f4ae8473760794d21b3d2e746606cbc3c00e82f05e7f502d3234d759 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_spence, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Feb  2 05:04:04 np0005604790 podman[259223]: 2026-02-02 10:04:04.546132059 +0000 UTC m=+0.173957622 container attach 1aabffa6f4ae8473760794d21b3d2e746606cbc3c00e82f05e7f502d3234d759 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_spence, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb  2 05:04:04 np0005604790 thirsty_spence[259241]: 167 167
Feb  2 05:04:04 np0005604790 systemd[1]: libpod-1aabffa6f4ae8473760794d21b3d2e746606cbc3c00e82f05e7f502d3234d759.scope: Deactivated successfully.
Feb  2 05:04:04 np0005604790 conmon[259241]: conmon 1aabffa6f4ae84737607 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1aabffa6f4ae8473760794d21b3d2e746606cbc3c00e82f05e7f502d3234d759.scope/container/memory.events
Feb  2 05:04:04 np0005604790 podman[259223]: 2026-02-02 10:04:04.550559738 +0000 UTC m=+0.178385281 container died 1aabffa6f4ae8473760794d21b3d2e746606cbc3c00e82f05e7f502d3234d759 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_spence, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 05:04:04 np0005604790 systemd[1]: var-lib-containers-storage-overlay-232622ffb534001e9358b2188f9abbf3e3cd348eb7b2db0ce5b1e1d7a3671bd4-merged.mount: Deactivated successfully.
Feb  2 05:04:04 np0005604790 podman[259223]: 2026-02-02 10:04:04.597973802 +0000 UTC m=+0.225799285 container remove 1aabffa6f4ae8473760794d21b3d2e746606cbc3c00e82f05e7f502d3234d759 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_spence, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 05:04:04 np0005604790 systemd[1]: libpod-conmon-1aabffa6f4ae8473760794d21b3d2e746606cbc3c00e82f05e7f502d3234d759.scope: Deactivated successfully.
Feb  2 05:04:04 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Feb  2 05:04:04 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb  2 05:04:04 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:04:04 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:04:04 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb  2 05:04:04 np0005604790 nova_compute[252672]: 2026-02-02 10:04:04.685 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:04:04 np0005604790 podman[259281]: 2026-02-02 10:04:04.784593343 +0000 UTC m=+0.055119121 container create 990a13640d37c1324ac99e63a073bedb0130b5abe04e1fc4326a76db2dd8dace (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default)
Feb  2 05:04:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[258783]: 02/02/2026 10:04:04 : epoch 69807686 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f580c000d00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:04:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-nfs-cephfs-compute-0-ooxkuo[97796]: [WARNING] 032/100404 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Feb  2 05:04:04 np0005604790 systemd[1]: Started libpod-conmon-990a13640d37c1324ac99e63a073bedb0130b5abe04e1fc4326a76db2dd8dace.scope.
Feb  2 05:04:04 np0005604790 podman[259281]: 2026-02-02 10:04:04.764911745 +0000 UTC m=+0.035437603 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:04:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:04:04] "GET /metrics HTTP/1.1" 200 48437 "" "Prometheus/2.51.0"
Feb  2 05:04:04 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:04:04] "GET /metrics HTTP/1.1" 200 48437 "" "Prometheus/2.51.0"
Feb  2 05:04:04 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:04:04 np0005604790 nova_compute[252672]: 2026-02-02 10:04:04.887 252676 DEBUG oslo_concurrency.processutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 05:04:04 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b25c4c162a007ed052491d7c7e7ea2bcffa68f761df2c8d009fea18d00a41608/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 05:04:04 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b25c4c162a007ed052491d7c7e7ea2bcffa68f761df2c8d009fea18d00a41608/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 05:04:04 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b25c4c162a007ed052491d7c7e7ea2bcffa68f761df2c8d009fea18d00a41608/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 05:04:04 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b25c4c162a007ed052491d7c7e7ea2bcffa68f761df2c8d009fea18d00a41608/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 05:04:04 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b25c4c162a007ed052491d7c7e7ea2bcffa68f761df2c8d009fea18d00a41608/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 05:04:04 np0005604790 podman[259281]: 2026-02-02 10:04:04.931671713 +0000 UTC m=+0.202197571 container init 990a13640d37c1324ac99e63a073bedb0130b5abe04e1fc4326a76db2dd8dace (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_jones, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb  2 05:04:04 np0005604790 podman[259281]: 2026-02-02 10:04:04.943913762 +0000 UTC m=+0.214439570 container start 990a13640d37c1324ac99e63a073bedb0130b5abe04e1fc4326a76db2dd8dace (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_jones, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Feb  2 05:04:04 np0005604790 podman[259281]: 2026-02-02 10:04:04.951008622 +0000 UTC m=+0.221534490 container attach 990a13640d37c1324ac99e63a073bedb0130b5abe04e1fc4326a76db2dd8dace (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_jones, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 05:04:05 np0005604790 nova_compute[252672]: 2026-02-02 10:04:05.124 252676 WARNING nova.virt.libvirt.driver [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 05:04:05 np0005604790 nova_compute[252672]: 2026-02-02 10:04:05.126 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4590MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb  2 05:04:05 np0005604790 nova_compute[252672]: 2026-02-02 10:04:05.126 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 05:04:05 np0005604790 nova_compute[252672]: 2026-02-02 10:04:05.127 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 05:04:05 np0005604790 nova_compute[252672]: 2026-02-02 10:04:05.274 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb  2 05:04:05 np0005604790 nova_compute[252672]: 2026-02-02 10:04:05.275 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb  2 05:04:05 np0005604790 xenodochial_jones[259298]: --> passed data devices: 0 physical, 1 LVM
Feb  2 05:04:05 np0005604790 xenodochial_jones[259298]: --> All data devices are unavailable
Feb  2 05:04:05 np0005604790 nova_compute[252672]: 2026-02-02 10:04:05.293 252676 DEBUG oslo_concurrency.processutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 05:04:05 np0005604790 systemd[1]: libpod-990a13640d37c1324ac99e63a073bedb0130b5abe04e1fc4326a76db2dd8dace.scope: Deactivated successfully.
Feb  2 05:04:05 np0005604790 podman[259281]: 2026-02-02 10:04:05.328098049 +0000 UTC m=+0.598623897 container died 990a13640d37c1324ac99e63a073bedb0130b5abe04e1fc4326a76db2dd8dace (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_jones, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 05:04:05 np0005604790 systemd[1]: var-lib-containers-storage-overlay-b25c4c162a007ed052491d7c7e7ea2bcffa68f761df2c8d009fea18d00a41608-merged.mount: Deactivated successfully.
Feb  2 05:04:05 np0005604790 podman[259281]: 2026-02-02 10:04:05.375372148 +0000 UTC m=+0.645897926 container remove 990a13640d37c1324ac99e63a073bedb0130b5abe04e1fc4326a76db2dd8dace (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_jones, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 05:04:05 np0005604790 systemd[1]: libpod-conmon-990a13640d37c1324ac99e63a073bedb0130b5abe04e1fc4326a76db2dd8dace.scope: Deactivated successfully.
Feb  2 05:04:05 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[258783]: 02/02/2026 10:04:05 : epoch 69807686 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5828001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:04:05 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[258783]: 02/02/2026 10:04:05 : epoch 69807686 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5820001fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:04:05 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 05:04:05 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4079846023' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb  2 05:04:05 np0005604790 nova_compute[252672]: 2026-02-02 10:04:05.813 252676 DEBUG oslo_concurrency.processutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 05:04:05 np0005604790 nova_compute[252672]: 2026-02-02 10:04:05.820 252676 DEBUG nova.compute.provider_tree [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Inventory has not changed in ProviderTree for provider: 9e3db6fc-2145-4f13-bc7c-f7ae57d4e004 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 05:04:05 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:04:05 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:04:05 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:04:05.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:04:05 np0005604790 nova_compute[252672]: 2026-02-02 10:04:05.838 252676 DEBUG nova.scheduler.client.report [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Inventory has not changed for provider 9e3db6fc-2145-4f13-bc7c-f7ae57d4e004 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
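The inventory in the scheduler report above fixes the schedulable capacity: as I understand placement's capacity formula, each resource class yields (total - reserved) * allocation_ratio, truncated to an integer. Worked out for these numbers as a sanity check:

    # Inventory copied from the report line above.
    inventory = {
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }

    # Assumed placement usable-capacity formula: (total - reserved) * ratio.
    for rc, inv in inventory.items():
        cap = int((inv['total'] - inv['reserved']) * inv['allocation_ratio'])
        print(rc, cap)
    # MEMORY_MB 7167, VCPU 32, DISK_GB 52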
Feb  2 05:04:05 np0005604790 nova_compute[252672]: 2026-02-02 10:04:05.865 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb  2 05:04:05 np0005604790 nova_compute[252672]: 2026-02-02 10:04:05.866 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.739s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 05:04:06 np0005604790 podman[259442]: 2026-02-02 10:04:06.038900228 +0000 UTC m=+0.053284252 container create 06b58cc2bf91314eb0409a051ed74200ae7fb46e76afc8bdbaefaf380281fed8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb  2 05:04:06 np0005604790 systemd[1]: Started libpod-conmon-06b58cc2bf91314eb0409a051ed74200ae7fb46e76afc8bdbaefaf380281fed8.scope.
Feb  2 05:04:06 np0005604790 podman[259442]: 2026-02-02 10:04:06.012908079 +0000 UTC m=+0.027292163 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:04:06 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:04:06 np0005604790 podman[259442]: 2026-02-02 10:04:06.129028098 +0000 UTC m=+0.143412122 container init 06b58cc2bf91314eb0409a051ed74200ae7fb46e76afc8bdbaefaf380281fed8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_jepsen, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb  2 05:04:06 np0005604790 podman[259442]: 2026-02-02 10:04:06.138002219 +0000 UTC m=+0.152386223 container start 06b58cc2bf91314eb0409a051ed74200ae7fb46e76afc8bdbaefaf380281fed8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_jepsen, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Feb  2 05:04:06 np0005604790 podman[259442]: 2026-02-02 10:04:06.141211035 +0000 UTC m=+0.155595079 container attach 06b58cc2bf91314eb0409a051ed74200ae7fb46e76afc8bdbaefaf380281fed8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_jepsen, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Feb  2 05:04:06 np0005604790 youthful_jepsen[259458]: 167 167
Feb  2 05:04:06 np0005604790 systemd[1]: libpod-06b58cc2bf91314eb0409a051ed74200ae7fb46e76afc8bdbaefaf380281fed8.scope: Deactivated successfully.
Feb  2 05:04:06 np0005604790 podman[259442]: 2026-02-02 10:04:06.144918515 +0000 UTC m=+0.159302559 container died 06b58cc2bf91314eb0409a051ed74200ae7fb46e76afc8bdbaefaf380281fed8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_jepsen, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb  2 05:04:06 np0005604790 systemd[1]: var-lib-containers-storage-overlay-9c4c1d8b24893523b5b57dafdfd96d7deccefff79c2f49a6535d08d1c4cb9d67-merged.mount: Deactivated successfully.
Feb  2 05:04:06 np0005604790 podman[259442]: 2026-02-02 10:04:06.181276521 +0000 UTC m=+0.195660555 container remove 06b58cc2bf91314eb0409a051ed74200ae7fb46e76afc8bdbaefaf380281fed8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_jepsen, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 05:04:06 np0005604790 systemd[1]: libpod-conmon-06b58cc2bf91314eb0409a051ed74200ae7fb46e76afc8bdbaefaf380281fed8.scope: Deactivated successfully.
Feb  2 05:04:06 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:04:06 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:04:06 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:04:06.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:04:06 np0005604790 podman[259483]: 2026-02-02 10:04:06.362709503 +0000 UTC m=+0.061471371 container create a37a2d01ce93f71b38206928a742921d539fa1fea50e097b43bca9fe4e3cc11b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_mclean, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 05:04:06 np0005604790 systemd[1]: Started libpod-conmon-a37a2d01ce93f71b38206928a742921d539fa1fea50e097b43bca9fe4e3cc11b.scope.
Feb  2 05:04:06 np0005604790 podman[259483]: 2026-02-02 10:04:06.338591486 +0000 UTC m=+0.037353444 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:04:06 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:04:06 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/075d9f75cc2e325db8ffbcc1376801de9b529496e1a4b94cc90ec38d363bae03/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 05:04:06 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/075d9f75cc2e325db8ffbcc1376801de9b529496e1a4b94cc90ec38d363bae03/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 05:04:06 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/075d9f75cc2e325db8ffbcc1376801de9b529496e1a4b94cc90ec38d363bae03/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 05:04:06 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/075d9f75cc2e325db8ffbcc1376801de9b529496e1a4b94cc90ec38d363bae03/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 05:04:06 np0005604790 podman[259483]: 2026-02-02 10:04:06.46199992 +0000 UTC m=+0.160761868 container init a37a2d01ce93f71b38206928a742921d539fa1fea50e097b43bca9fe4e3cc11b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_mclean, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb  2 05:04:06 np0005604790 podman[259483]: 2026-02-02 10:04:06.473681614 +0000 UTC m=+0.172443512 container start a37a2d01ce93f71b38206928a742921d539fa1fea50e097b43bca9fe4e3cc11b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_mclean, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 05:04:06 np0005604790 podman[259483]: 2026-02-02 10:04:06.478846192 +0000 UTC m=+0.177608100 container attach a37a2d01ce93f71b38206928a742921d539fa1fea50e097b43bca9fe4e3cc11b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_mclean, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Feb  2 05:04:06 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v741: 353 pgs: 353 active+clean; 41 MiB data, 238 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Feb  2 05:04:06 np0005604790 elated_mclean[259500]: {
Feb  2 05:04:06 np0005604790 elated_mclean[259500]:    "1": [
Feb  2 05:04:06 np0005604790 elated_mclean[259500]:        {
Feb  2 05:04:06 np0005604790 elated_mclean[259500]:            "devices": [
Feb  2 05:04:06 np0005604790 elated_mclean[259500]:                "/dev/loop3"
Feb  2 05:04:06 np0005604790 elated_mclean[259500]:            ],
Feb  2 05:04:06 np0005604790 elated_mclean[259500]:            "lv_name": "ceph_lv0",
Feb  2 05:04:06 np0005604790 elated_mclean[259500]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 05:04:06 np0005604790 elated_mclean[259500]:            "lv_size": "21470642176",
Feb  2 05:04:06 np0005604790 elated_mclean[259500]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d241d473-9fcb-5f74-b163-f1ca4454e7f1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fabfc705-a3af-416c-81a4-3fd4d777fb5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 05:04:06 np0005604790 elated_mclean[259500]:            "lv_uuid": "lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a",
Feb  2 05:04:06 np0005604790 elated_mclean[259500]:            "name": "ceph_lv0",
Feb  2 05:04:06 np0005604790 elated_mclean[259500]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 05:04:06 np0005604790 elated_mclean[259500]:            "tags": {
Feb  2 05:04:06 np0005604790 elated_mclean[259500]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 05:04:06 np0005604790 elated_mclean[259500]:                "ceph.block_uuid": "lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a",
Feb  2 05:04:06 np0005604790 elated_mclean[259500]:                "ceph.cephx_lockbox_secret": "",
Feb  2 05:04:06 np0005604790 elated_mclean[259500]:                "ceph.cluster_fsid": "d241d473-9fcb-5f74-b163-f1ca4454e7f1",
Feb  2 05:04:06 np0005604790 elated_mclean[259500]:                "ceph.cluster_name": "ceph",
Feb  2 05:04:06 np0005604790 elated_mclean[259500]:                "ceph.crush_device_class": "",
Feb  2 05:04:06 np0005604790 elated_mclean[259500]:                "ceph.encrypted": "0",
Feb  2 05:04:06 np0005604790 elated_mclean[259500]:                "ceph.osd_fsid": "fabfc705-a3af-416c-81a4-3fd4d777fb5f",
Feb  2 05:04:06 np0005604790 elated_mclean[259500]:                "ceph.osd_id": "1",
Feb  2 05:04:06 np0005604790 elated_mclean[259500]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 05:04:06 np0005604790 elated_mclean[259500]:                "ceph.type": "block",
Feb  2 05:04:06 np0005604790 elated_mclean[259500]:                "ceph.vdo": "0",
Feb  2 05:04:06 np0005604790 elated_mclean[259500]:                "ceph.with_tpm": "0"
Feb  2 05:04:06 np0005604790 elated_mclean[259500]:            },
Feb  2 05:04:06 np0005604790 elated_mclean[259500]:            "type": "block",
Feb  2 05:04:06 np0005604790 elated_mclean[259500]:            "vg_name": "ceph_vg0"
Feb  2 05:04:06 np0005604790 elated_mclean[259500]:        }
Feb  2 05:04:06 np0005604790 elated_mclean[259500]:    ]
Feb  2 05:04:06 np0005604790 elated_mclean[259500]: }
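The JSON the elated_mclean container just printed has the shape of `ceph-volume lvm list --format json` output: a map of OSD id to its logical volumes, with the authoritative bindings in the lv tags (that this was the exact command cephadm ran here is my inference from the output's shape). A sketch that pulls the essentials from such a dump, using a hypothetical filename:

    import json

    # Hypothetical dump of the JSON block above.
    with open("ceph-volume-lvm-list.json") as fh:
        osds = json.load(fh)

    for osd_id, lvs in osds.items():
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: {lv['lv_path']} on {lv['devices']} "
                  f"fsid={tags['ceph.osd_fsid']} "
                  f"encrypted={tags['ceph.encrypted']}")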
Feb  2 05:04:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[258783]: 02/02/2026 10:04:06 : epoch 69807686 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f58100016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:04:06 np0005604790 nova_compute[252672]: 2026-02-02 10:04:06.837 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 05:04:06 np0005604790 nova_compute[252672]: 2026-02-02 10:04:06.838 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 05:04:06 np0005604790 nova_compute[252672]: 2026-02-02 10:04:06.839 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb  2 05:04:06 np0005604790 nova_compute[252672]: 2026-02-02 10:04:06.839 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb  2 05:04:06 np0005604790 systemd[1]: libpod-a37a2d01ce93f71b38206928a742921d539fa1fea50e097b43bca9fe4e3cc11b.scope: Deactivated successfully.
Feb  2 05:04:06 np0005604790 podman[259483]: 2026-02-02 10:04:06.844822081 +0000 UTC m=+0.543584029 container died a37a2d01ce93f71b38206928a742921d539fa1fea50e097b43bca9fe4e3cc11b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_mclean, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 05:04:06 np0005604790 nova_compute[252672]: 2026-02-02 10:04:06.857 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb  2 05:04:06 np0005604790 nova_compute[252672]: 2026-02-02 10:04:06.860 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 05:04:06 np0005604790 nova_compute[252672]: 2026-02-02 10:04:06.861 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 05:04:06 np0005604790 systemd[1]: var-lib-containers-storage-overlay-075d9f75cc2e325db8ffbcc1376801de9b529496e1a4b94cc90ec38d363bae03-merged.mount: Deactivated successfully.
Feb  2 05:04:06 np0005604790 podman[259483]: 2026-02-02 10:04:06.897725941 +0000 UTC m=+0.596487839 container remove a37a2d01ce93f71b38206928a742921d539fa1fea50e097b43bca9fe4e3cc11b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_mclean, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb  2 05:04:06 np0005604790 systemd[1]: libpod-conmon-a37a2d01ce93f71b38206928a742921d539fa1fea50e097b43bca9fe4e3cc11b.scope: Deactivated successfully.
Feb  2 05:04:07 np0005604790 nova_compute[252672]: 2026-02-02 10:04:07.054 252676 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1770026632.053568, e2df6534-c4fd-40a5-80d0-0fc8f93c2bdc => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 05:04:07 np0005604790 nova_compute[252672]: 2026-02-02 10:04:07.054 252676 INFO nova.compute.manager [-] [instance: e2df6534-c4fd-40a5-80d0-0fc8f93c2bdc] VM Stopped (Lifecycle Event)#033[00m
Feb  2 05:04:07 np0005604790 nova_compute[252672]: 2026-02-02 10:04:07.092 252676 DEBUG nova.compute.manager [None req-c2f8f2ba-9344-4be3-8b39-d6e7e113dc28 - - - - - -] [instance: e2df6534-c4fd-40a5-80d0-0fc8f93c2bdc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 05:04:07 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:04:07.104Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb  2 05:04:07 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:04:07.105Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:04:07 np0005604790 nova_compute[252672]: 2026-02-02 10:04:07.139 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:04:07 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:04:07 np0005604790 nova_compute[252672]: 2026-02-02 10:04:07.282 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:04:07 np0005604790 nova_compute[252672]: 2026-02-02 10:04:07.283 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:04:07 np0005604790 nova_compute[252672]: 2026-02-02 10:04:07.283 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:04:07 np0005604790 nova_compute[252672]: 2026-02-02 10:04:07.283 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:04:07 np0005604790 nova_compute[252672]: 2026-02-02 10:04:07.284 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb  2 05:04:07 np0005604790 podman[259616]: 2026-02-02 10:04:07.618387794 +0000 UTC m=+0.068907952 container create ea739a7db4f90ef352c4db2853e2561562fa4df301924fff344ac9ba0c29a53e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_lumiere, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 05:04:07 np0005604790 systemd[1]: Started libpod-conmon-ea739a7db4f90ef352c4db2853e2561562fa4df301924fff344ac9ba0c29a53e.scope.
Feb  2 05:04:07 np0005604790 podman[259616]: 2026-02-02 10:04:07.59293338 +0000 UTC m=+0.043453598 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:04:07 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:04:07 np0005604790 podman[259616]: 2026-02-02 10:04:07.70391245 +0000 UTC m=+0.154432588 container init ea739a7db4f90ef352c4db2853e2561562fa4df301924fff344ac9ba0c29a53e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_lumiere, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Feb  2 05:04:07 np0005604790 podman[259616]: 2026-02-02 10:04:07.709035858 +0000 UTC m=+0.159556006 container start ea739a7db4f90ef352c4db2853e2561562fa4df301924fff344ac9ba0c29a53e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_lumiere, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 05:04:07 np0005604790 happy_lumiere[259634]: 167 167
Feb  2 05:04:07 np0005604790 systemd[1]: libpod-ea739a7db4f90ef352c4db2853e2561562fa4df301924fff344ac9ba0c29a53e.scope: Deactivated successfully.
Feb  2 05:04:07 np0005604790 podman[259616]: 2026-02-02 10:04:07.713972991 +0000 UTC m=+0.164493159 container attach ea739a7db4f90ef352c4db2853e2561562fa4df301924fff344ac9ba0c29a53e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 05:04:07 np0005604790 podman[259616]: 2026-02-02 10:04:07.714514115 +0000 UTC m=+0.165034253 container died ea739a7db4f90ef352c4db2853e2561562fa4df301924fff344ac9ba0c29a53e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_lumiere, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 05:04:07 np0005604790 systemd[1]: var-lib-containers-storage-overlay-3c1e9157cf373995cdb2d49a96bc6cdeb59658f7031a0fd714e332bef4b64b12-merged.mount: Deactivated successfully.
Feb  2 05:04:07 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[258783]: 02/02/2026 10:04:07 : epoch 69807686 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f580c001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:04:07 np0005604790 podman[259616]: 2026-02-02 10:04:07.761854796 +0000 UTC m=+0.212374924 container remove ea739a7db4f90ef352c4db2853e2561562fa4df301924fff344ac9ba0c29a53e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_lumiere, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 05:04:07 np0005604790 systemd[1]: libpod-conmon-ea739a7db4f90ef352c4db2853e2561562fa4df301924fff344ac9ba0c29a53e.scope: Deactivated successfully.
Feb  2 05:04:07 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[258783]: 02/02/2026 10:04:07 : epoch 69807686 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f58280025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:04:07 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:04:07 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:04:07 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:04:07.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:04:07 np0005604790 podman[259659]: 2026-02-02 10:04:07.960681486 +0000 UTC m=+0.072520629 container create e66dacdbc1fc009b72e602bc4a5f67c9b18b028d0fb40dcb7fdc5e1533a266dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_ellis, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 05:04:08 np0005604790 systemd[1]: Started libpod-conmon-e66dacdbc1fc009b72e602bc4a5f67c9b18b028d0fb40dcb7fdc5e1533a266dd.scope.
Feb  2 05:04:08 np0005604790 podman[259659]: 2026-02-02 10:04:07.928186063 +0000 UTC m=+0.040025266 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:04:08 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:04:08 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e65b7cc5a997de433a6420e4226967839dbc94326e3a6b994acd8fab6454e4f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 05:04:08 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e65b7cc5a997de433a6420e4226967839dbc94326e3a6b994acd8fab6454e4f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 05:04:08 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e65b7cc5a997de433a6420e4226967839dbc94326e3a6b994acd8fab6454e4f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 05:04:08 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e65b7cc5a997de433a6420e4226967839dbc94326e3a6b994acd8fab6454e4f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 05:04:08 np0005604790 podman[259659]: 2026-02-02 10:04:08.066086757 +0000 UTC m=+0.177925930 container init e66dacdbc1fc009b72e602bc4a5f67c9b18b028d0fb40dcb7fdc5e1533a266dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 05:04:08 np0005604790 podman[259659]: 2026-02-02 10:04:08.076234689 +0000 UTC m=+0.188073832 container start e66dacdbc1fc009b72e602bc4a5f67c9b18b028d0fb40dcb7fdc5e1533a266dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_ellis, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 05:04:08 np0005604790 podman[259659]: 2026-02-02 10:04:08.07999659 +0000 UTC m=+0.191835803 container attach e66dacdbc1fc009b72e602bc4a5f67c9b18b028d0fb40dcb7fdc5e1533a266dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_ellis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 05:04:08 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:04:08 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:04:08 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:04:08.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:04:08 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v742: 353 pgs: 353 active+clean; 41 MiB data, 238 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Feb  2 05:04:08 np0005604790 lvm[259749]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 05:04:08 np0005604790 lvm[259749]: VG ceph_vg0 finished
Feb  2 05:04:08 np0005604790 distracted_ellis[259675]: {}
Feb  2 05:04:08 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[258783]: 02/02/2026 10:04:08 : epoch 69807686 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5820001fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:04:08 np0005604790 systemd[1]: libpod-e66dacdbc1fc009b72e602bc4a5f67c9b18b028d0fb40dcb7fdc5e1533a266dd.scope: Deactivated successfully.
Feb  2 05:04:08 np0005604790 systemd[1]: libpod-e66dacdbc1fc009b72e602bc4a5f67c9b18b028d0fb40dcb7fdc5e1533a266dd.scope: Consumed 1.162s CPU time.
Feb  2 05:04:08 np0005604790 podman[259659]: 2026-02-02 10:04:08.850583504 +0000 UTC m=+0.962422637 container died e66dacdbc1fc009b72e602bc4a5f67c9b18b028d0fb40dcb7fdc5e1533a266dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_ellis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Feb  2 05:04:08 np0005604790 systemd[1]: var-lib-containers-storage-overlay-7e65b7cc5a997de433a6420e4226967839dbc94326e3a6b994acd8fab6454e4f-merged.mount: Deactivated successfully.
Feb  2 05:04:08 np0005604790 podman[259659]: 2026-02-02 10:04:08.922060994 +0000 UTC m=+1.033900117 container remove e66dacdbc1fc009b72e602bc4a5f67c9b18b028d0fb40dcb7fdc5e1533a266dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_ellis, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Feb  2 05:04:08 np0005604790 systemd[1]: libpod-conmon-e66dacdbc1fc009b72e602bc4a5f67c9b18b028d0fb40dcb7fdc5e1533a266dd.scope: Deactivated successfully.
Feb  2 05:04:08 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 05:04:08 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:04:08 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 05:04:08 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:04:09 np0005604790 nova_compute[252672]: 2026-02-02 10:04:09.724 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:04:09 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[258783]: 02/02/2026 10:04:09 : epoch 69807686 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f58100016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:04:09 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[258783]: 02/02/2026 10:04:09 : epoch 69807686 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f580c001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:04:09 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:04:09 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:04:09 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:04:09.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:04:10 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:04:10 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:04:10 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:04:10 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:04:10 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:04:10.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:04:10 np0005604790 nova_compute[252672]: 2026-02-02 10:04:10.287 252676 DEBUG oslo_concurrency.lockutils [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Acquiring lock "3aba266a-af9d-4454-937a-ca3d562d7140" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 05:04:10 np0005604790 nova_compute[252672]: 2026-02-02 10:04:10.287 252676 DEBUG oslo_concurrency.lockutils [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Lock "3aba266a-af9d-4454-937a-ca3d562d7140" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 05:04:10 np0005604790 nova_compute[252672]: 2026-02-02 10:04:10.311 252676 DEBUG nova.compute.manager [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb  2 05:04:10 np0005604790 nova_compute[252672]: 2026-02-02 10:04:10.380 252676 DEBUG oslo_concurrency.lockutils [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 05:04:10 np0005604790 nova_compute[252672]: 2026-02-02 10:04:10.381 252676 DEBUG oslo_concurrency.lockutils [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 05:04:10 np0005604790 nova_compute[252672]: 2026-02-02 10:04:10.393 252676 DEBUG nova.virt.hardware [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb  2 05:04:10 np0005604790 nova_compute[252672]: 2026-02-02 10:04:10.394 252676 INFO nova.compute.claims [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] Claim successful on node compute-0.ctlplane.example.com
Feb  2 05:04:10 np0005604790 nova_compute[252672]: 2026-02-02 10:04:10.495 252676 DEBUG oslo_concurrency.processutils [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb  2 05:04:10 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v743: 353 pgs: 353 active+clean; 41 MiB data, 238 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Feb  2 05:04:10 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[258783]: 02/02/2026 10:04:10 : epoch 69807686 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f58280025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:04:10 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 05:04:10 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3023024456' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb  2 05:04:11 np0005604790 nova_compute[252672]: 2026-02-02 10:04:11.011 252676 DEBUG oslo_concurrency.processutils [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 05:04:11 np0005604790 nova_compute[252672]: 2026-02-02 10:04:11.018 252676 DEBUG nova.compute.provider_tree [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Inventory has not changed in ProviderTree for provider: 9e3db6fc-2145-4f13-bc7c-f7ae57d4e004 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb  2 05:04:11 np0005604790 nova_compute[252672]: 2026-02-02 10:04:11.038 252676 DEBUG nova.scheduler.client.report [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Inventory has not changed for provider 9e3db6fc-2145-4f13-bc7c-f7ae57d4e004 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb  2 05:04:11 np0005604790 nova_compute[252672]: 2026-02-02 10:04:11.061 252676 DEBUG oslo_concurrency.lockutils [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.680s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 05:04:11 np0005604790 nova_compute[252672]: 2026-02-02 10:04:11.062 252676 DEBUG nova.compute.manager [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb  2 05:04:11 np0005604790 nova_compute[252672]: 2026-02-02 10:04:11.134 252676 DEBUG nova.compute.manager [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb  2 05:04:11 np0005604790 nova_compute[252672]: 2026-02-02 10:04:11.135 252676 DEBUG nova.network.neutron [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb  2 05:04:11 np0005604790 nova_compute[252672]: 2026-02-02 10:04:11.161 252676 INFO nova.virt.libvirt.driver [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb  2 05:04:11 np0005604790 nova_compute[252672]: 2026-02-02 10:04:11.184 252676 DEBUG nova.compute.manager [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb  2 05:04:11 np0005604790 nova_compute[252672]: 2026-02-02 10:04:11.273 252676 DEBUG nova.compute.manager [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb  2 05:04:11 np0005604790 nova_compute[252672]: 2026-02-02 10:04:11.275 252676 DEBUG nova.virt.libvirt.driver [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb  2 05:04:11 np0005604790 nova_compute[252672]: 2026-02-02 10:04:11.275 252676 INFO nova.virt.libvirt.driver [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] Creating image(s)
Feb  2 05:04:11 np0005604790 nova_compute[252672]: 2026-02-02 10:04:11.309 252676 DEBUG nova.storage.rbd_utils [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] rbd image 3aba266a-af9d-4454-937a-ca3d562d7140_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb  2 05:04:11 np0005604790 nova_compute[252672]: 2026-02-02 10:04:11.336 252676 DEBUG nova.storage.rbd_utils [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] rbd image 3aba266a-af9d-4454-937a-ca3d562d7140_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb  2 05:04:11 np0005604790 nova_compute[252672]: 2026-02-02 10:04:11.362 252676 DEBUG nova.storage.rbd_utils [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] rbd image 3aba266a-af9d-4454-937a-ca3d562d7140_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb  2 05:04:11 np0005604790 nova_compute[252672]: 2026-02-02 10:04:11.366 252676 DEBUG oslo_concurrency.processutils [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b48fe8b86a7168723be684d0fce89ef3f0abcc61 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb  2 05:04:11 np0005604790 nova_compute[252672]: 2026-02-02 10:04:11.446 252676 DEBUG oslo_concurrency.processutils [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b48fe8b86a7168723be684d0fce89ef3f0abcc61 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 05:04:11 np0005604790 nova_compute[252672]: 2026-02-02 10:04:11.447 252676 DEBUG oslo_concurrency.lockutils [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Acquiring lock "b48fe8b86a7168723be684d0fce89ef3f0abcc61" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 05:04:11 np0005604790 nova_compute[252672]: 2026-02-02 10:04:11.448 252676 DEBUG oslo_concurrency.lockutils [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Lock "b48fe8b86a7168723be684d0fce89ef3f0abcc61" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 05:04:11 np0005604790 nova_compute[252672]: 2026-02-02 10:04:11.449 252676 DEBUG oslo_concurrency.lockutils [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Lock "b48fe8b86a7168723be684d0fce89ef3f0abcc61" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 05:04:11 np0005604790 nova_compute[252672]: 2026-02-02 10:04:11.479 252676 DEBUG nova.storage.rbd_utils [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] rbd image 3aba266a-af9d-4454-937a-ca3d562d7140_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb  2 05:04:11 np0005604790 nova_compute[252672]: 2026-02-02 10:04:11.484 252676 DEBUG oslo_concurrency.processutils [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b48fe8b86a7168723be684d0fce89ef3f0abcc61 3aba266a-af9d-4454-937a-ca3d562d7140_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb  2 05:04:11 np0005604790 nova_compute[252672]: 2026-02-02 10:04:11.576 252676 DEBUG nova.policy [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1b1695a2a70d4aa0aa350ba17d8f6d5e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'efbfe697ca674d72b47da5adf3e42c0c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb  2 05:04:11 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[258783]: 02/02/2026 10:04:11 : epoch 69807686 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5820001fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:04:11 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[258783]: 02/02/2026 10:04:11 : epoch 69807686 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f58100016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:04:11 np0005604790 nova_compute[252672]: 2026-02-02 10:04:11.797 252676 DEBUG oslo_concurrency.processutils [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b48fe8b86a7168723be684d0fce89ef3f0abcc61 3aba266a-af9d-4454-937a-ca3d562d7140_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.313s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 05:04:11 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:04:11 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:04:11 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:04:11.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:04:11 np0005604790 nova_compute[252672]: 2026-02-02 10:04:11.895 252676 DEBUG nova.storage.rbd_utils [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] resizing rbd image 3aba266a-af9d-4454-937a-ca3d562d7140_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb  2 05:04:12 np0005604790 nova_compute[252672]: 2026-02-02 10:04:12.031 252676 DEBUG nova.objects.instance [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Lazy-loading 'migration_context' on Instance uuid 3aba266a-af9d-4454-937a-ca3d562d7140 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb  2 05:04:12 np0005604790 nova_compute[252672]: 2026-02-02 10:04:12.142 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:04:12 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:04:12 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:04:12 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:04:12.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:04:12 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:04:12 np0005604790 nova_compute[252672]: 2026-02-02 10:04:12.320 252676 DEBUG nova.virt.libvirt.driver [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb  2 05:04:12 np0005604790 nova_compute[252672]: 2026-02-02 10:04:12.321 252676 DEBUG nova.virt.libvirt.driver [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] Ensure instance console log exists: /var/lib/nova/instances/3aba266a-af9d-4454-937a-ca3d562d7140/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb  2 05:04:12 np0005604790 nova_compute[252672]: 2026-02-02 10:04:12.322 252676 DEBUG oslo_concurrency.lockutils [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 05:04:12 np0005604790 nova_compute[252672]: 2026-02-02 10:04:12.322 252676 DEBUG oslo_concurrency.lockutils [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 05:04:12 np0005604790 nova_compute[252672]: 2026-02-02 10:04:12.323 252676 DEBUG oslo_concurrency.lockutils [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 05:04:12 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v744: 353 pgs: 353 active+clean; 41 MiB data, 238 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Feb  2 05:04:12 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[258783]: 02/02/2026 10:04:12 : epoch 69807686 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f580c001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:04:13 np0005604790 nova_compute[252672]: 2026-02-02 10:04:13.139 252676 DEBUG nova.network.neutron [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] Successfully created port: e8aea164-d544-4241-b141-038f3e866bd3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb  2 05:04:13 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[258783]: 02/02/2026 10:04:13 : epoch 69807686 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f58280032d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:04:13 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[258783]: 02/02/2026 10:04:13 : epoch 69807686 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5820001fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:04:13 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:04:13 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:04:13 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:04:13.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:04:14 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:04:14 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:04:14 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:04:14.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:04:14 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v745: 353 pgs: 353 active+clean; 88 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Feb  2 05:04:14 np0005604790 nova_compute[252672]: 2026-02-02 10:04:14.774 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:04:14 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[258783]: 02/02/2026 10:04:14 : epoch 69807686 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5810002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:04:14 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:04:14] "GET /metrics HTTP/1.1" 200 48437 "" "Prometheus/2.51.0"
Feb  2 05:04:14 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:04:14] "GET /metrics HTTP/1.1" 200 48437 "" "Prometheus/2.51.0"
Feb  2 05:04:15 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:04:15.454 165364 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:4f:4d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '4a:a7:f3:61:65:15'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb  2 05:04:15 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:04:15.455 165364 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb  2 05:04:15 np0005604790 nova_compute[252672]: 2026-02-02 10:04:15.456 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:04:15 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[258783]: 02/02/2026 10:04:15 : epoch 69807686 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f580c002cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:04:15 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[258783]: 02/02/2026 10:04:15 : epoch 69807686 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f58280032d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:04:15 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:04:15 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:04:15 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:04:15.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:04:16 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:04:16 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:04:16 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:04:16.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:04:16 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v746: 353 pgs: 353 active+clean; 88 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb  2 05:04:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[258783]: 02/02/2026 10:04:16 : epoch 69807686 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f58280032d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:04:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:04:17.106Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:04:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] Optimize plan auto_2026-02-02_10:04:17
Feb  2 05:04:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 05:04:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] do_upmap
Feb  2 05:04:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.log', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'backups', 'images', 'volumes', '.nfs', '.mgr', 'default.rgw.control', 'vms', 'default.rgw.meta']
Feb  2 05:04:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 05:04:17 np0005604790 nova_compute[252672]: 2026-02-02 10:04:17.144 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:04:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 05:04:17 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 05:04:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:04:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:04:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:04:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:04:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:04:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:04:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:04:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 05:04:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 05:04:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 05:04:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 05:04:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 05:04:17 np0005604790 nova_compute[252672]: 2026-02-02 10:04:17.482 252676 DEBUG nova.network.neutron [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] Successfully updated port: e8aea164-d544-4241-b141-038f3e866bd3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb  2 05:04:17 np0005604790 nova_compute[252672]: 2026-02-02 10:04:17.502 252676 DEBUG oslo_concurrency.lockutils [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Acquiring lock "refresh_cache-3aba266a-af9d-4454-937a-ca3d562d7140" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb  2 05:04:17 np0005604790 nova_compute[252672]: 2026-02-02 10:04:17.502 252676 DEBUG oslo_concurrency.lockutils [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Acquired lock "refresh_cache-3aba266a-af9d-4454-937a-ca3d562d7140" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb  2 05:04:17 np0005604790 nova_compute[252672]: 2026-02-02 10:04:17.503 252676 DEBUG nova.network.neutron [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb  2 05:04:17 np0005604790 nova_compute[252672]: 2026-02-02 10:04:17.562 252676 DEBUG nova.compute.manager [req-73011886-b9ac-4d83-9e10-3d7a25cdd707 req-42e4f4f4-fd90-44b7-bf95-71215b448c9c b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] Received event network-changed-e8aea164-d544-4241-b141-038f3e866bd3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb  2 05:04:17 np0005604790 nova_compute[252672]: 2026-02-02 10:04:17.562 252676 DEBUG nova.compute.manager [req-73011886-b9ac-4d83-9e10-3d7a25cdd707 req-42e4f4f4-fd90-44b7-bf95-71215b448c9c b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] Refreshing instance network info cache due to event network-changed-e8aea164-d544-4241-b141-038f3e866bd3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb  2 05:04:17 np0005604790 nova_compute[252672]: 2026-02-02 10:04:17.563 252676 DEBUG oslo_concurrency.lockutils [req-73011886-b9ac-4d83-9e10-3d7a25cdd707 req-42e4f4f4-fd90-44b7-bf95-71215b448c9c b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Acquiring lock "refresh_cache-3aba266a-af9d-4454-937a-ca3d562d7140" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb  2 05:04:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 05:04:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 05:04:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:04:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 05:04:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:04:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003459970412515465 of space, bias 1.0, pg target 0.10379911237546395 quantized to 32 (current 32)
Feb  2 05:04:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:04:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:04:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:04:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:04:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:04:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Feb  2 05:04:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:04:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Feb  2 05:04:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:04:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:04:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:04:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 05:04:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:04:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Feb  2 05:04:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:04:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:04:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:04:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 05:04:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:04:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
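
The pg_autoscaler lines above are internally consistent: each logged "pg target" is the pool's capacity fraction times its bias times a constant 300 (e.g. 0.0003459970412515465 × 1.0 × 300 = 0.10379911237546395 for 'vms', and 5.087256625643029e-07 × 4.0 × 300 = 0.0006104707950771635 for 'cephfs.cephfs.meta'). A minimal Python sketch reproducing that relation; the decomposition of 300 as 100 target PGs per OSD across 3 OSDs is an assumption, since the log only fixes the product, and the final "quantized to" values involve extra power-of-two rounding and per-pool floors not modeled here:

    # Hedged sketch: reproduce the pg_autoscaler "pg target" figures logged above.
    # ASSUMPTION: 300 = mon_target_pg_per_osd (100) * 3 OSDs; the log only
    # determines the product, not this decomposition.
    PG_BUDGET = 100 * 3

    pools = {
        # name: (fraction of raw space used, bias), copied from the log lines
        ".mgr":               (7.185749983720779e-06, 1.0),
        "vms":                (0.0003459970412515465, 1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
    }

    for name, (used_fraction, bias) in pools.items():
        pg_target = used_fraction * bias * PG_BUDGET
        print(f"{name}: pg target {pg_target}")  # matches the logged values
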
Feb  2 05:04:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 05:04:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 05:04:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 05:04:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 05:04:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[258783]: 02/02/2026 10:04:17 : epoch 69807686 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5820001fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:04:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[258783]: 02/02/2026 10:04:17 : epoch 69807686 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f580c002cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:04:17 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:04:17 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:04:17 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:04:17.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
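
The radosgw "beast" lines are single-line access records: client IP, user, timestamp, request line, HTTP status, response bytes, and a trailing latency. A hedged parser for them; the regex is fitted to the lines in this log rather than taken from any documented format:

    import re

    # ASSUMPTION: pattern fitted to the beast access lines above, not an
    # official radosgw log-format specification.
    BEAST_RE = re.compile(
        r'beast: 0x[0-9a-f]+: (?P<ip>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<request>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) '
        r'.*latency=(?P<latency>[\d.]+)s'
    )

    line = ('beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous '
            '[02/Feb/2026:10:04:17.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.001000027s')
    m = BEAST_RE.match(line)
    print(m["ip"], m["status"], m["latency"])  # 192.168.122.100 200 0.001000027
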
Feb  2 05:04:18 np0005604790 nova_compute[252672]: 2026-02-02 10:04:18.015 252676 DEBUG nova.network.neutron [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb  2 05:04:18 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:04:18 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:04:18 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:04:18.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:04:18 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:04:18.458 165364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=031ca08d-19ea-44b4-b1bd-33ab088eb6a6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 05:04:18 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v747: 353 pgs: 353 active+clean; 88 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb  2 05:04:18 np0005604790 kernel: ganesha.nfsd[259123]: segfault at 50 ip 00007f58af33d32e sp 00007f581bffe210 error 4 in libntirpc.so.5.8[7f58af322000+2c000] likely on CPU 1 (core 0, socket 1)
Feb  2 05:04:18 np0005604790 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Feb  2 05:04:18 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[258783]: 02/02/2026 10:04:18 : epoch 69807686 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f580c002cb0 fd 38 proxy ignored for local
Feb  2 05:04:18 np0005604790 nova_compute[252672]: 2026-02-02 10:04:18.881 252676 DEBUG nova.network.neutron [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] Updating instance_info_cache with network_info: [{"id": "e8aea164-d544-4241-b141-038f3e866bd3", "address": "fa:16:3e:fc:13:4a", "network": {"id": "43244da2-ad24-493a-be04-b3f920faba77", "bridge": "br-int", "label": "tempest-network-smoke--1590532484", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "efbfe697ca674d72b47da5adf3e42c0c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8aea164-d5", "ovs_interfaceid": "e8aea164-d544-4241-b141-038f3e866bd3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 05:04:18 np0005604790 systemd[1]: Started Process Core Dump (PID 260014/UID 0).
Feb  2 05:04:18 np0005604790 nova_compute[252672]: 2026-02-02 10:04:18.900 252676 DEBUG oslo_concurrency.lockutils [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Releasing lock "refresh_cache-3aba266a-af9d-4454-937a-ca3d562d7140" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 05:04:18 np0005604790 nova_compute[252672]: 2026-02-02 10:04:18.901 252676 DEBUG nova.compute.manager [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] Instance network_info: |[{"id": "e8aea164-d544-4241-b141-038f3e866bd3", "address": "fa:16:3e:fc:13:4a", "network": {"id": "43244da2-ad24-493a-be04-b3f920faba77", "bridge": "br-int", "label": "tempest-network-smoke--1590532484", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "efbfe697ca674d72b47da5adf3e42c0c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8aea164-d5", "ovs_interfaceid": "e8aea164-d544-4241-b141-038f3e866bd3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb  2 05:04:18 np0005604790 nova_compute[252672]: 2026-02-02 10:04:18.901 252676 DEBUG oslo_concurrency.lockutils [req-73011886-b9ac-4d83-9e10-3d7a25cdd707 req-42e4f4f4-fd90-44b7-bf95-71215b448c9c b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Acquired lock "refresh_cache-3aba266a-af9d-4454-937a-ca3d562d7140" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 05:04:18 np0005604790 nova_compute[252672]: 2026-02-02 10:04:18.901 252676 DEBUG nova.network.neutron [req-73011886-b9ac-4d83-9e10-3d7a25cdd707 req-42e4f4f4-fd90-44b7-bf95-71215b448c9c b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] Refreshing network info cache for port e8aea164-d544-4241-b141-038f3e866bd3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 05:04:18 np0005604790 nova_compute[252672]: 2026-02-02 10:04:18.906 252676 DEBUG nova.virt.libvirt.driver [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] Start _get_guest_xml network_info=[{"id": "e8aea164-d544-4241-b141-038f3e866bd3", "address": "fa:16:3e:fc:13:4a", "network": {"id": "43244da2-ad24-493a-be04-b3f920faba77", "bridge": "br-int", "label": "tempest-network-smoke--1590532484", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "efbfe697ca674d72b47da5adf3e42c0c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8aea164-d5", "ovs_interfaceid": "e8aea164-d544-4241-b141-038f3e866bd3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T10:01:42Z,direct_url=<?>,disk_format='qcow2',id=d5e062d7-95ef-409c-9ad0-60f7cf6f44ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='823d3e7e313a44e9a50531e3fef22a1b',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T10:01:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'encrypted': False, 'encryption_options': None, 'device_type': 'disk', 'size': 0, 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'd5e062d7-95ef-409c-9ad0-60f7cf6f44ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb  2 05:04:18 np0005604790 nova_compute[252672]: 2026-02-02 10:04:18.913 252676 WARNING nova.virt.libvirt.driver [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 05:04:18 np0005604790 nova_compute[252672]: 2026-02-02 10:04:18.918 252676 DEBUG nova.virt.libvirt.host [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb  2 05:04:18 np0005604790 nova_compute[252672]: 2026-02-02 10:04:18.919 252676 DEBUG nova.virt.libvirt.host [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb  2 05:04:18 np0005604790 nova_compute[252672]: 2026-02-02 10:04:18.922 252676 DEBUG nova.virt.libvirt.host [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb  2 05:04:18 np0005604790 nova_compute[252672]: 2026-02-02 10:04:18.923 252676 DEBUG nova.virt.libvirt.host [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb  2 05:04:18 np0005604790 nova_compute[252672]: 2026-02-02 10:04:18.924 252676 DEBUG nova.virt.libvirt.driver [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb  2 05:04:18 np0005604790 nova_compute[252672]: 2026-02-02 10:04:18.924 252676 DEBUG nova.virt.hardware [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-02T10:01:40Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1194feb9-e285-414e-825a-1e77171d092f',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T10:01:42Z,direct_url=<?>,disk_format='qcow2',id=d5e062d7-95ef-409c-9ad0-60f7cf6f44ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='823d3e7e313a44e9a50531e3fef22a1b',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T10:01:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb  2 05:04:18 np0005604790 nova_compute[252672]: 2026-02-02 10:04:18.925 252676 DEBUG nova.virt.hardware [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb  2 05:04:18 np0005604790 nova_compute[252672]: 2026-02-02 10:04:18.925 252676 DEBUG nova.virt.hardware [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb  2 05:04:18 np0005604790 nova_compute[252672]: 2026-02-02 10:04:18.925 252676 DEBUG nova.virt.hardware [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb  2 05:04:18 np0005604790 nova_compute[252672]: 2026-02-02 10:04:18.926 252676 DEBUG nova.virt.hardware [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb  2 05:04:18 np0005604790 nova_compute[252672]: 2026-02-02 10:04:18.926 252676 DEBUG nova.virt.hardware [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb  2 05:04:18 np0005604790 nova_compute[252672]: 2026-02-02 10:04:18.927 252676 DEBUG nova.virt.hardware [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb  2 05:04:18 np0005604790 nova_compute[252672]: 2026-02-02 10:04:18.927 252676 DEBUG nova.virt.hardware [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb  2 05:04:18 np0005604790 nova_compute[252672]: 2026-02-02 10:04:18.927 252676 DEBUG nova.virt.hardware [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb  2 05:04:18 np0005604790 nova_compute[252672]: 2026-02-02 10:04:18.928 252676 DEBUG nova.virt.hardware [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb  2 05:04:18 np0005604790 nova_compute[252672]: 2026-02-02 10:04:18.928 252676 DEBUG nova.virt.hardware [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb  2 05:04:18 np0005604790 nova_compute[252672]: 2026-02-02 10:04:18.932 252676 DEBUG oslo_concurrency.processutils [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 05:04:19 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 05:04:19 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1713753610' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Feb  2 05:04:19 np0005604790 nova_compute[252672]: 2026-02-02 10:04:19.401 252676 DEBUG oslo_concurrency.processutils [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
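
nova shells out to exactly the command logged above to discover the Ceph monitors; the addresses it gets back are what later surface as the three <host> entries in the generated domain XML. A sketch of the same round trip, assuming the 'openstack' keyring is usable and the usual `mon dump` JSON schema (monitors listed under "mons", address in "public_addr"):

    import json
    import subprocess

    # Same invocation as the nova_compute line above.
    out = subprocess.run(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, check=True, text=True,
    ).stdout

    monmap = json.loads(out)
    # ASSUMPTION: field names follow the common mon dump schema; strip the
    # trailing /nonce to get the ip:port form used in libvirt <host> entries.
    for mon in monmap["mons"]:
        print(mon["name"], mon["public_addr"].split("/")[0])
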
Feb  2 05:04:19 np0005604790 nova_compute[252672]: 2026-02-02 10:04:19.426 252676 DEBUG nova.storage.rbd_utils [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] rbd image 3aba266a-af9d-4454-937a-ca3d562d7140_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 05:04:19 np0005604790 nova_compute[252672]: 2026-02-02 10:04:19.432 252676 DEBUG oslo_concurrency.processutils [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 05:04:19 np0005604790 systemd-coredump[260015]: Process 258787 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 46:#012#0  0x00007f58af33d32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Feb  2 05:04:19 np0005604790 systemd[1]: systemd-coredump@10-260014-0.service: Deactivated successfully.
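
The kernel oops at 10:04:18 and this coredump record describe one crash from two vantage points: the kernel gives ip 0x7f58af33d32e inside the executable mapping libntirpc.so.5.8[7f58af322000+2c000], while systemd-coredump reports the frame as libntirpc.so.5.8 + 0x2232e, measured from the ELF load base. The bracketed opcode bytes <45 8b 65 50> appear to decode to a 4-byte load at [r13+0x50], consistent with "segfault at 50" being a field read through a NULL pointer. The two offsets reconcile arithmetically:

    # Figures copied from the kernel and systemd-coredump lines above.
    ip        = 0x7f58af33d32e   # faulting instruction pointer
    text_base = 0x7f58af322000   # start of libntirpc's executable mapping
    core_off  = 0x2232e          # frame offset reported by systemd-coredump

    seg_off = ip - text_base
    print(hex(seg_off))              # 0x1b32e: offset within the text mapping
    print(hex(core_off - seg_off))   # 0x7000: inferred file offset of that mapping

From here one would typically open the dump with coredumpctl gdb plus libntirpc debuginfo to put a name on the frame.
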
Feb  2 05:04:19 np0005604790 nova_compute[252672]: 2026-02-02 10:04:19.777 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:04:19 np0005604790 podman[260081]: 2026-02-02 10:04:19.832739045 +0000 UTC m=+0.041142556 container died 1c569cbf4b74162de6c410b203691b4cf42aecba100e806c72612685086acc60 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 05:04:19 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:04:19 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:04:19 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:04:19.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:04:19 np0005604790 systemd[1]: var-lib-containers-storage-overlay-63ec35c1dc6a6ee8e6f1a26dcba05125a53afd12fe829746c09a5c6e5a4157fb-merged.mount: Deactivated successfully.
Feb  2 05:04:19 np0005604790 podman[260081]: 2026-02-02 10:04:19.874854146 +0000 UTC m=+0.083257637 container remove 1c569cbf4b74162de6c410b203691b4cf42aecba100e806c72612685086acc60 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Feb  2 05:04:19 np0005604790 systemd[1]: ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1@nfs.cephfs.2.0.compute-0.fdwwab.service: Main process exited, code=exited, status=139/n/a
Feb  2 05:04:19 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 05:04:19 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3934267532' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Feb  2 05:04:19 np0005604790 nova_compute[252672]: 2026-02-02 10:04:19.938 252676 DEBUG oslo_concurrency.processutils [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 05:04:19 np0005604790 nova_compute[252672]: 2026-02-02 10:04:19.941 252676 DEBUG nova.virt.libvirt.vif [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T10:04:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1261930807',display_name='tempest-TestNetworkBasicOps-server-1261930807',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1261930807',id=3,image_ref='d5e062d7-95ef-409c-9ad0-60f7cf6f44ce',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB27e0nJHIK58Z3ZCFxu9LfpabnYsIVBFEzTEWF3c/gr064x+O+jWsENBE8Yz1U86qtq3lzG/toFN1TQQpYsp7FBgfLCmIDAeD2/jIiciHozTuGuu580hwCQmvhv9zCqYg==',key_name='tempest-TestNetworkBasicOps-1916874117',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='efbfe697ca674d72b47da5adf3e42c0c',ramdisk_id='',reservation_id='r-b5tafoys',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='d5e062d7-95ef-409c-9ad0-60f7cf6f44ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-793549693',owner_user_name='tempest-TestNetworkBasicOps-793549693-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T10:04:11Z,user_data=None,user_id='1b1695a2a70d4aa0aa350ba17d8f6d5e',uuid=3aba266a-af9d-4454-937a-ca3d562d7140,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e8aea164-d544-4241-b141-038f3e866bd3", "address": "fa:16:3e:fc:13:4a", "network": {"id": "43244da2-ad24-493a-be04-b3f920faba77", "bridge": "br-int", "label": "tempest-network-smoke--1590532484", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "efbfe697ca674d72b47da5adf3e42c0c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8aea164-d5", "ovs_interfaceid": "e8aea164-d544-4241-b141-038f3e866bd3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb  2 05:04:19 np0005604790 nova_compute[252672]: 2026-02-02 10:04:19.942 252676 DEBUG nova.network.os_vif_util [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Converting VIF {"id": "e8aea164-d544-4241-b141-038f3e866bd3", "address": "fa:16:3e:fc:13:4a", "network": {"id": "43244da2-ad24-493a-be04-b3f920faba77", "bridge": "br-int", "label": "tempest-network-smoke--1590532484", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "efbfe697ca674d72b47da5adf3e42c0c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8aea164-d5", "ovs_interfaceid": "e8aea164-d544-4241-b141-038f3e866bd3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 05:04:19 np0005604790 nova_compute[252672]: 2026-02-02 10:04:19.944 252676 DEBUG nova.network.os_vif_util [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fc:13:4a,bridge_name='br-int',has_traffic_filtering=True,id=e8aea164-d544-4241-b141-038f3e866bd3,network=Network(43244da2-ad24-493a-be04-b3f920faba77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape8aea164-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 05:04:19 np0005604790 nova_compute[252672]: 2026-02-02 10:04:19.946 252676 DEBUG nova.objects.instance [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Lazy-loading 'pci_devices' on Instance uuid 3aba266a-af9d-4454-937a-ca3d562d7140 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 05:04:19 np0005604790 nova_compute[252672]: 2026-02-02 10:04:19.975 252676 DEBUG nova.virt.libvirt.driver [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] End _get_guest_xml xml=<domain type="kvm">
Feb  2 05:04:19 np0005604790 nova_compute[252672]:  <uuid>3aba266a-af9d-4454-937a-ca3d562d7140</uuid>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:  <name>instance-00000003</name>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:  <memory>131072</memory>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:  <vcpu>1</vcpu>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:  <metadata>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb  2 05:04:19 np0005604790 nova_compute[252672]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:      <nova:name>tempest-TestNetworkBasicOps-server-1261930807</nova:name>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:      <nova:creationTime>2026-02-02 10:04:18</nova:creationTime>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:      <nova:flavor name="m1.nano">
Feb  2 05:04:19 np0005604790 nova_compute[252672]:        <nova:memory>128</nova:memory>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:        <nova:disk>1</nova:disk>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:        <nova:swap>0</nova:swap>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:        <nova:ephemeral>0</nova:ephemeral>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:        <nova:vcpus>1</nova:vcpus>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:      </nova:flavor>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:      <nova:owner>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:        <nova:user uuid="1b1695a2a70d4aa0aa350ba17d8f6d5e">tempest-TestNetworkBasicOps-793549693-project-member</nova:user>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:        <nova:project uuid="efbfe697ca674d72b47da5adf3e42c0c">tempest-TestNetworkBasicOps-793549693</nova:project>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:      </nova:owner>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:      <nova:root type="image" uuid="d5e062d7-95ef-409c-9ad0-60f7cf6f44ce"/>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:      <nova:ports>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:        <nova:port uuid="e8aea164-d544-4241-b141-038f3e866bd3">
Feb  2 05:04:19 np0005604790 nova_compute[252672]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:        </nova:port>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:      </nova:ports>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:    </nova:instance>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:  </metadata>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:  <sysinfo type="smbios">
Feb  2 05:04:19 np0005604790 nova_compute[252672]:    <system>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:      <entry name="manufacturer">RDO</entry>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:      <entry name="product">OpenStack Compute</entry>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:      <entry name="serial">3aba266a-af9d-4454-937a-ca3d562d7140</entry>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:      <entry name="uuid">3aba266a-af9d-4454-937a-ca3d562d7140</entry>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:      <entry name="family">Virtual Machine</entry>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:    </system>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:  </sysinfo>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:  <os>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:    <type arch="x86_64" machine="q35">hvm</type>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:    <boot dev="hd"/>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:    <smbios mode="sysinfo"/>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:  </os>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:  <features>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:    <acpi/>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:    <apic/>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:    <vmcoreinfo/>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:  </features>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:  <clock offset="utc">
Feb  2 05:04:19 np0005604790 nova_compute[252672]:    <timer name="pit" tickpolicy="delay"/>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:    <timer name="rtc" tickpolicy="catchup"/>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:    <timer name="hpet" present="no"/>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:  </clock>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:  <cpu mode="host-model" match="exact">
Feb  2 05:04:19 np0005604790 nova_compute[252672]:    <topology sockets="1" cores="1" threads="1"/>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:  </cpu>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:  <devices>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:    <disk type="network" device="disk">
Feb  2 05:04:19 np0005604790 nova_compute[252672]:      <driver type="raw" cache="none"/>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:      <source protocol="rbd" name="vms/3aba266a-af9d-4454-937a-ca3d562d7140_disk">
Feb  2 05:04:19 np0005604790 nova_compute[252672]:        <host name="192.168.122.100" port="6789"/>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:        <host name="192.168.122.102" port="6789"/>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:        <host name="192.168.122.101" port="6789"/>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:      </source>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:      <auth username="openstack">
Feb  2 05:04:19 np0005604790 nova_compute[252672]:        <secret type="ceph" uuid="d241d473-9fcb-5f74-b163-f1ca4454e7f1"/>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:      </auth>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:      <target dev="vda" bus="virtio"/>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:    </disk>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:    <disk type="network" device="cdrom">
Feb  2 05:04:19 np0005604790 nova_compute[252672]:      <driver type="raw" cache="none"/>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:      <source protocol="rbd" name="vms/3aba266a-af9d-4454-937a-ca3d562d7140_disk.config">
Feb  2 05:04:19 np0005604790 nova_compute[252672]:        <host name="192.168.122.100" port="6789"/>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:        <host name="192.168.122.102" port="6789"/>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:        <host name="192.168.122.101" port="6789"/>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:      </source>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:      <auth username="openstack">
Feb  2 05:04:19 np0005604790 nova_compute[252672]:        <secret type="ceph" uuid="d241d473-9fcb-5f74-b163-f1ca4454e7f1"/>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:      </auth>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:      <target dev="sda" bus="sata"/>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:    </disk>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:    <interface type="ethernet">
Feb  2 05:04:19 np0005604790 nova_compute[252672]:      <mac address="fa:16:3e:fc:13:4a"/>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:      <model type="virtio"/>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:      <driver name="vhost" rx_queue_size="512"/>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:      <mtu size="1442"/>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:      <target dev="tape8aea164-d5"/>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:    </interface>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:    <serial type="pty">
Feb  2 05:04:19 np0005604790 nova_compute[252672]:      <log file="/var/lib/nova/instances/3aba266a-af9d-4454-937a-ca3d562d7140/console.log" append="off"/>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:    </serial>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:    <video>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:      <model type="virtio"/>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:    </video>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:    <input type="tablet" bus="usb"/>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:    <rng model="virtio">
Feb  2 05:04:19 np0005604790 nova_compute[252672]:      <backend model="random">/dev/urandom</backend>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:    </rng>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root"/>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:    <controller type="usb" index="0"/>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:    <memballoon model="virtio">
Feb  2 05:04:19 np0005604790 nova_compute[252672]:      <stats period="10"/>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:    </memballoon>
Feb  2 05:04:19 np0005604790 nova_compute[252672]:  </devices>
Feb  2 05:04:19 np0005604790 nova_compute[252672]: </domain>
Feb  2 05:04:19 np0005604790 nova_compute[252672]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
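
The domain XML nova just emitted is plain libvirt XML, so its storage wiring can be checked mechanically: both disks are RBD-backed ('vms/..._disk' and 'vms/..._disk.config'), each pointing at the same three monitors the earlier `ceph mon dump` calls returned. A stdlib sketch, assuming the dump above has been saved to domain.xml:

    import xml.etree.ElementTree as ET

    root = ET.parse("domain.xml").getroot()  # the <domain> element logged above
    for disk in root.findall("./devices/disk"):
        src = disk.find("source")
        hosts = [f'{h.get("name")}:{h.get("port")}' for h in src.findall("host")]
        print(disk.get("device"), src.get("protocol"), src.get("name"), hosts)
    # disk  rbd vms/..._disk        ['192.168.122.100:6789', '192.168.122.102:6789', '192.168.122.101:6789']
    # cdrom rbd vms/..._disk.config (same three monitors)
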
Feb  2 05:04:19 np0005604790 nova_compute[252672]: 2026-02-02 10:04:19.976 252676 DEBUG nova.compute.manager [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] Preparing to wait for external event network-vif-plugged-e8aea164-d544-4241-b141-038f3e866bd3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb  2 05:04:19 np0005604790 nova_compute[252672]: 2026-02-02 10:04:19.977 252676 DEBUG oslo_concurrency.lockutils [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Acquiring lock "3aba266a-af9d-4454-937a-ca3d562d7140-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 05:04:19 np0005604790 nova_compute[252672]: 2026-02-02 10:04:19.977 252676 DEBUG oslo_concurrency.lockutils [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Lock "3aba266a-af9d-4454-937a-ca3d562d7140-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 05:04:19 np0005604790 nova_compute[252672]: 2026-02-02 10:04:19.978 252676 DEBUG oslo_concurrency.lockutils [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Lock "3aba266a-af9d-4454-937a-ca3d562d7140-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 05:04:19 np0005604790 nova_compute[252672]: 2026-02-02 10:04:19.979 252676 DEBUG nova.virt.libvirt.vif [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T10:04:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1261930807',display_name='tempest-TestNetworkBasicOps-server-1261930807',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1261930807',id=3,image_ref='d5e062d7-95ef-409c-9ad0-60f7cf6f44ce',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB27e0nJHIK58Z3ZCFxu9LfpabnYsIVBFEzTEWF3c/gr064x+O+jWsENBE8Yz1U86qtq3lzG/toFN1TQQpYsp7FBgfLCmIDAeD2/jIiciHozTuGuu580hwCQmvhv9zCqYg==',key_name='tempest-TestNetworkBasicOps-1916874117',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='efbfe697ca674d72b47da5adf3e42c0c',ramdisk_id='',reservation_id='r-b5tafoys',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='d5e062d7-95ef-409c-9ad0-60f7cf6f44ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-793549693',owner_user_name='tempest-TestNetworkBasicOps-793549693-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T10:04:11Z,user_data=None,user_id='1b1695a2a70d4aa0aa350ba17d8f6d5e',uuid=3aba266a-af9d-4454-937a-ca3d562d7140,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e8aea164-d544-4241-b141-038f3e866bd3", "address": "fa:16:3e:fc:13:4a", "network": {"id": "43244da2-ad24-493a-be04-b3f920faba77", "bridge": "br-int", "label": "tempest-network-smoke--1590532484", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "efbfe697ca674d72b47da5adf3e42c0c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8aea164-d5", "ovs_interfaceid": "e8aea164-d544-4241-b141-038f3e866bd3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb  2 05:04:19 np0005604790 nova_compute[252672]: 2026-02-02 10:04:19.979 252676 DEBUG nova.network.os_vif_util [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Converting VIF {"id": "e8aea164-d544-4241-b141-038f3e866bd3", "address": "fa:16:3e:fc:13:4a", "network": {"id": "43244da2-ad24-493a-be04-b3f920faba77", "bridge": "br-int", "label": "tempest-network-smoke--1590532484", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "efbfe697ca674d72b47da5adf3e42c0c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8aea164-d5", "ovs_interfaceid": "e8aea164-d544-4241-b141-038f3e866bd3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 05:04:19 np0005604790 nova_compute[252672]: 2026-02-02 10:04:19.980 252676 DEBUG nova.network.os_vif_util [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fc:13:4a,bridge_name='br-int',has_traffic_filtering=True,id=e8aea164-d544-4241-b141-038f3e866bd3,network=Network(43244da2-ad24-493a-be04-b3f920faba77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape8aea164-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 05:04:19 np0005604790 nova_compute[252672]: 2026-02-02 10:04:19.981 252676 DEBUG os_vif [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:13:4a,bridge_name='br-int',has_traffic_filtering=True,id=e8aea164-d544-4241-b141-038f3e866bd3,network=Network(43244da2-ad24-493a-be04-b3f920faba77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape8aea164-d5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb  2 05:04:19 np0005604790 nova_compute[252672]: 2026-02-02 10:04:19.981 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:04:19 np0005604790 nova_compute[252672]: 2026-02-02 10:04:19.982 252676 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 05:04:19 np0005604790 nova_compute[252672]: 2026-02-02 10:04:19.982 252676 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 05:04:19 np0005604790 nova_compute[252672]: 2026-02-02 10:04:19.986 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:04:19 np0005604790 nova_compute[252672]: 2026-02-02 10:04:19.987 252676 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape8aea164-d5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 05:04:19 np0005604790 nova_compute[252672]: 2026-02-02 10:04:19.988 252676 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape8aea164-d5, col_values=(('external_ids', {'iface-id': 'e8aea164-d544-4241-b141-038f3e866bd3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fc:13:4a', 'vm-uuid': '3aba266a-af9d-4454-937a-ca3d562d7140'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 05:04:19 np0005604790 nova_compute[252672]: 2026-02-02 10:04:19.990 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:04:19 np0005604790 NetworkManager[49024]: <info>  [1770026659.9917] manager: (tape8aea164-d5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/31)
Feb  2 05:04:19 np0005604790 nova_compute[252672]: 2026-02-02 10:04:19.994 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 05:04:19 np0005604790 nova_compute[252672]: 2026-02-02 10:04:19.999 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:04:20 np0005604790 nova_compute[252672]: 2026-02-02 10:04:20.000 252676 INFO os_vif [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:13:4a,bridge_name='br-int',has_traffic_filtering=True,id=e8aea164-d544-4241-b141-038f3e866bd3,network=Network(43244da2-ad24-493a-be04-b3f920faba77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape8aea164-d5')#033[00m
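
Note the earlier AddBridgeCommand was a no-op ("Transaction caused no change") because br-int already exists; the plug itself is the AddPortCommand plus the DbSetCommand writing external_ids on the Interface row. A hedged hand-run equivalent via ovs-vsctl (the log records only the IDL transaction, so this CLI form is an approximation of the same state change):

    import subprocess

    port = "tape8aea164-d5"  # names and IDs copied from the transaction above
    subprocess.run(
        ["ovs-vsctl", "--may-exist", "add-port", "br-int", port, "--",
         "set", "Interface", port,
         "external_ids:iface-id=e8aea164-d544-4241-b141-038f3e866bd3",
         "external_ids:iface-status=active",
         "external_ids:attached-mac=fa:16:3e:fc:13:4a",
         "external_ids:vm-uuid=3aba266a-af9d-4454-937a-ca3d562d7140"],
        check=True,
    )
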
Feb  2 05:04:20 np0005604790 nova_compute[252672]: 2026-02-02 10:04:20.017 252676 DEBUG nova.network.neutron [req-73011886-b9ac-4d83-9e10-3d7a25cdd707 req-42e4f4f4-fd90-44b7-bf95-71215b448c9c b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] Updated VIF entry in instance network info cache for port e8aea164-d544-4241-b141-038f3e866bd3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 05:04:20 np0005604790 nova_compute[252672]: 2026-02-02 10:04:20.017 252676 DEBUG nova.network.neutron [req-73011886-b9ac-4d83-9e10-3d7a25cdd707 req-42e4f4f4-fd90-44b7-bf95-71215b448c9c b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] Updating instance_info_cache with network_info: [{"id": "e8aea164-d544-4241-b141-038f3e866bd3", "address": "fa:16:3e:fc:13:4a", "network": {"id": "43244da2-ad24-493a-be04-b3f920faba77", "bridge": "br-int", "label": "tempest-network-smoke--1590532484", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "efbfe697ca674d72b47da5adf3e42c0c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8aea164-d5", "ovs_interfaceid": "e8aea164-d544-4241-b141-038f3e866bd3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 05:04:20 np0005604790 nova_compute[252672]: 2026-02-02 10:04:20.034 252676 DEBUG oslo_concurrency.lockutils [req-73011886-b9ac-4d83-9e10-3d7a25cdd707 req-42e4f4f4-fd90-44b7-bf95-71215b448c9c b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Releasing lock "refresh_cache-3aba266a-af9d-4454-937a-ca3d562d7140" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 05:04:20 np0005604790 nova_compute[252672]: 2026-02-02 10:04:20.051 252676 DEBUG nova.virt.libvirt.driver [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 05:04:20 np0005604790 nova_compute[252672]: 2026-02-02 10:04:20.052 252676 DEBUG nova.virt.libvirt.driver [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 05:04:20 np0005604790 nova_compute[252672]: 2026-02-02 10:04:20.052 252676 DEBUG nova.virt.libvirt.driver [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] No VIF found with MAC fa:16:3e:fc:13:4a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb  2 05:04:20 np0005604790 nova_compute[252672]: 2026-02-02 10:04:20.053 252676 INFO nova.virt.libvirt.driver [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] Using config drive#033[00m
Feb  2 05:04:20 np0005604790 systemd[1]: ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1@nfs.cephfs.2.0.compute-0.fdwwab.service: Failed with result 'exit-code'.
Feb  2 05:04:20 np0005604790 systemd[1]: ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1@nfs.cephfs.2.0.compute-0.fdwwab.service: Consumed 1.150s CPU time.
Feb  2 05:04:20 np0005604790 nova_compute[252672]: 2026-02-02 10:04:20.084 252676 DEBUG nova.storage.rbd_utils [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] rbd image 3aba266a-af9d-4454-937a-ca3d562d7140_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 05:04:20 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:04:20 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:04:20 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:04:20.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:04:20 np0005604790 nova_compute[252672]: 2026-02-02 10:04:20.341 252676 INFO nova.virt.libvirt.driver [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] Creating config drive at /var/lib/nova/instances/3aba266a-af9d-4454-937a-ca3d562d7140/disk.config#033[00m
Feb  2 05:04:20 np0005604790 nova_compute[252672]: 2026-02-02 10:04:20.346 252676 DEBUG oslo_concurrency.processutils [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3aba266a-af9d-4454-937a-ca3d562d7140/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp5hug8zmv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 05:04:20 np0005604790 nova_compute[252672]: 2026-02-02 10:04:20.483 252676 DEBUG oslo_concurrency.processutils [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3aba266a-af9d-4454-937a-ca3d562d7140/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp5hug8zmv" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
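
The config drive is built with a plain mkisofs run; the sketch below replays the exact invocation from the log through oslo.concurrency, the same wrapper nova used here. The /tmp/tmp5hug8zmv staging directory was a temporary metadata tree that no longer exists, so substitute your own:

    from oslo_concurrency import processutils

    out, err = processutils.execute(
        '/usr/bin/mkisofs',
        '-o', '/var/lib/nova/instances/'
              '3aba266a-af9d-4454-937a-ca3d562d7140/disk.config',
        '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
        # The publisher string is one argv element despite the spaces.
        '-publisher', 'OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9',
        '-quiet', '-J', '-r', '-V', 'config-2',
        '/tmp/tmp5hug8zmv')
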
Feb  2 05:04:20 np0005604790 nova_compute[252672]: 2026-02-02 10:04:20.520 252676 DEBUG nova.storage.rbd_utils [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] rbd image 3aba266a-af9d-4454-937a-ca3d562d7140_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 05:04:20 np0005604790 nova_compute[252672]: 2026-02-02 10:04:20.525 252676 DEBUG oslo_concurrency.processutils [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3aba266a-af9d-4454-937a-ca3d562d7140/disk.config 3aba266a-af9d-4454-937a-ca3d562d7140_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 05:04:20 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v748: 353 pgs: 353 active+clean; 88 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb  2 05:04:20 np0005604790 nova_compute[252672]: 2026-02-02 10:04:20.694 252676 DEBUG oslo_concurrency.processutils [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3aba266a-af9d-4454-937a-ca3d562d7140/disk.config 3aba266a-af9d-4454-937a-ca3d562d7140_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 05:04:20 np0005604790 nova_compute[252672]: 2026-02-02 10:04:20.695 252676 INFO nova.virt.libvirt.driver [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] Deleting local config drive /var/lib/nova/instances/3aba266a-af9d-4454-937a-ca3d562d7140/disk.config because it was imported into RBD.#033[00m
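
After the rbd import, the ISO lives only in the "vms" pool. A hedged verification with the python-rados/python-rbd bindings, reusing the client id, conf path, and pool from the import command above (recent bindings support the context-manager form shown):

    import rados
    import rbd

    with rados.Rados(conffile='/etc/ceph/ceph.conf',
                     rados_id='openstack') as cluster:
        with cluster.open_ioctx('vms') as ioctx:
            names = rbd.RBD().list(ioctx)
            assert '3aba266a-af9d-4454-937a-ca3d562d7140_disk.config' in names
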
Feb  2 05:04:20 np0005604790 kernel: tape8aea164-d5: entered promiscuous mode
Feb  2 05:04:20 np0005604790 NetworkManager[49024]: <info>  [1770026660.7363] manager: (tape8aea164-d5): new Tun device (/org/freedesktop/NetworkManager/Devices/32)
Feb  2 05:04:20 np0005604790 ovn_controller[154631]: 2026-02-02T10:04:20Z|00038|binding|INFO|Claiming lport e8aea164-d544-4241-b141-038f3e866bd3 for this chassis.
Feb  2 05:04:20 np0005604790 ovn_controller[154631]: 2026-02-02T10:04:20Z|00039|binding|INFO|e8aea164-d544-4241-b141-038f3e866bd3: Claiming fa:16:3e:fc:13:4a 10.100.0.12
Feb  2 05:04:20 np0005604790 nova_compute[252672]: 2026-02-02 10:04:20.781 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:04:20 np0005604790 systemd-machined[219024]: New machine qemu-2-instance-00000003.
Feb  2 05:04:20 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:04:20.804 165364 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fc:13:4a 10.100.0.12'], port_security=['fa:16:3e:fc:13:4a 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '3aba266a-af9d-4454-937a-ca3d562d7140', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43244da2-ad24-493a-be04-b3f920faba77', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'efbfe697ca674d72b47da5adf3e42c0c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3b4838c9-599e-43e1-a853-e98db3d912cf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=410d0273-b56a-4a25-b2e1-2c096529cc47, chassis=[<ovs.db.idl.Row object at 0x7fa8c6e46640>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa8c6e46640>], logical_port=e8aea164-d544-4241-b141-038f3e866bd3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 05:04:20 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:04:20.805 165364 INFO neutron.agent.ovn.metadata.agent [-] Port e8aea164-d544-4241-b141-038f3e866bd3 in datapath 43244da2-ad24-493a-be04-b3f920faba77 bound to our chassis#033[00m
Feb  2 05:04:20 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:04:20.806 165364 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 43244da2-ad24-493a-be04-b3f920faba77#033[00m
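
The metadata agent reacts to the chassis claim through an ovsdbapp row event; the constructor arguments are even echoed in the "Matched UPDATE" line above. A stripped-down sketch of the same pattern (the real neutron event class carries more matching logic than this):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # Same (events, table, conditions) triple the log echoes.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def match_fn(self, event, row, old):
            # Fire only when the chassis column actually changed.
            return hasattr(old, 'chassis') and bool(row.chassis)

        def run(self, event, row, old):
            print('port %s bound here, provision metadata'
                  % row.logical_port)
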
Feb  2 05:04:20 np0005604790 systemd[1]: Started Virtual Machine qemu-2-instance-00000003.
Feb  2 05:04:20 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:04:20.817 257524 DEBUG oslo.privsep.daemon [-] privsep: reply[09f7bcd0-cb75-40d2-b670-8769ac0e31c5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:04:20 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:04:20.818 165364 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap43244da2-a1 in ovnmeta-43244da2-ad24-493a-be04-b3f920faba77 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb  2 05:04:20 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:04:20.820 257524 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap43244da2-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb  2 05:04:20 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:04:20.820 257524 DEBUG oslo.privsep.daemon [-] privsep: reply[d17a08e1-c146-4934-acde-d89f5fe91195]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:04:20 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:04:20.821 257524 DEBUG oslo.privsep.daemon [-] privsep: reply[a5db76c3-e637-42a1-aaac-06e03628abf3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
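
The "Creating VETH" step runs pyroute2 calls through the privsep daemon, which is what the reply lines above are. A hedged pyroute2 sketch of the same operation, with interface and namespace names copied from the log (requires root, and the ovnmeta namespace must already exist):

    from pyroute2 import IPRoute

    NS = 'ovnmeta-43244da2-ad24-493a-be04-b3f920faba77'

    ip = IPRoute()
    # Create the pair, then push the -a1 end into the namespace.
    ip.link('add', ifname='tap43244da2-a0', kind='veth',
            peer='tap43244da2-a1')
    idx = ip.link_lookup(ifname='tap43244da2-a1')[0]
    ip.link('set', index=idx, net_ns_fd=NS)
    ip.close()
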
Feb  2 05:04:20 np0005604790 systemd-udevd[260202]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 05:04:20 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:04:20.831 166028 DEBUG oslo.privsep.daemon [-] privsep: reply[f841c2f4-8f23-4f3e-9541-0e2113f1b5a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:04:20 np0005604790 ovn_controller[154631]: 2026-02-02T10:04:20Z|00040|binding|INFO|Setting lport e8aea164-d544-4241-b141-038f3e866bd3 ovn-installed in OVS
Feb  2 05:04:20 np0005604790 ovn_controller[154631]: 2026-02-02T10:04:20Z|00041|binding|INFO|Setting lport e8aea164-d544-4241-b141-038f3e866bd3 up in Southbound
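
Once ovn-controller sets the lport up in the Southbound DB, the binding is observable from the CLI as well. A quick check via subprocess, assuming ovn-sbctl on this node can reach the SB database (in this deployment it may need an explicit --db=... argument):

    import subprocess

    # The chassis column and up should now reflect this host.
    print(subprocess.run(
        ['ovn-sbctl', 'find', 'Port_Binding',
         'logical_port=e8aea164-d544-4241-b141-038f3e866bd3'],
        capture_output=True, text=True).stdout)
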
Feb  2 05:04:20 np0005604790 nova_compute[252672]: 2026-02-02 10:04:20.839 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:04:20 np0005604790 NetworkManager[49024]: <info>  [1770026660.8415] device (tape8aea164-d5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb  2 05:04:20 np0005604790 NetworkManager[49024]: <info>  [1770026660.8422] device (tape8aea164-d5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb  2 05:04:20 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:04:20.848 257524 DEBUG oslo.privsep.daemon [-] privsep: reply[abbc3c63-ea9c-4985-85a2-14b6594ca7c6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:04:20 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:04:20.869 257582 DEBUG oslo.privsep.daemon [-] privsep: reply[f2a5b44b-f70b-46f8-bc55-324565755584]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:04:20 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:04:20.874 257524 DEBUG oslo.privsep.daemon [-] privsep: reply[f9224061-8034-498b-8de8-280c426ebfb0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:04:20 np0005604790 NetworkManager[49024]: <info>  [1770026660.8766] manager: (tap43244da2-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/33)
Feb  2 05:04:20 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:04:20.905 257582 DEBUG oslo.privsep.daemon [-] privsep: reply[31a0896c-9f40-4429-92c0-d397676960be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:04:20 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:04:20.909 257582 DEBUG oslo.privsep.daemon [-] privsep: reply[81b62a7d-b860-48d7-a25e-b4ad6646c717]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:04:20 np0005604790 NetworkManager[49024]: <info>  [1770026660.9267] device (tap43244da2-a0): carrier: link connected
Feb  2 05:04:20 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:04:20.930 257582 DEBUG oslo.privsep.daemon [-] privsep: reply[f3bf726a-5bd8-46eb-86c9-0d6f0fcc837a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:04:20 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:04:20.942 257524 DEBUG oslo.privsep.daemon [-] privsep: reply[a47e4daf-11fc-4e76-b7c6-94eb2bd2802c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap43244da2-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:00:69:9f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 382021, 'reachable_time': 19465, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 260233, 'error': None, 'target': 'ovnmeta-43244da2-ad24-493a-be04-b3f920faba77', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:04:20 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:04:20.952 257524 DEBUG oslo.privsep.daemon [-] privsep: reply[28a8cd03-a624-4a24-8398-424a1a84fec9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe00:699f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 382021, 'tstamp': 382021}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 260234, 'error': None, 'target': 'ovnmeta-43244da2-ad24-493a-be04-b3f920faba77', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:04:20 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:04:20.964 257524 DEBUG oslo.privsep.daemon [-] privsep: reply[40e58404-6cdd-4068-8ea9-961481feeaa2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap43244da2-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:00:69:9f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 382021, 'reachable_time': 19465, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 260235, 'error': None, 'target': 'ovnmeta-43244da2-ad24-493a-be04-b3f920faba77', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
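
The two RTM_NEWLINK walls above are pyroute2 echoing the full link state of tap43244da2-a1 back through privsep. The same attributes can be read directly from the namespace:

    from pyroute2 import NetNS

    with NetNS('ovnmeta-43244da2-ad24-493a-be04-b3f920faba77') as ns:
        idx = ns.link_lookup(ifname='tap43244da2-a1')[0]
        msg = ns.link('get', index=idx)[0]
        print(msg.get_attr('IFLA_ADDRESS'))    # fa:16:3e:00:69:9f
        print(msg.get_attr('IFLA_OPERSTATE'))  # UP
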
Feb  2 05:04:20 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:04:20.991 257524 DEBUG oslo.privsep.daemon [-] privsep: reply[e5e4ce88-c8e0-49cb-9608-942a0f5cd00d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:04:21 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:04:21.034 257524 DEBUG oslo.privsep.daemon [-] privsep: reply[2d91b8b7-7beb-4e91-a13a-03626dd5f7cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:04:21 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:04:21.035 165364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap43244da2-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 05:04:21 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:04:21.036 165364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 05:04:21 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:04:21.036 165364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap43244da2-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 05:04:21 np0005604790 NetworkManager[49024]: <info>  [1770026661.0398] manager: (tap43244da2-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/34)
Feb  2 05:04:21 np0005604790 kernel: tap43244da2-a0: entered promiscuous mode
Feb  2 05:04:21 np0005604790 nova_compute[252672]: 2026-02-02 10:04:21.039 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:04:21 np0005604790 nova_compute[252672]: 2026-02-02 10:04:21.042 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:04:21 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:04:21.045 165364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap43244da2-a0, col_values=(('external_ids', {'iface-id': '7b523ab2-914d-4d5a-8cf0-5f452641a7fa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 05:04:21 np0005604790 nova_compute[252672]: 2026-02-02 10:04:21.047 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:04:21 np0005604790 ovn_controller[154631]: 2026-02-02T10:04:21Z|00042|binding|INFO|Releasing lport 7b523ab2-914d-4d5a-8cf0-5f452641a7fa from this chassis (sb_readonly=0)
Feb  2 05:04:21 np0005604790 nova_compute[252672]: 2026-02-02 10:04:21.048 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:04:21 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:04:21.051 165364 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/43244da2-ad24-493a-be04-b3f920faba77.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/43244da2-ad24-493a-be04-b3f920faba77.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb  2 05:04:21 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:04:21.052 257524 DEBUG oslo.privsep.daemon [-] privsep: reply[ecf397d5-fcea-4326-be24-ba16d30d4b73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:04:21 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:04:21.054 165364 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb  2 05:04:21 np0005604790 ovn_metadata_agent[165359]: global
Feb  2 05:04:21 np0005604790 ovn_metadata_agent[165359]:    log         /dev/log local0 debug
Feb  2 05:04:21 np0005604790 ovn_metadata_agent[165359]:    log-tag     haproxy-metadata-proxy-43244da2-ad24-493a-be04-b3f920faba77
Feb  2 05:04:21 np0005604790 ovn_metadata_agent[165359]:    user        root
Feb  2 05:04:21 np0005604790 ovn_metadata_agent[165359]:    group       root
Feb  2 05:04:21 np0005604790 ovn_metadata_agent[165359]:    maxconn     1024
Feb  2 05:04:21 np0005604790 ovn_metadata_agent[165359]:    pidfile     /var/lib/neutron/external/pids/43244da2-ad24-493a-be04-b3f920faba77.pid.haproxy
Feb  2 05:04:21 np0005604790 ovn_metadata_agent[165359]:    daemon
Feb  2 05:04:21 np0005604790 ovn_metadata_agent[165359]: 
Feb  2 05:04:21 np0005604790 ovn_metadata_agent[165359]: defaults
Feb  2 05:04:21 np0005604790 ovn_metadata_agent[165359]:    log global
Feb  2 05:04:21 np0005604790 ovn_metadata_agent[165359]:    mode http
Feb  2 05:04:21 np0005604790 ovn_metadata_agent[165359]:    option httplog
Feb  2 05:04:21 np0005604790 ovn_metadata_agent[165359]:    option dontlognull
Feb  2 05:04:21 np0005604790 ovn_metadata_agent[165359]:    option http-server-close
Feb  2 05:04:21 np0005604790 ovn_metadata_agent[165359]:    option forwardfor
Feb  2 05:04:21 np0005604790 ovn_metadata_agent[165359]:    retries                 3
Feb  2 05:04:21 np0005604790 ovn_metadata_agent[165359]:    timeout http-request    30s
Feb  2 05:04:21 np0005604790 ovn_metadata_agent[165359]:    timeout connect         30s
Feb  2 05:04:21 np0005604790 ovn_metadata_agent[165359]:    timeout client          32s
Feb  2 05:04:21 np0005604790 ovn_metadata_agent[165359]:    timeout server          32s
Feb  2 05:04:21 np0005604790 ovn_metadata_agent[165359]:    timeout http-keep-alive 30s
Feb  2 05:04:21 np0005604790 ovn_metadata_agent[165359]: 
Feb  2 05:04:21 np0005604790 ovn_metadata_agent[165359]: 
Feb  2 05:04:21 np0005604790 ovn_metadata_agent[165359]: listen listener
Feb  2 05:04:21 np0005604790 ovn_metadata_agent[165359]:    bind 169.254.169.254:80
Feb  2 05:04:21 np0005604790 ovn_metadata_agent[165359]:    server metadata /var/lib/neutron/metadata_proxy
Feb  2 05:04:21 np0005604790 ovn_metadata_agent[165359]:    http-request add-header X-OVN-Network-ID 43244da2-ad24-493a-be04-b3f920faba77
Feb  2 05:04:21 np0005604790 ovn_metadata_agent[165359]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
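
In the rendered config above, the bare path on the "server metadata" line is haproxy's UNIX-socket address form: requests to 169.254.169.254:80 inside the namespace are forwarded to the agent's socket, with X-OVN-Network-ID telling the agent which datapath to resolve the client against. A quick end-to-end probe, assuming it is run inside the namespace (e.g. under ip netns exec ovnmeta-43244da2-ad24-493a-be04-b3f920faba77):

    import urllib.request

    # Answered by the haproxy instance this config just started.
    url = 'http://169.254.169.254/openstack/latest/meta_data.json'
    print(urllib.request.urlopen(url, timeout=5).read()[:200])
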
Feb  2 05:04:21 np0005604790 nova_compute[252672]: 2026-02-02 10:04:21.057 252676 DEBUG nova.compute.manager [req-b2df7906-6e56-4153-a99c-fb802b302fd5 req-9ea13423-34d6-4a21-b25f-31dfc3577b7c b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] Received event network-vif-plugged-e8aea164-d544-4241-b141-038f3e866bd3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 05:04:21 np0005604790 nova_compute[252672]: 2026-02-02 10:04:21.057 252676 DEBUG oslo_concurrency.lockutils [req-b2df7906-6e56-4153-a99c-fb802b302fd5 req-9ea13423-34d6-4a21-b25f-31dfc3577b7c b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Acquiring lock "3aba266a-af9d-4454-937a-ca3d562d7140-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 05:04:21 np0005604790 nova_compute[252672]: 2026-02-02 10:04:21.058 252676 DEBUG oslo_concurrency.lockutils [req-b2df7906-6e56-4153-a99c-fb802b302fd5 req-9ea13423-34d6-4a21-b25f-31dfc3577b7c b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Lock "3aba266a-af9d-4454-937a-ca3d562d7140-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 05:04:21 np0005604790 nova_compute[252672]: 2026-02-02 10:04:21.058 252676 DEBUG oslo_concurrency.lockutils [req-b2df7906-6e56-4153-a99c-fb802b302fd5 req-9ea13423-34d6-4a21-b25f-31dfc3577b7c b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Lock "3aba266a-af9d-4454-937a-ca3d562d7140-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 05:04:21 np0005604790 nova_compute[252672]: 2026-02-02 10:04:21.058 252676 DEBUG nova.compute.manager [req-b2df7906-6e56-4153-a99c-fb802b302fd5 req-9ea13423-34d6-4a21-b25f-31dfc3577b7c b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] Processing event network-vif-plugged-e8aea164-d544-4241-b141-038f3e866bd3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
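
The lock/pop choreography above is nova's external-event plumbing: the spawning thread registers a named per-instance event, and this handler pops and fires it. A stripped-down illustration of the pattern using threading (nova's real implementation lives in nova.compute.manager.InstanceEvents and runs under eventlet):

    import threading

    class InstanceEvents:
        def __init__(self):
            self._lock = threading.Lock()
            self._events = {}  # (instance_uuid, name) -> Event

        def prepare(self, instance, name):
            with self._lock:
                ev = self._events[(instance, name)] = threading.Event()
            return ev

        def pop(self, instance, name):
            with self._lock:
                ev = self._events.pop((instance, name), None)
            if ev:
                ev.set()  # wakes wait_for_instance_event
            # else: the "Received unexpected event" warning seen later

    events = InstanceEvents()
    uuid = '3aba266a-af9d-4454-937a-ca3d562d7140'
    waiter = events.prepare(uuid, 'network-vif-plugged')
    events.pop(uuid, 'network-vif-plugged')
    assert waiter.wait(timeout=1)
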
Feb  2 05:04:21 np0005604790 nova_compute[252672]: 2026-02-02 10:04:21.059 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:04:21 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:04:21.060 165364 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-43244da2-ad24-493a-be04-b3f920faba77', 'env', 'PROCESS_TAG=haproxy-43244da2-ad24-493a-be04-b3f920faba77', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/43244da2-ad24-493a-be04-b3f920faba77.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb  2 05:04:21 np0005604790 podman[260298]: 2026-02-02 10:04:21.426109395 +0000 UTC m=+0.067906825 container create 5892545e09ff63d32f1c7ec9110e59db366efbde31ece064fa879ab0c83544ea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc, name=neutron-haproxy-ovnmeta-43244da2-ad24-493a-be04-b3f920faba77, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 05:04:21 np0005604790 systemd[1]: Started libpod-conmon-5892545e09ff63d32f1c7ec9110e59db366efbde31ece064fa879ab0c83544ea.scope.
Feb  2 05:04:21 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:04:21 np0005604790 nova_compute[252672]: 2026-02-02 10:04:21.481 252676 DEBUG nova.virt.driver [None req-e8d4f08b-73c0-49f9-aab1-c10f2abef40e - - - - - -] Emitting event <LifecycleEvent: 1770026661.4806056, 3aba266a-af9d-4454-937a-ca3d562d7140 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 05:04:21 np0005604790 nova_compute[252672]: 2026-02-02 10:04:21.482 252676 INFO nova.compute.manager [None req-e8d4f08b-73c0-49f9-aab1-c10f2abef40e - - - - - -] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] VM Started (Lifecycle Event)#033[00m
Feb  2 05:04:21 np0005604790 podman[260298]: 2026-02-02 10:04:21.39205172 +0000 UTC m=+0.033849120 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc
Feb  2 05:04:21 np0005604790 nova_compute[252672]: 2026-02-02 10:04:21.486 252676 DEBUG nova.compute.manager [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb  2 05:04:21 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ba4a3e39263c7ff3e897de25fdc6191ba975a22f2ebb990e9405a0250301ac3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb  2 05:04:21 np0005604790 nova_compute[252672]: 2026-02-02 10:04:21.498 252676 DEBUG nova.virt.libvirt.driver [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb  2 05:04:21 np0005604790 nova_compute[252672]: 2026-02-02 10:04:21.502 252676 INFO nova.virt.libvirt.driver [-] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] Instance spawned successfully.#033[00m
Feb  2 05:04:21 np0005604790 nova_compute[252672]: 2026-02-02 10:04:21.502 252676 DEBUG nova.virt.libvirt.driver [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb  2 05:04:21 np0005604790 podman[260298]: 2026-02-02 10:04:21.506667998 +0000 UTC m=+0.148465418 container init 5892545e09ff63d32f1c7ec9110e59db366efbde31ece064fa879ab0c83544ea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc, name=neutron-haproxy-ovnmeta-43244da2-ad24-493a-be04-b3f920faba77, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb  2 05:04:21 np0005604790 nova_compute[252672]: 2026-02-02 10:04:21.508 252676 DEBUG nova.compute.manager [None req-e8d4f08b-73c0-49f9-aab1-c10f2abef40e - - - - - -] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 05:04:21 np0005604790 nova_compute[252672]: 2026-02-02 10:04:21.512 252676 DEBUG nova.compute.manager [None req-e8d4f08b-73c0-49f9-aab1-c10f2abef40e - - - - - -] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
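
The numeric states in these sync lines come from nova.compute.power_state: the DB still says 0 (NOSTATE) while libvirt already reports 1 (RUNNING), which is normal mid-spawn. For reading such lines:

    # Values from nova.compute.power_state.
    POWER_STATE = {0: 'NOSTATE', 1: 'RUNNING', 3: 'PAUSED',
                   4: 'SHUTDOWN', 6: 'CRASHED', 7: 'SUSPENDED'}
    print(POWER_STATE[0], '->', POWER_STATE[1])  # NOSTATE -> RUNNING
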
Feb  2 05:04:21 np0005604790 podman[260298]: 2026-02-02 10:04:21.515917747 +0000 UTC m=+0.157715167 container start 5892545e09ff63d32f1c7ec9110e59db366efbde31ece064fa879ab0c83544ea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc, name=neutron-haproxy-ovnmeta-43244da2-ad24-493a-be04-b3f920faba77, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team)
Feb  2 05:04:21 np0005604790 nova_compute[252672]: 2026-02-02 10:04:21.531 252676 DEBUG nova.virt.libvirt.driver [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 05:04:21 np0005604790 nova_compute[252672]: 2026-02-02 10:04:21.532 252676 DEBUG nova.virt.libvirt.driver [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 05:04:21 np0005604790 nova_compute[252672]: 2026-02-02 10:04:21.533 252676 DEBUG nova.virt.libvirt.driver [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 05:04:21 np0005604790 nova_compute[252672]: 2026-02-02 10:04:21.533 252676 DEBUG nova.virt.libvirt.driver [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 05:04:21 np0005604790 nova_compute[252672]: 2026-02-02 10:04:21.534 252676 DEBUG nova.virt.libvirt.driver [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 05:04:21 np0005604790 nova_compute[252672]: 2026-02-02 10:04:21.535 252676 DEBUG nova.virt.libvirt.driver [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 05:04:21 np0005604790 nova_compute[252672]: 2026-02-02 10:04:21.541 252676 INFO nova.compute.manager [None req-e8d4f08b-73c0-49f9-aab1-c10f2abef40e - - - - - -] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 05:04:21 np0005604790 nova_compute[252672]: 2026-02-02 10:04:21.541 252676 DEBUG nova.virt.driver [None req-e8d4f08b-73c0-49f9-aab1-c10f2abef40e - - - - - -] Emitting event <LifecycleEvent: 1770026661.4823172, 3aba266a-af9d-4454-937a-ca3d562d7140 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 05:04:21 np0005604790 nova_compute[252672]: 2026-02-02 10:04:21.542 252676 INFO nova.compute.manager [None req-e8d4f08b-73c0-49f9-aab1-c10f2abef40e - - - - - -] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] VM Paused (Lifecycle Event)#033[00m
Feb  2 05:04:21 np0005604790 neutron-haproxy-ovnmeta-43244da2-ad24-493a-be04-b3f920faba77[260322]: [NOTICE]   (260326) : New worker (260328) forked
Feb  2 05:04:21 np0005604790 neutron-haproxy-ovnmeta-43244da2-ad24-493a-be04-b3f920faba77[260322]: [NOTICE]   (260326) : Loading success.
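
With haproxy's master/worker up, the metadata proxy container is fully started. A quick state check through the podman CLI from Python, with the container name copied from the log:

    import subprocess

    name = 'neutron-haproxy-ovnmeta-43244da2-ad24-493a-be04-b3f920faba77'
    state = subprocess.run(
        ['podman', 'inspect', '--format', '{{.State.Status}}', name],
        capture_output=True, text=True).stdout.strip()
    print(state)  # expect: running
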
Feb  2 05:04:21 np0005604790 nova_compute[252672]: 2026-02-02 10:04:21.582 252676 DEBUG nova.compute.manager [None req-e8d4f08b-73c0-49f9-aab1-c10f2abef40e - - - - - -] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 05:04:21 np0005604790 nova_compute[252672]: 2026-02-02 10:04:21.604 252676 DEBUG nova.virt.driver [None req-e8d4f08b-73c0-49f9-aab1-c10f2abef40e - - - - - -] Emitting event <LifecycleEvent: 1770026661.49778, 3aba266a-af9d-4454-937a-ca3d562d7140 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 05:04:21 np0005604790 nova_compute[252672]: 2026-02-02 10:04:21.604 252676 INFO nova.compute.manager [None req-e8d4f08b-73c0-49f9-aab1-c10f2abef40e - - - - - -] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] VM Resumed (Lifecycle Event)#033[00m
Feb  2 05:04:21 np0005604790 nova_compute[252672]: 2026-02-02 10:04:21.616 252676 INFO nova.compute.manager [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] Took 10.34 seconds to spawn the instance on the hypervisor.#033[00m
Feb  2 05:04:21 np0005604790 nova_compute[252672]: 2026-02-02 10:04:21.616 252676 DEBUG nova.compute.manager [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 05:04:21 np0005604790 nova_compute[252672]: 2026-02-02 10:04:21.627 252676 DEBUG nova.compute.manager [None req-e8d4f08b-73c0-49f9-aab1-c10f2abef40e - - - - - -] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 05:04:21 np0005604790 nova_compute[252672]: 2026-02-02 10:04:21.630 252676 DEBUG nova.compute.manager [None req-e8d4f08b-73c0-49f9-aab1-c10f2abef40e - - - - - -] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 05:04:21 np0005604790 nova_compute[252672]: 2026-02-02 10:04:21.648 252676 INFO nova.compute.manager [None req-e8d4f08b-73c0-49f9-aab1-c10f2abef40e - - - - - -] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 05:04:21 np0005604790 nova_compute[252672]: 2026-02-02 10:04:21.683 252676 INFO nova.compute.manager [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] Took 11.33 seconds to build instance.#033[00m
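
The spawn and build timings are logged in fixed phrasing, so they are easy to harvest across a whole journal when trending boot latency. A small sketch (the timings.py file name is illustrative; feed it a saved log, e.g. python3 timings.py < messages):

    import re
    import sys

    pat = re.compile(r'\[instance: ([0-9a-f-]+)\] Took ([\d.]+) '
                     r'seconds to (spawn|build)')
    for line in sys.stdin:
        m = pat.search(line)
        if m:
            print(m.group(1), m.group(3), m.group(2) + 's')
    # -> spawn 10.34s and build 11.33s for the boot in this log
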
Feb  2 05:04:21 np0005604790 nova_compute[252672]: 2026-02-02 10:04:21.700 252676 DEBUG oslo_concurrency.lockutils [None req-8a9bc8eb-c8d7-4671-9e74-90bb9fd68177 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Lock "3aba266a-af9d-4454-937a-ca3d562d7140" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.412s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 05:04:21 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:04:21 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:04:21 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:04:21.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:04:22 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:04:22 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:04:22 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:04:22 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:04:22.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:04:22 np0005604790 podman[260339]: 2026-02-02 10:04:22.380612237 +0000 UTC m=+0.098926858 container health_status e39122d9482da8df802204ab6c35fb7c982874580968140e6f06bdfc8eefae36 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Feb  2 05:04:22 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v749: 353 pgs: 353 active+clean; 88 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb  2 05:04:23 np0005604790 nova_compute[252672]: 2026-02-02 10:04:23.126 252676 DEBUG nova.compute.manager [req-7c5ba70e-ce1f-4aaa-a6e5-0fc2441c98b8 req-519fc16f-3d1a-4482-b808-2e17cd4519fe b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] Received event network-vif-plugged-e8aea164-d544-4241-b141-038f3e866bd3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 05:04:23 np0005604790 nova_compute[252672]: 2026-02-02 10:04:23.126 252676 DEBUG oslo_concurrency.lockutils [req-7c5ba70e-ce1f-4aaa-a6e5-0fc2441c98b8 req-519fc16f-3d1a-4482-b808-2e17cd4519fe b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Acquiring lock "3aba266a-af9d-4454-937a-ca3d562d7140-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 05:04:23 np0005604790 nova_compute[252672]: 2026-02-02 10:04:23.127 252676 DEBUG oslo_concurrency.lockutils [req-7c5ba70e-ce1f-4aaa-a6e5-0fc2441c98b8 req-519fc16f-3d1a-4482-b808-2e17cd4519fe b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Lock "3aba266a-af9d-4454-937a-ca3d562d7140-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 05:04:23 np0005604790 nova_compute[252672]: 2026-02-02 10:04:23.127 252676 DEBUG oslo_concurrency.lockutils [req-7c5ba70e-ce1f-4aaa-a6e5-0fc2441c98b8 req-519fc16f-3d1a-4482-b808-2e17cd4519fe b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Lock "3aba266a-af9d-4454-937a-ca3d562d7140-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 05:04:23 np0005604790 nova_compute[252672]: 2026-02-02 10:04:23.128 252676 DEBUG nova.compute.manager [req-7c5ba70e-ce1f-4aaa-a6e5-0fc2441c98b8 req-519fc16f-3d1a-4482-b808-2e17cd4519fe b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] No waiting events found dispatching network-vif-plugged-e8aea164-d544-4241-b141-038f3e866bd3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 05:04:23 np0005604790 nova_compute[252672]: 2026-02-02 10:04:23.128 252676 WARNING nova.compute.manager [req-7c5ba70e-ce1f-4aaa-a6e5-0fc2441c98b8 req-519fc16f-3d1a-4482-b808-2e17cd4519fe b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] Received unexpected event network-vif-plugged-e8aea164-d544-4241-b141-038f3e866bd3 for instance with vm_state active and task_state None.#033[00m
Feb  2 05:04:23 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:04:23 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:04:23 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:04:23.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:04:24 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:04:24 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:04:24 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:04:24.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:04:24 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v750: 353 pgs: 353 active+clean; 88 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Feb  2 05:04:24 np0005604790 ovn_controller[154631]: 2026-02-02T10:04:24Z|00043|binding|INFO|Releasing lport 7b523ab2-914d-4d5a-8cf0-5f452641a7fa from this chassis (sb_readonly=0)
Feb  2 05:04:24 np0005604790 nova_compute[252672]: 2026-02-02 10:04:24.662 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:04:24 np0005604790 NetworkManager[49024]: <info>  [1770026664.6654] manager: (patch-provnet-3738ab71-03c6-44c1-bc4f-10cf3e96782e-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/35)
Feb  2 05:04:24 np0005604790 NetworkManager[49024]: <info>  [1770026664.6663] manager: (patch-br-int-to-provnet-3738ab71-03c6-44c1-bc4f-10cf3e96782e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/36)
Feb  2 05:04:24 np0005604790 ovn_controller[154631]: 2026-02-02T10:04:24Z|00044|binding|INFO|Releasing lport 7b523ab2-914d-4d5a-8cf0-5f452641a7fa from this chassis (sb_readonly=0)
Feb  2 05:04:24 np0005604790 nova_compute[252672]: 2026-02-02 10:04:24.669 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:04:24 np0005604790 nova_compute[252672]: 2026-02-02 10:04:24.780 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:04:24 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-nfs-cephfs-compute-0-ooxkuo[97796]: [WARNING] 032/100424 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Feb  2 05:04:24 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:04:24] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Feb  2 05:04:24 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:04:24] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Feb  2 05:04:24 np0005604790 nova_compute[252672]: 2026-02-02 10:04:24.990 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:04:25 np0005604790 nova_compute[252672]: 2026-02-02 10:04:25.232 252676 DEBUG nova.compute.manager [req-7c8ee806-708e-463b-bb53-88e8fd5a2703 req-fa365fe0-531c-49c7-b3bf-c6e89b8d8bbb b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] Received event network-changed-e8aea164-d544-4241-b141-038f3e866bd3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb  2 05:04:25 np0005604790 nova_compute[252672]: 2026-02-02 10:04:25.232 252676 DEBUG nova.compute.manager [req-7c8ee806-708e-463b-bb53-88e8fd5a2703 req-fa365fe0-531c-49c7-b3bf-c6e89b8d8bbb b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] Refreshing instance network info cache due to event network-changed-e8aea164-d544-4241-b141-038f3e866bd3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb  2 05:04:25 np0005604790 nova_compute[252672]: 2026-02-02 10:04:25.233 252676 DEBUG oslo_concurrency.lockutils [req-7c8ee806-708e-463b-bb53-88e8fd5a2703 req-fa365fe0-531c-49c7-b3bf-c6e89b8d8bbb b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Acquiring lock "refresh_cache-3aba266a-af9d-4454-937a-ca3d562d7140" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb  2 05:04:25 np0005604790 nova_compute[252672]: 2026-02-02 10:04:25.234 252676 DEBUG oslo_concurrency.lockutils [req-7c8ee806-708e-463b-bb53-88e8fd5a2703 req-fa365fe0-531c-49c7-b3bf-c6e89b8d8bbb b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Acquired lock "refresh_cache-3aba266a-af9d-4454-937a-ca3d562d7140" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb  2 05:04:25 np0005604790 nova_compute[252672]: 2026-02-02 10:04:25.234 252676 DEBUG nova.network.neutron [req-7c8ee806-708e-463b-bb53-88e8fd5a2703 req-fa365fe0-531c-49c7-b3bf-c6e89b8d8bbb b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] Refreshing network info cache for port e8aea164-d544-4241-b141-038f3e866bd3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb  2 05:04:25 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:04:25 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:04:25 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:04:25.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:04:26 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:04:26 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:04:26 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:04:26.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:04:26 np0005604790 nova_compute[252672]: 2026-02-02 10:04:26.336 252676 DEBUG nova.network.neutron [req-7c8ee806-708e-463b-bb53-88e8fd5a2703 req-fa365fe0-531c-49c7-b3bf-c6e89b8d8bbb b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] Updated VIF entry in instance network info cache for port e8aea164-d544-4241-b141-038f3e866bd3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb  2 05:04:26 np0005604790 nova_compute[252672]: 2026-02-02 10:04:26.337 252676 DEBUG nova.network.neutron [req-7c8ee806-708e-463b-bb53-88e8fd5a2703 req-fa365fe0-531c-49c7-b3bf-c6e89b8d8bbb b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] Updating instance_info_cache with network_info: [{"id": "e8aea164-d544-4241-b141-038f3e866bd3", "address": "fa:16:3e:fc:13:4a", "network": {"id": "43244da2-ad24-493a-be04-b3f920faba77", "bridge": "br-int", "label": "tempest-network-smoke--1590532484", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "efbfe697ca674d72b47da5adf3e42c0c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8aea164-d5", "ovs_interfaceid": "e8aea164-d544-4241-b141-038f3e866bd3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb  2 05:04:26 np0005604790 nova_compute[252672]: 2026-02-02 10:04:26.364 252676 DEBUG oslo_concurrency.lockutils [req-7c8ee806-708e-463b-bb53-88e8fd5a2703 req-fa365fe0-531c-49c7-b3bf-c6e89b8d8bbb b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Releasing lock "refresh_cache-3aba266a-af9d-4454-937a-ca3d562d7140" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb  2 05:04:26 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v751: 353 pgs: 353 active+clean; 88 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Feb  2 05:04:27 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:04:27.107Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:04:27 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:04:27 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:04:27 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:04:27 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:04:27.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:04:28 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:04:28 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:04:28 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:04:28.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:04:28 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v752: 353 pgs: 353 active+clean; 88 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Feb  2 05:04:29 np0005604790 podman[260376]: 2026-02-02 10:04:29.345324572 +0000 UTC m=+0.055296206 container health_status 29bea7eb4976451e3dfdb1e9a3aba30b10e64cbce131bf8d09548b4a224f8ee8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 05:04:29 np0005604790 nova_compute[252672]: 2026-02-02 10:04:29.823 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:04:29 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:04:29 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:04:29 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:04:29.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:04:29 np0005604790 nova_compute[252672]: 2026-02-02 10:04:29.992 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:04:30 np0005604790 systemd[1]: ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1@nfs.cephfs.2.0.compute-0.fdwwab.service: Scheduled restart job, restart counter is at 11.
Feb  2 05:04:30 np0005604790 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.fdwwab for d241d473-9fcb-5f74-b163-f1ca4454e7f1.
Feb  2 05:04:30 np0005604790 systemd[1]: ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1@nfs.cephfs.2.0.compute-0.fdwwab.service: Consumed 1.150s CPU time.
Feb  2 05:04:30 np0005604790 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.fdwwab for d241d473-9fcb-5f74-b163-f1ca4454e7f1...
Feb  2 05:04:30 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:04:30 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:04:30 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:04:30.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:04:30 np0005604790 podman[260447]: 2026-02-02 10:04:30.491647737 +0000 UTC m=+0.053285332 container create 98cf6dae1ab8feec23d34379e4cd365c1f4e26263e73f93cb34bc5dd2a59d411 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325)
Feb  2 05:04:30 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9561751e4623e3e7b37c64f494d5a2bba79e6d84e8b5460a4dee8f04c63918b/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Feb  2 05:04:30 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9561751e4623e3e7b37c64f494d5a2bba79e6d84e8b5460a4dee8f04c63918b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 05:04:30 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9561751e4623e3e7b37c64f494d5a2bba79e6d84e8b5460a4dee8f04c63918b/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 05:04:30 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9561751e4623e3e7b37c64f494d5a2bba79e6d84e8b5460a4dee8f04c63918b/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.fdwwab-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 05:04:30 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v753: 353 pgs: 353 active+clean; 88 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Feb  2 05:04:30 np0005604790 podman[260447]: 2026-02-02 10:04:30.464530268 +0000 UTC m=+0.026167893 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:04:30 np0005604790 podman[260447]: 2026-02-02 10:04:30.560968268 +0000 UTC m=+0.122605883 container init 98cf6dae1ab8feec23d34379e4cd365c1f4e26263e73f93cb34bc5dd2a59d411 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 05:04:30 np0005604790 podman[260447]: 2026-02-02 10:04:30.565153501 +0000 UTC m=+0.126791096 container start 98cf6dae1ab8feec23d34379e4cd365c1f4e26263e73f93cb34bc5dd2a59d411 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Feb  2 05:04:30 np0005604790 bash[260447]: 98cf6dae1ab8feec23d34379e4cd365c1f4e26263e73f93cb34bc5dd2a59d411
Feb  2 05:04:30 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:04:30 : epoch 698076ae : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Feb  2 05:04:30 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:04:30 : epoch 698076ae : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Feb  2 05:04:30 np0005604790 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.fdwwab for d241d473-9fcb-5f74-b163-f1ca4454e7f1.
Feb  2 05:04:30 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:04:30 : epoch 698076ae : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Feb  2 05:04:30 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:04:30 : epoch 698076ae : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Feb  2 05:04:30 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:04:30 : epoch 698076ae : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Feb  2 05:04:30 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:04:30 : epoch 698076ae : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Feb  2 05:04:30 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:04:30 : epoch 698076ae : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Feb  2 05:04:30 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:04:30 : epoch 698076ae : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:04:31 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:04:31 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:04:31 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:04:31.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:04:32 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 05:04:32 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 05:04:32 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:04:32 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:04:32 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:04:32 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:04:32.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:04:32 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v754: 353 pgs: 353 active+clean; 88 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Feb  2 05:04:33 np0005604790 ceph-osd[82705]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Feb  2 05:04:33 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:04:33 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.002000053s ======
Feb  2 05:04:33 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:04:33.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Feb  2 05:04:34 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:04:34 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:04:34 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:04:34.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:04:34 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v755: 353 pgs: 353 active+clean; 109 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 115 op/s
Feb  2 05:04:34 np0005604790 nova_compute[252672]: 2026-02-02 10:04:34.827 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:04:34 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:04:34] "GET /metrics HTTP/1.1" 200 48462 "" "Prometheus/2.51.0"
Feb  2 05:04:34 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:04:34] "GET /metrics HTTP/1.1" 200 48462 "" "Prometheus/2.51.0"
Feb  2 05:04:34 np0005604790 ovn_controller[154631]: 2026-02-02T10:04:34Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:fc:13:4a 10.100.0.12
Feb  2 05:04:34 np0005604790 ovn_controller[154631]: 2026-02-02T10:04:34Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:fc:13:4a 10.100.0.12
Feb  2 05:04:34 np0005604790 nova_compute[252672]: 2026-02-02 10:04:34.994 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:04:35 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:04:35 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:04:35 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:04:35.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:04:36 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:04:36 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:04:36 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:04:36.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:04:36 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v756: 353 pgs: 353 active+clean; 109 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 171 KiB/s rd, 2.0 MiB/s wr, 41 op/s
Feb  2 05:04:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:04:36 : epoch 698076ae : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:04:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:04:36 : epoch 698076ae : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:04:37 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:04:37.108Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb  2 05:04:37 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:04:37.108Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:04:37 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:04:37 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:04:37 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:04:37 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:04:37.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:04:38 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:04:38 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:04:38 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:04:38.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:04:38 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v757: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 321 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Feb  2 05:04:39 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:04:39 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:04:39 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:04:39.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:04:39 np0005604790 nova_compute[252672]: 2026-02-02 10:04:39.871 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:04:39 np0005604790 nova_compute[252672]: 2026-02-02 10:04:39.996 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:04:40 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:04:40 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:04:40 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:04:40.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:04:40 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v758: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 321 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Feb  2 05:04:41 np0005604790 nova_compute[252672]: 2026-02-02 10:04:41.474 252676 INFO nova.compute.manager [None req-9d90ebd1-4fff-41b5-990e-0c150b1f0999 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] Get console output
Feb  2 05:04:41 np0005604790 nova_compute[252672]: 2026-02-02 10:04:41.482 258300 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Feb  2 05:04:41 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:04:41 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:04:41 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:04:41.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:04:42 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:04:42 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:04:42 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:04:42 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:04:42.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:04:42 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v759: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 321 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Feb  2 05:04:42 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:04:42 : epoch 698076ae : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Feb  2 05:04:42 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:04:42 : epoch 698076ae : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Feb  2 05:04:42 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:04:42 : epoch 698076ae : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Feb  2 05:04:42 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:04:42 : epoch 698076ae : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Feb  2 05:04:42 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:04:42 : epoch 698076ae : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Feb  2 05:04:42 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:04:42 : epoch 698076ae : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Feb  2 05:04:42 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:04:42 : epoch 698076ae : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Feb  2 05:04:42 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:04:42 : epoch 698076ae : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Feb  2 05:04:42 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:04:42 : epoch 698076ae : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Feb  2 05:04:42 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:04:42 : epoch 698076ae : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Feb  2 05:04:42 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:04:42 : epoch 698076ae : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Feb  2 05:04:42 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:04:42 : epoch 698076ae : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Feb  2 05:04:42 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:04:42 : epoch 698076ae : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Feb  2 05:04:42 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:04:42 : epoch 698076ae : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Feb  2 05:04:42 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:04:42 : epoch 698076ae : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Feb  2 05:04:42 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:04:42 : epoch 698076ae : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Feb  2 05:04:42 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:04:42 : epoch 698076ae : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Feb  2 05:04:42 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:04:42 : epoch 698076ae : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Feb  2 05:04:42 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:04:42 : epoch 698076ae : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Feb  2 05:04:42 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:04:42 : epoch 698076ae : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Feb  2 05:04:42 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:04:42 : epoch 698076ae : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Feb  2 05:04:42 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:04:42 : epoch 698076ae : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Feb  2 05:04:42 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:04:42 : epoch 698076ae : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Feb  2 05:04:42 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:04:42 : epoch 698076ae : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Feb  2 05:04:42 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:04:42 : epoch 698076ae : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Feb  2 05:04:42 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:04:42 : epoch 698076ae : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Feb  2 05:04:42 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:04:42 : epoch 698076ae : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Feb  2 05:04:42 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:04:42 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2424000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:04:43 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:04:43 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2428001e50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:04:43 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:04:43 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2408000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:04:43 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:04:43 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:04:43 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:04:43.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:04:44 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:04:44 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:04:44 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:04:44.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:04:44 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v760: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 321 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Feb  2 05:04:44 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-nfs-cephfs-compute-0-ooxkuo[97796]: [WARNING] 032/100444 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Feb  2 05:04:44 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:04:44 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2400000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:04:44 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:04:44] "GET /metrics HTTP/1.1" 200 48462 "" "Prometheus/2.51.0"
Feb  2 05:04:44 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:04:44] "GET /metrics HTTP/1.1" 200 48462 "" "Prometheus/2.51.0"
Feb  2 05:04:44 np0005604790 nova_compute[252672]: 2026-02-02 10:04:44.908 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:04:44 np0005604790 nova_compute[252672]: 2026-02-02 10:04:44.998 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:04:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:04:45.376 165364 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 05:04:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:04:45.377 165364 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 05:04:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:04:45.377 165364 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 05:04:45 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:04:45 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f240c000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:04:45 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:04:45 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2428002950 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:04:45 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:04:45 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:04:45 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:04:45.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:04:46 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:04:46 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:04:46 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:04:46.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:04:46 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v761: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 150 KiB/s rd, 108 KiB/s wr, 25 op/s
Feb  2 05:04:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:04:46 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2400000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:04:47 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:04:47.111Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb  2 05:04:47 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:04:47.111Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb  2 05:04:47 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:04:47.111Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb  2 05:04:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 05:04:47 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 05:04:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:04:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:04:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:04:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:04:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:04:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:04:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:04:47 np0005604790 ceph-mon[74489]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Feb  2 05:04:47 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:04:47.299870) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb  2 05:04:47 np0005604790 ceph-mon[74489]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Feb  2 05:04:47 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770026687299974, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 704, "num_deletes": 256, "total_data_size": 1019750, "memory_usage": 1034472, "flush_reason": "Manual Compaction"}
Feb  2 05:04:47 np0005604790 ceph-mon[74489]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Feb  2 05:04:47 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770026687314141, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 1010544, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23174, "largest_seqno": 23877, "table_properties": {"data_size": 1006812, "index_size": 1512, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8235, "raw_average_key_size": 18, "raw_value_size": 999314, "raw_average_value_size": 2266, "num_data_blocks": 65, "num_entries": 441, "num_filter_entries": 441, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770026642, "oldest_key_time": 1770026642, "file_creation_time": 1770026687, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "07840aea-639a-4cd3-a598-1774a042b57b", "db_session_id": "W2PO4QU95YGVZQBG6TZ2", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Feb  2 05:04:47 np0005604790 ceph-mon[74489]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 14340 microseconds, and 5136 cpu microseconds.
Feb  2 05:04:47 np0005604790 ceph-mon[74489]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 05:04:47 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:04:47.314224) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 1010544 bytes OK
Feb  2 05:04:47 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:04:47.314263) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Feb  2 05:04:47 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:04:47.316539) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Feb  2 05:04:47 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:04:47.316562) EVENT_LOG_v1 {"time_micros": 1770026687316555, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb  2 05:04:47 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:04:47.316592) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb  2 05:04:47 np0005604790 ceph-mon[74489]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 1016114, prev total WAL file size 1016114, number of live WAL files 2.
Feb  2 05:04:47 np0005604790 ceph-mon[74489]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 05:04:47 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:04:47.317347) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323530' seq:72057594037927935, type:22 .. '6C6F676D00353032' seq:0, type:0; will stop at (end)
Feb  2 05:04:47 np0005604790 ceph-mon[74489]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb  2 05:04:47 np0005604790 ceph-mon[74489]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(986KB)], [50(11MB)]
Feb  2 05:04:47 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770026687317405, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 13280217, "oldest_snapshot_seqno": -1}
Feb  2 05:04:47 np0005604790 ceph-mon[74489]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 5412 keys, 13121329 bytes, temperature: kUnknown
Feb  2 05:04:47 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770026687421182, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 13121329, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13084971, "index_size": 21720, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13573, "raw_key_size": 138365, "raw_average_key_size": 25, "raw_value_size": 12986903, "raw_average_value_size": 2399, "num_data_blocks": 886, "num_entries": 5412, "num_filter_entries": 5412, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770025063, "oldest_key_time": 0, "file_creation_time": 1770026687, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "07840aea-639a-4cd3-a598-1774a042b57b", "db_session_id": "W2PO4QU95YGVZQBG6TZ2", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Feb  2 05:04:47 np0005604790 ceph-mon[74489]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 05:04:47 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:04:47.421549) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 13121329 bytes
Feb  2 05:04:47 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:04:47.423089) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 127.9 rd, 126.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 11.7 +0.0 blob) out(12.5 +0.0 blob), read-write-amplify(26.1) write-amplify(13.0) OK, records in: 5941, records dropped: 529 output_compression: NoCompression
Feb  2 05:04:47 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:04:47.423106) EVENT_LOG_v1 {"time_micros": 1770026687423097, "job": 26, "event": "compaction_finished", "compaction_time_micros": 103868, "compaction_time_cpu_micros": 36116, "output_level": 6, "num_output_files": 1, "total_output_size": 13121329, "num_input_records": 5941, "num_output_records": 5412, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb  2 05:04:47 np0005604790 ceph-mon[74489]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 05:04:47 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770026687423369, "job": 26, "event": "table_file_deletion", "file_number": 52}
Feb  2 05:04:47 np0005604790 ceph-mon[74489]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 05:04:47 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770026687424915, "job": 26, "event": "table_file_deletion", "file_number": 50}
Feb  2 05:04:47 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:04:47.317160) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 05:04:47 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:04:47.424985) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 05:04:47 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:04:47.424995) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 05:04:47 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:04:47.424998) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 05:04:47 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:04:47.425001) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 05:04:47 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:04:47.425004) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 05:04:47 np0005604790 nova_compute[252672]: 2026-02-02 10:04:47.548 252676 DEBUG oslo_concurrency.lockutils [None req-13bc4756-9dc5-4aca-8c77-bd2489bb7d54 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Acquiring lock "interface-3aba266a-af9d-4454-937a-ca3d562d7140-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 05:04:47 np0005604790 nova_compute[252672]: 2026-02-02 10:04:47.549 252676 DEBUG oslo_concurrency.lockutils [None req-13bc4756-9dc5-4aca-8c77-bd2489bb7d54 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Lock "interface-3aba266a-af9d-4454-937a-ca3d562d7140-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 05:04:47 np0005604790 nova_compute[252672]: 2026-02-02 10:04:47.550 252676 DEBUG nova.objects.instance [None req-13bc4756-9dc5-4aca-8c77-bd2489bb7d54 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Lazy-loading 'flavor' on Instance uuid 3aba266a-af9d-4454-937a-ca3d562d7140 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 05:04:47 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:04:47 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24080016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:04:47 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:04:47 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f240c001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:04:47 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:04:47 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:04:47 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:04:47.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:04:47 np0005604790 nova_compute[252672]: 2026-02-02 10:04:47.953 252676 DEBUG nova.objects.instance [None req-13bc4756-9dc5-4aca-8c77-bd2489bb7d54 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Lazy-loading 'pci_requests' on Instance uuid 3aba266a-af9d-4454-937a-ca3d562d7140 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 05:04:47 np0005604790 nova_compute[252672]: 2026-02-02 10:04:47.967 252676 DEBUG nova.network.neutron [None req-13bc4756-9dc5-4aca-8c77-bd2489bb7d54 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb  2 05:04:48 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:04:48 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:04:48 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:04:48.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:04:48 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v762: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 150 KiB/s rd, 111 KiB/s wr, 25 op/s
Feb  2 05:04:48 np0005604790 nova_compute[252672]: 2026-02-02 10:04:48.571 252676 DEBUG nova.policy [None req-13bc4756-9dc5-4aca-8c77-bd2489bb7d54 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1b1695a2a70d4aa0aa350ba17d8f6d5e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'efbfe697ca674d72b47da5adf3e42c0c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb  2 05:04:48 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:04:48 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2428002950 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:04:49 np0005604790 nova_compute[252672]: 2026-02-02 10:04:49.044 252676 DEBUG nova.network.neutron [None req-13bc4756-9dc5-4aca-8c77-bd2489bb7d54 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] Successfully created port: 9a348207-ae0a-4c8e-b379-80035923d778 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb  2 05:04:49 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:04:49 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2400001b20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:04:49 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:04:49 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2408001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:04:49 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:04:49 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:04:49 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:04:49.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:04:49 np0005604790 nova_compute[252672]: 2026-02-02 10:04:49.909 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:04:50 np0005604790 nova_compute[252672]: 2026-02-02 10:04:50.000 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:04:50 np0005604790 nova_compute[252672]: 2026-02-02 10:04:50.049 252676 DEBUG nova.network.neutron [None req-13bc4756-9dc5-4aca-8c77-bd2489bb7d54 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] Successfully updated port: 9a348207-ae0a-4c8e-b379-80035923d778 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb  2 05:04:50 np0005604790 nova_compute[252672]: 2026-02-02 10:04:50.071 252676 DEBUG oslo_concurrency.lockutils [None req-13bc4756-9dc5-4aca-8c77-bd2489bb7d54 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Acquiring lock "refresh_cache-3aba266a-af9d-4454-937a-ca3d562d7140" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 05:04:50 np0005604790 nova_compute[252672]: 2026-02-02 10:04:50.071 252676 DEBUG oslo_concurrency.lockutils [None req-13bc4756-9dc5-4aca-8c77-bd2489bb7d54 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Acquired lock "refresh_cache-3aba266a-af9d-4454-937a-ca3d562d7140" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 05:04:50 np0005604790 nova_compute[252672]: 2026-02-02 10:04:50.072 252676 DEBUG nova.network.neutron [None req-13bc4756-9dc5-4aca-8c77-bd2489bb7d54 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb  2 05:04:50 np0005604790 nova_compute[252672]: 2026-02-02 10:04:50.138 252676 DEBUG nova.compute.manager [req-f61f81bb-4e83-444a-92b2-604d3f03cdb8 req-37cfd3fe-5f66-4a17-8793-0c46bfbc609e b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] Received event network-changed-9a348207-ae0a-4c8e-b379-80035923d778 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 05:04:50 np0005604790 nova_compute[252672]: 2026-02-02 10:04:50.139 252676 DEBUG nova.compute.manager [req-f61f81bb-4e83-444a-92b2-604d3f03cdb8 req-37cfd3fe-5f66-4a17-8793-0c46bfbc609e b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] Refreshing instance network info cache due to event network-changed-9a348207-ae0a-4c8e-b379-80035923d778. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 05:04:50 np0005604790 nova_compute[252672]: 2026-02-02 10:04:50.139 252676 DEBUG oslo_concurrency.lockutils [req-f61f81bb-4e83-444a-92b2-604d3f03cdb8 req-37cfd3fe-5f66-4a17-8793-0c46bfbc609e b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Acquiring lock "refresh_cache-3aba266a-af9d-4454-937a-ca3d562d7140" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 05:04:50 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:04:50 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:04:50 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:04:50.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:04:50 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v763: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 15 KiB/s wr, 1 op/s
Feb  2 05:04:50 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:04:50 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2408001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:04:51 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:04:51 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2428002950 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:04:51 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:04:51 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2400001b20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:04:51 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:04:51 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:04:51 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:04:51.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:04:52 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:04:52 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:04:52 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:04:52 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:04:52.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:04:52 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v764: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 15 KiB/s wr, 1 op/s
Feb  2 05:04:52 np0005604790 nova_compute[252672]: 2026-02-02 10:04:52.571 252676 DEBUG nova.network.neutron [None req-13bc4756-9dc5-4aca-8c77-bd2489bb7d54 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] Updating instance_info_cache with network_info: [{"id": "e8aea164-d544-4241-b141-038f3e866bd3", "address": "fa:16:3e:fc:13:4a", "network": {"id": "43244da2-ad24-493a-be04-b3f920faba77", "bridge": "br-int", "label": "tempest-network-smoke--1590532484", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "efbfe697ca674d72b47da5adf3e42c0c", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8aea164-d5", "ovs_interfaceid": "e8aea164-d544-4241-b141-038f3e866bd3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9a348207-ae0a-4c8e-b379-80035923d778", "address": "fa:16:3e:d8:65:2e", "network": {"id": "2c51a04b-2353-4ec7-9aa3-a143234fb3c5", "bridge": "br-int", "label": "tempest-network-smoke--1225045414", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "efbfe697ca674d72b47da5adf3e42c0c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a348207-ae", "ovs_interfaceid": "9a348207-ae0a-4c8e-b379-80035923d778", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 05:04:52 np0005604790 nova_compute[252672]: 2026-02-02 10:04:52.592 252676 DEBUG oslo_concurrency.lockutils [None req-13bc4756-9dc5-4aca-8c77-bd2489bb7d54 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Releasing lock "refresh_cache-3aba266a-af9d-4454-937a-ca3d562d7140" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 05:04:52 np0005604790 nova_compute[252672]: 2026-02-02 10:04:52.593 252676 DEBUG oslo_concurrency.lockutils [req-f61f81bb-4e83-444a-92b2-604d3f03cdb8 req-37cfd3fe-5f66-4a17-8793-0c46bfbc609e b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Acquired lock "refresh_cache-3aba266a-af9d-4454-937a-ca3d562d7140" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 05:04:52 np0005604790 nova_compute[252672]: 2026-02-02 10:04:52.593 252676 DEBUG nova.network.neutron [req-f61f81bb-4e83-444a-92b2-604d3f03cdb8 req-37cfd3fe-5f66-4a17-8793-0c46bfbc609e b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] Refreshing network info cache for port 9a348207-ae0a-4c8e-b379-80035923d778 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 05:04:52 np0005604790 nova_compute[252672]: 2026-02-02 10:04:52.596 252676 DEBUG nova.virt.libvirt.vif [None req-13bc4756-9dc5-4aca-8c77-bd2489bb7d54 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T10:04:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1261930807',display_name='tempest-TestNetworkBasicOps-server-1261930807',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1261930807',id=3,image_ref='d5e062d7-95ef-409c-9ad0-60f7cf6f44ce',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB27e0nJHIK58Z3ZCFxu9LfpabnYsIVBFEzTEWF3c/gr064x+O+jWsENBE8Yz1U86qtq3lzG/toFN1TQQpYsp7FBgfLCmIDAeD2/jIiciHozTuGuu580hwCQmvhv9zCqYg==',key_name='tempest-TestNetworkBasicOps-1916874117',keypairs=<?>,launch_index=0,launched_at=2026-02-02T10:04:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='efbfe697ca674d72b47da5adf3e42c0c',ramdisk_id='',reservation_id='r-b5tafoys',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='d5e062d7-95ef-409c-9ad0-60f7cf6f44ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-793549693',owner_user_name='tempest-TestNetworkBasicOps-793549693-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T10:04:21Z,user_data=None,user_id='1b1695a2a70d4aa0aa350ba17d8f6d5e',uuid=3aba266a-af9d-4454-937a-ca3d562d7140,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9a348207-ae0a-4c8e-b379-80035923d778", "address": "fa:16:3e:d8:65:2e", "network": {"id": "2c51a04b-2353-4ec7-9aa3-a143234fb3c5", "bridge": "br-int", "label": "tempest-network-smoke--1225045414", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "efbfe697ca674d72b47da5adf3e42c0c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a348207-ae", "ovs_interfaceid": "9a348207-ae0a-4c8e-b379-80035923d778", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb  2 05:04:52 np0005604790 nova_compute[252672]: 2026-02-02 10:04:52.596 252676 DEBUG nova.network.os_vif_util [None req-13bc4756-9dc5-4aca-8c77-bd2489bb7d54 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Converting VIF {"id": "9a348207-ae0a-4c8e-b379-80035923d778", "address": "fa:16:3e:d8:65:2e", "network": {"id": "2c51a04b-2353-4ec7-9aa3-a143234fb3c5", "bridge": "br-int", "label": "tempest-network-smoke--1225045414", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "efbfe697ca674d72b47da5adf3e42c0c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a348207-ae", "ovs_interfaceid": "9a348207-ae0a-4c8e-b379-80035923d778", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 05:04:52 np0005604790 nova_compute[252672]: 2026-02-02 10:04:52.597 252676 DEBUG nova.network.os_vif_util [None req-13bc4756-9dc5-4aca-8c77-bd2489bb7d54 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d8:65:2e,bridge_name='br-int',has_traffic_filtering=True,id=9a348207-ae0a-4c8e-b379-80035923d778,network=Network(2c51a04b-2353-4ec7-9aa3-a143234fb3c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9a348207-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 05:04:52 np0005604790 nova_compute[252672]: 2026-02-02 10:04:52.597 252676 DEBUG os_vif [None req-13bc4756-9dc5-4aca-8c77-bd2489bb7d54 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d8:65:2e,bridge_name='br-int',has_traffic_filtering=True,id=9a348207-ae0a-4c8e-b379-80035923d778,network=Network(2c51a04b-2353-4ec7-9aa3-a143234fb3c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9a348207-ae') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb  2 05:04:52 np0005604790 nova_compute[252672]: 2026-02-02 10:04:52.598 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:04:52 np0005604790 nova_compute[252672]: 2026-02-02 10:04:52.598 252676 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 05:04:52 np0005604790 nova_compute[252672]: 2026-02-02 10:04:52.598 252676 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 05:04:52 np0005604790 nova_compute[252672]: 2026-02-02 10:04:52.602 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:04:52 np0005604790 nova_compute[252672]: 2026-02-02 10:04:52.602 252676 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9a348207-ae, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 05:04:52 np0005604790 nova_compute[252672]: 2026-02-02 10:04:52.602 252676 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9a348207-ae, col_values=(('external_ids', {'iface-id': '9a348207-ae0a-4c8e-b379-80035923d778', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d8:65:2e', 'vm-uuid': '3aba266a-af9d-4454-937a-ca3d562d7140'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 05:04:52 np0005604790 NetworkManager[49024]: <info>  [1770026692.6061] manager: (tap9a348207-ae): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/37)
Feb  2 05:04:52 np0005604790 nova_compute[252672]: 2026-02-02 10:04:52.609 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:04:52 np0005604790 nova_compute[252672]: 2026-02-02 10:04:52.613 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:04:52 np0005604790 nova_compute[252672]: 2026-02-02 10:04:52.614 252676 INFO os_vif [None req-13bc4756-9dc5-4aca-8c77-bd2489bb7d54 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d8:65:2e,bridge_name='br-int',has_traffic_filtering=True,id=9a348207-ae0a-4c8e-b379-80035923d778,network=Network(2c51a04b-2353-4ec7-9aa3-a143234fb3c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9a348207-ae')#033[00m
Feb  2 05:04:52 np0005604790 nova_compute[252672]: 2026-02-02 10:04:52.615 252676 DEBUG nova.virt.libvirt.vif [None req-13bc4756-9dc5-4aca-8c77-bd2489bb7d54 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T10:04:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1261930807',display_name='tempest-TestNetworkBasicOps-server-1261930807',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1261930807',id=3,image_ref='d5e062d7-95ef-409c-9ad0-60f7cf6f44ce',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB27e0nJHIK58Z3ZCFxu9LfpabnYsIVBFEzTEWF3c/gr064x+O+jWsENBE8Yz1U86qtq3lzG/toFN1TQQpYsp7FBgfLCmIDAeD2/jIiciHozTuGuu580hwCQmvhv9zCqYg==',key_name='tempest-TestNetworkBasicOps-1916874117',keypairs=<?>,launch_index=0,launched_at=2026-02-02T10:04:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='efbfe697ca674d72b47da5adf3e42c0c',ramdisk_id='',reservation_id='r-b5tafoys',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='d5e062d7-95ef-409c-9ad0-60f7cf6f44ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-793549693',owner_user_name='tempest-TestNetworkBasicOps-793549693-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T10:04:21Z,user_data=None,user_id='1b1695a2a70d4aa0aa350ba17d8f6d5e',uuid=3aba266a-af9d-4454-937a-ca3d562d7140,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9a348207-ae0a-4c8e-b379-80035923d778", "address": "fa:16:3e:d8:65:2e", "network": {"id": "2c51a04b-2353-4ec7-9aa3-a143234fb3c5", "bridge": "br-int", "label": "tempest-network-smoke--1225045414", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "efbfe697ca674d72b47da5adf3e42c0c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a348207-ae", "ovs_interfaceid": "9a348207-ae0a-4c8e-b379-80035923d778", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb  2 05:04:52 np0005604790 nova_compute[252672]: 2026-02-02 10:04:52.616 252676 DEBUG nova.network.os_vif_util [None req-13bc4756-9dc5-4aca-8c77-bd2489bb7d54 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Converting VIF {"id": "9a348207-ae0a-4c8e-b379-80035923d778", "address": "fa:16:3e:d8:65:2e", "network": {"id": "2c51a04b-2353-4ec7-9aa3-a143234fb3c5", "bridge": "br-int", "label": "tempest-network-smoke--1225045414", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "efbfe697ca674d72b47da5adf3e42c0c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a348207-ae", "ovs_interfaceid": "9a348207-ae0a-4c8e-b379-80035923d778", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 05:04:52 np0005604790 nova_compute[252672]: 2026-02-02 10:04:52.617 252676 DEBUG nova.network.os_vif_util [None req-13bc4756-9dc5-4aca-8c77-bd2489bb7d54 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d8:65:2e,bridge_name='br-int',has_traffic_filtering=True,id=9a348207-ae0a-4c8e-b379-80035923d778,network=Network(2c51a04b-2353-4ec7-9aa3-a143234fb3c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9a348207-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 05:04:52 np0005604790 nova_compute[252672]: 2026-02-02 10:04:52.620 252676 DEBUG nova.virt.libvirt.guest [None req-13bc4756-9dc5-4aca-8c77-bd2489bb7d54 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] attach device xml: <interface type="ethernet">
Feb  2 05:04:52 np0005604790 nova_compute[252672]:  <mac address="fa:16:3e:d8:65:2e"/>
Feb  2 05:04:52 np0005604790 nova_compute[252672]:  <model type="virtio"/>
Feb  2 05:04:52 np0005604790 nova_compute[252672]:  <driver name="vhost" rx_queue_size="512"/>
Feb  2 05:04:52 np0005604790 nova_compute[252672]:  <mtu size="1442"/>
Feb  2 05:04:52 np0005604790 nova_compute[252672]:  <target dev="tap9a348207-ae"/>
Feb  2 05:04:52 np0005604790 nova_compute[252672]: </interface>
Feb  2 05:04:52 np0005604790 nova_compute[252672]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Feb  2 05:04:52 np0005604790 kernel: tap9a348207-ae: entered promiscuous mode
Feb  2 05:04:52 np0005604790 ovn_controller[154631]: 2026-02-02T10:04:52Z|00045|binding|INFO|Claiming lport 9a348207-ae0a-4c8e-b379-80035923d778 for this chassis.
Feb  2 05:04:52 np0005604790 ovn_controller[154631]: 2026-02-02T10:04:52Z|00046|binding|INFO|9a348207-ae0a-4c8e-b379-80035923d778: Claiming fa:16:3e:d8:65:2e 10.100.0.23
Feb  2 05:04:52 np0005604790 NetworkManager[49024]: <info>  [1770026692.6382] manager: (tap9a348207-ae): new Tun device (/org/freedesktop/NetworkManager/Devices/38)
Feb  2 05:04:52 np0005604790 nova_compute[252672]: 2026-02-02 10:04:52.637 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:04:52 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:04:52.648 165364 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d8:65:2e 10.100.0.23'], port_security=['fa:16:3e:d8:65:2e 10.100.0.23'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.23/28', 'neutron:device_id': '3aba266a-af9d-4454-937a-ca3d562d7140', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2c51a04b-2353-4ec7-9aa3-a143234fb3c5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'efbfe697ca674d72b47da5adf3e42c0c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '22473684-a0d2-4e4f-b1c5-3e6fdbc49578', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=198936d7-9859-45c5-96c4-3b0e54e64201, chassis=[<ovs.db.idl.Row object at 0x7fa8c6e46640>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa8c6e46640>], logical_port=9a348207-ae0a-4c8e-b379-80035923d778) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 05:04:52 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:04:52.650 165364 INFO neutron.agent.ovn.metadata.agent [-] Port 9a348207-ae0a-4c8e-b379-80035923d778 in datapath 2c51a04b-2353-4ec7-9aa3-a143234fb3c5 bound to our chassis#033[00m
Feb  2 05:04:52 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:04:52.652 165364 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2c51a04b-2353-4ec7-9aa3-a143234fb3c5#033[00m
Feb  2 05:04:52 np0005604790 nova_compute[252672]: 2026-02-02 10:04:52.659 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:04:52 np0005604790 ovn_controller[154631]: 2026-02-02T10:04:52Z|00047|binding|INFO|Setting lport 9a348207-ae0a-4c8e-b379-80035923d778 ovn-installed in OVS
Feb  2 05:04:52 np0005604790 ovn_controller[154631]: 2026-02-02T10:04:52Z|00048|binding|INFO|Setting lport 9a348207-ae0a-4c8e-b379-80035923d778 up in Southbound
Feb  2 05:04:52 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:04:52.667 257524 DEBUG oslo.privsep.daemon [-] privsep: reply[385e6232-7115-4430-b1ba-09d90339f6f3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:04:52 np0005604790 nova_compute[252672]: 2026-02-02 10:04:52.666 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:04:52 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:04:52.668 165364 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2c51a04b-21 in ovnmeta-2c51a04b-2353-4ec7-9aa3-a143234fb3c5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb  2 05:04:52 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:04:52.671 257524 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2c51a04b-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb  2 05:04:52 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:04:52.671 257524 DEBUG oslo.privsep.daemon [-] privsep: reply[847213c0-c6ca-4295-a030-bca9faf80b1e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:04:52 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:04:52.673 257524 DEBUG oslo.privsep.daemon [-] privsep: reply[e78bf2e9-37d4-42f0-818d-28d3601b45f6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:04:52 np0005604790 systemd-udevd[260583]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 05:04:52 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:04:52.687 166028 DEBUG oslo.privsep.daemon [-] privsep: reply[2802d9fd-fac4-4e05-8409-144acef16833]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:04:52 np0005604790 NetworkManager[49024]: <info>  [1770026692.7024] device (tap9a348207-ae): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb  2 05:04:52 np0005604790 NetworkManager[49024]: <info>  [1770026692.7031] device (tap9a348207-ae): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb  2 05:04:52 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:04:52.716 257524 DEBUG oslo.privsep.daemon [-] privsep: reply[2fe3eaeb-2972-45d1-81c6-63f864e5ea81]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:04:52 np0005604790 nova_compute[252672]: 2026-02-02 10:04:52.733 252676 DEBUG nova.virt.libvirt.driver [None req-13bc4756-9dc5-4aca-8c77-bd2489bb7d54 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 05:04:52 np0005604790 nova_compute[252672]: 2026-02-02 10:04:52.733 252676 DEBUG nova.virt.libvirt.driver [None req-13bc4756-9dc5-4aca-8c77-bd2489bb7d54 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 05:04:52 np0005604790 nova_compute[252672]: 2026-02-02 10:04:52.733 252676 DEBUG nova.virt.libvirt.driver [None req-13bc4756-9dc5-4aca-8c77-bd2489bb7d54 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] No VIF found with MAC fa:16:3e:fc:13:4a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb  2 05:04:52 np0005604790 nova_compute[252672]: 2026-02-02 10:04:52.733 252676 DEBUG nova.virt.libvirt.driver [None req-13bc4756-9dc5-4aca-8c77-bd2489bb7d54 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] No VIF found with MAC fa:16:3e:d8:65:2e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb  2 05:04:52 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:04:52.747 257582 DEBUG oslo.privsep.daemon [-] privsep: reply[a9ce4191-22ea-4f55-aa82-c6675578e41c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:04:52 np0005604790 NetworkManager[49024]: <info>  [1770026692.7564] manager: (tap2c51a04b-20): new Veth device (/org/freedesktop/NetworkManager/Devices/39)
Feb  2 05:04:52 np0005604790 systemd-udevd[260587]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 05:04:52 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:04:52.755 257524 DEBUG oslo.privsep.daemon [-] privsep: reply[76c5ed9b-59a2-40d6-a6f8-3367491da771]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:04:52 np0005604790 nova_compute[252672]: 2026-02-02 10:04:52.764 252676 DEBUG nova.virt.libvirt.guest [None req-13bc4756-9dc5-4aca-8c77-bd2489bb7d54 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb  2 05:04:52 np0005604790 nova_compute[252672]:  <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb  2 05:04:52 np0005604790 nova_compute[252672]:  <nova:name>tempest-TestNetworkBasicOps-server-1261930807</nova:name>
Feb  2 05:04:52 np0005604790 nova_compute[252672]:  <nova:creationTime>2026-02-02 10:04:52</nova:creationTime>
Feb  2 05:04:52 np0005604790 nova_compute[252672]:  <nova:flavor name="m1.nano">
Feb  2 05:04:52 np0005604790 nova_compute[252672]:    <nova:memory>128</nova:memory>
Feb  2 05:04:52 np0005604790 nova_compute[252672]:    <nova:disk>1</nova:disk>
Feb  2 05:04:52 np0005604790 nova_compute[252672]:    <nova:swap>0</nova:swap>
Feb  2 05:04:52 np0005604790 nova_compute[252672]:    <nova:ephemeral>0</nova:ephemeral>
Feb  2 05:04:52 np0005604790 nova_compute[252672]:    <nova:vcpus>1</nova:vcpus>
Feb  2 05:04:52 np0005604790 nova_compute[252672]:  </nova:flavor>
Feb  2 05:04:52 np0005604790 nova_compute[252672]:  <nova:owner>
Feb  2 05:04:52 np0005604790 nova_compute[252672]:    <nova:user uuid="1b1695a2a70d4aa0aa350ba17d8f6d5e">tempest-TestNetworkBasicOps-793549693-project-member</nova:user>
Feb  2 05:04:52 np0005604790 nova_compute[252672]:    <nova:project uuid="efbfe697ca674d72b47da5adf3e42c0c">tempest-TestNetworkBasicOps-793549693</nova:project>
Feb  2 05:04:52 np0005604790 nova_compute[252672]:  </nova:owner>
Feb  2 05:04:52 np0005604790 nova_compute[252672]:  <nova:root type="image" uuid="d5e062d7-95ef-409c-9ad0-60f7cf6f44ce"/>
Feb  2 05:04:52 np0005604790 nova_compute[252672]:  <nova:ports>
Feb  2 05:04:52 np0005604790 nova_compute[252672]:    <nova:port uuid="e8aea164-d544-4241-b141-038f3e866bd3">
Feb  2 05:04:52 np0005604790 nova_compute[252672]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb  2 05:04:52 np0005604790 nova_compute[252672]:    </nova:port>
Feb  2 05:04:52 np0005604790 nova_compute[252672]:    <nova:port uuid="9a348207-ae0a-4c8e-b379-80035923d778">
Feb  2 05:04:52 np0005604790 nova_compute[252672]:      <nova:ip type="fixed" address="10.100.0.23" ipVersion="4"/>
Feb  2 05:04:52 np0005604790 nova_compute[252672]:    </nova:port>
Feb  2 05:04:52 np0005604790 nova_compute[252672]:  </nova:ports>
Feb  2 05:04:52 np0005604790 nova_compute[252672]: </nova:instance>
Feb  2 05:04:52 np0005604790 nova_compute[252672]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Feb  2 05:04:52 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:04:52.793 257582 DEBUG oslo.privsep.daemon [-] privsep: reply[903c6d88-1301-4178-ba60-a105e8107dc6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:04:52 np0005604790 nova_compute[252672]: 2026-02-02 10:04:52.796 252676 DEBUG oslo_concurrency.lockutils [None req-13bc4756-9dc5-4aca-8c77-bd2489bb7d54 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Lock "interface-3aba266a-af9d-4454-937a-ca3d562d7140-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 5.247s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 05:04:52 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:04:52.797 257582 DEBUG oslo.privsep.daemon [-] privsep: reply[eeaedc26-b2d4-4917-9bb3-9c32b70d85aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:04:52 np0005604790 podman[260573]: 2026-02-02 10:04:52.799459771 +0000 UTC m=+0.132567021 container health_status e39122d9482da8df802204ab6c35fb7c982874580968140e6f06bdfc8eefae36 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible)
Feb  2 05:04:52 np0005604790 NetworkManager[49024]: <info>  [1770026692.8234] device (tap2c51a04b-20): carrier: link connected
Feb  2 05:04:52 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:04:52.831 257582 DEBUG oslo.privsep.daemon [-] privsep: reply[ade67bc0-7985-4d2b-8c41-ff65f3d8d109]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:04:52 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:04:52.851 257524 DEBUG oslo.privsep.daemon [-] privsep: reply[ed262087-d0bb-4a97-8f4d-4c617ebe5eff]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2c51a04b-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ec:42:0a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 385211, 'reachable_time': 24878, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 260624, 'error': None, 'target': 'ovnmeta-2c51a04b-2353-4ec7-9aa3-a143234fb3c5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:04:52 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:04:52 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2400001b20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:04:52 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:04:52.869 257524 DEBUG oslo.privsep.daemon [-] privsep: reply[c98b90a1-0f30-4910-ba9c-c466f4c90f6b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feec:420a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 385211, 'tstamp': 385211}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 260625, 'error': None, 'target': 'ovnmeta-2c51a04b-2353-4ec7-9aa3-a143234fb3c5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:04:52 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:04:52.890 257524 DEBUG oslo.privsep.daemon [-] privsep: reply[f010197a-4369-42f2-bc9c-92841a3b9721]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2c51a04b-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ec:42:0a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 385211, 'reachable_time': 24878, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 260626, 'error': None, 'target': 'ovnmeta-2c51a04b-2353-4ec7-9aa3-a143234fb3c5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:04:52 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:04:52.918 257524 DEBUG oslo.privsep.daemon [-] privsep: reply[e7b70df7-a027-4216-9343-d55697d9da4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:04:52 np0005604790 nova_compute[252672]: 2026-02-02 10:04:52.958 252676 DEBUG nova.compute.manager [req-ab1f8261-25d8-4225-a88a-0ed21d9101b9 req-f5975c97-7db1-49fa-a6a0-80fe3abe5a4d b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] Received event network-vif-plugged-9a348207-ae0a-4c8e-b379-80035923d778 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 05:04:52 np0005604790 nova_compute[252672]: 2026-02-02 10:04:52.959 252676 DEBUG oslo_concurrency.lockutils [req-ab1f8261-25d8-4225-a88a-0ed21d9101b9 req-f5975c97-7db1-49fa-a6a0-80fe3abe5a4d b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Acquiring lock "3aba266a-af9d-4454-937a-ca3d562d7140-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 05:04:52 np0005604790 nova_compute[252672]: 2026-02-02 10:04:52.960 252676 DEBUG oslo_concurrency.lockutils [req-ab1f8261-25d8-4225-a88a-0ed21d9101b9 req-f5975c97-7db1-49fa-a6a0-80fe3abe5a4d b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Lock "3aba266a-af9d-4454-937a-ca3d562d7140-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 05:04:52 np0005604790 nova_compute[252672]: 2026-02-02 10:04:52.960 252676 DEBUG oslo_concurrency.lockutils [req-ab1f8261-25d8-4225-a88a-0ed21d9101b9 req-f5975c97-7db1-49fa-a6a0-80fe3abe5a4d b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Lock "3aba266a-af9d-4454-937a-ca3d562d7140-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 05:04:52 np0005604790 nova_compute[252672]: 2026-02-02 10:04:52.960 252676 DEBUG nova.compute.manager [req-ab1f8261-25d8-4225-a88a-0ed21d9101b9 req-f5975c97-7db1-49fa-a6a0-80fe3abe5a4d b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] No waiting events found dispatching network-vif-plugged-9a348207-ae0a-4c8e-b379-80035923d778 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 05:04:52 np0005604790 nova_compute[252672]: 2026-02-02 10:04:52.960 252676 WARNING nova.compute.manager [req-ab1f8261-25d8-4225-a88a-0ed21d9101b9 req-f5975c97-7db1-49fa-a6a0-80fe3abe5a4d b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] Received unexpected event network-vif-plugged-9a348207-ae0a-4c8e-b379-80035923d778 for instance with vm_state active and task_state None.#033[00m
Feb  2 05:04:52 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:04:52.976 257524 DEBUG oslo.privsep.daemon [-] privsep: reply[0aae96ed-78a8-484a-913b-9d2cc8ffe4a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:04:52 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:04:52.978 165364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2c51a04b-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 05:04:52 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:04:52.979 165364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 05:04:52 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:04:52.980 165364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2c51a04b-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 05:04:52 np0005604790 kernel: tap2c51a04b-20: entered promiscuous mode
Feb  2 05:04:52 np0005604790 NetworkManager[49024]: <info>  [1770026692.9832] manager: (tap2c51a04b-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/40)
Feb  2 05:04:52 np0005604790 nova_compute[252672]: 2026-02-02 10:04:52.982 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:04:52 np0005604790 nova_compute[252672]: 2026-02-02 10:04:52.985 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:04:52 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:04:52.990 165364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2c51a04b-20, col_values=(('external_ids', {'iface-id': '4d6c5fa0-f074-4fd8-8997-af06360e1bcc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 05:04:52 np0005604790 nova_compute[252672]: 2026-02-02 10:04:52.992 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:04:52 np0005604790 ovn_controller[154631]: 2026-02-02T10:04:52Z|00049|binding|INFO|Releasing lport 4d6c5fa0-f074-4fd8-8997-af06360e1bcc from this chassis (sb_readonly=0)
Feb  2 05:04:52 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:04:52.993 165364 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2c51a04b-2353-4ec7-9aa3-a143234fb3c5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2c51a04b-2353-4ec7-9aa3-a143234fb3c5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb  2 05:04:52 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:04:52.994 257524 DEBUG oslo.privsep.daemon [-] privsep: reply[80b11394-4d08-46e9-b663-3941e134cd04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:04:52 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:04:52.995 165364 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb  2 05:04:52 np0005604790 ovn_metadata_agent[165359]: global
Feb  2 05:04:52 np0005604790 ovn_metadata_agent[165359]:    log         /dev/log local0 debug
Feb  2 05:04:52 np0005604790 ovn_metadata_agent[165359]:    log-tag     haproxy-metadata-proxy-2c51a04b-2353-4ec7-9aa3-a143234fb3c5
Feb  2 05:04:52 np0005604790 ovn_metadata_agent[165359]:    user        root
Feb  2 05:04:52 np0005604790 ovn_metadata_agent[165359]:    group       root
Feb  2 05:04:52 np0005604790 ovn_metadata_agent[165359]:    maxconn     1024
Feb  2 05:04:52 np0005604790 ovn_metadata_agent[165359]:    pidfile     /var/lib/neutron/external/pids/2c51a04b-2353-4ec7-9aa3-a143234fb3c5.pid.haproxy
Feb  2 05:04:52 np0005604790 ovn_metadata_agent[165359]:    daemon
Feb  2 05:04:52 np0005604790 ovn_metadata_agent[165359]: 
Feb  2 05:04:52 np0005604790 ovn_metadata_agent[165359]: defaults
Feb  2 05:04:52 np0005604790 ovn_metadata_agent[165359]:    log global
Feb  2 05:04:52 np0005604790 ovn_metadata_agent[165359]:    mode http
Feb  2 05:04:52 np0005604790 ovn_metadata_agent[165359]:    option httplog
Feb  2 05:04:52 np0005604790 ovn_metadata_agent[165359]:    option dontlognull
Feb  2 05:04:52 np0005604790 ovn_metadata_agent[165359]:    option http-server-close
Feb  2 05:04:52 np0005604790 ovn_metadata_agent[165359]:    option forwardfor
Feb  2 05:04:52 np0005604790 ovn_metadata_agent[165359]:    retries                 3
Feb  2 05:04:52 np0005604790 ovn_metadata_agent[165359]:    timeout http-request    30s
Feb  2 05:04:52 np0005604790 ovn_metadata_agent[165359]:    timeout connect         30s
Feb  2 05:04:52 np0005604790 ovn_metadata_agent[165359]:    timeout client          32s
Feb  2 05:04:52 np0005604790 ovn_metadata_agent[165359]:    timeout server          32s
Feb  2 05:04:52 np0005604790 ovn_metadata_agent[165359]:    timeout http-keep-alive 30s
Feb  2 05:04:52 np0005604790 ovn_metadata_agent[165359]: 
Feb  2 05:04:52 np0005604790 ovn_metadata_agent[165359]: 
Feb  2 05:04:52 np0005604790 ovn_metadata_agent[165359]: listen listener
Feb  2 05:04:52 np0005604790 ovn_metadata_agent[165359]:    bind 169.254.169.254:80
Feb  2 05:04:52 np0005604790 ovn_metadata_agent[165359]:    server metadata /var/lib/neutron/metadata_proxy
Feb  2 05:04:52 np0005604790 ovn_metadata_agent[165359]:    http-request add-header X-OVN-Network-ID 2c51a04b-2353-4ec7-9aa3-a143234fb3c5
Feb  2 05:04:52 np0005604790 ovn_metadata_agent[165359]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb  2 05:04:52 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:04:52.996 165364 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2c51a04b-2353-4ec7-9aa3-a143234fb3c5', 'env', 'PROCESS_TAG=haproxy-2c51a04b-2353-4ec7-9aa3-a143234fb3c5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2c51a04b-2353-4ec7-9aa3-a143234fb3c5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb  2 05:04:52 np0005604790 nova_compute[252672]: 2026-02-02 10:04:52.997 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:04:53 np0005604790 podman[260658]: 2026-02-02 10:04:53.410164111 +0000 UTC m=+0.070220616 container create 292c344a5afdeec053cbbd635b0d4db98e232dfa1d632677069891335c4709fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc, name=neutron-haproxy-ovnmeta-2c51a04b-2353-4ec7-9aa3-a143234fb3c5, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Feb  2 05:04:53 np0005604790 systemd[1]: Started libpod-conmon-292c344a5afdeec053cbbd635b0d4db98e232dfa1d632677069891335c4709fd.scope.
Feb  2 05:04:53 np0005604790 podman[260658]: 2026-02-02 10:04:53.373524468 +0000 UTC m=+0.033581023 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc
Feb  2 05:04:53 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:04:53 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39512e8e2a2eec41d0c8362a2f35a6321c62a1b5a919041e1bff41a622c56e17/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb  2 05:04:53 np0005604790 podman[260658]: 2026-02-02 10:04:53.501305329 +0000 UTC m=+0.161361814 container init 292c344a5afdeec053cbbd635b0d4db98e232dfa1d632677069891335c4709fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc, name=neutron-haproxy-ovnmeta-2c51a04b-2353-4ec7-9aa3-a143234fb3c5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb  2 05:04:53 np0005604790 podman[260658]: 2026-02-02 10:04:53.508686487 +0000 UTC m=+0.168742962 container start 292c344a5afdeec053cbbd635b0d4db98e232dfa1d632677069891335c4709fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc, name=neutron-haproxy-ovnmeta-2c51a04b-2353-4ec7-9aa3-a143234fb3c5, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb  2 05:04:53 np0005604790 neutron-haproxy-ovnmeta-2c51a04b-2353-4ec7-9aa3-a143234fb3c5[260674]: [NOTICE]   (260678) : New worker (260680) forked
Feb  2 05:04:53 np0005604790 neutron-haproxy-ovnmeta-2c51a04b-2353-4ec7-9aa3-a143234fb3c5[260674]: [NOTICE]   (260678) : Loading success.
Feb  2 05:04:53 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:04:53 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2408001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:04:53 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:04:53 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2428002950 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:04:53 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:04:53 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:04:53 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:04:53.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:04:54 np0005604790 nova_compute[252672]: 2026-02-02 10:04:54.147 252676 DEBUG nova.network.neutron [req-f61f81bb-4e83-444a-92b2-604d3f03cdb8 req-37cfd3fe-5f66-4a17-8793-0c46bfbc609e b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] Updated VIF entry in instance network info cache for port 9a348207-ae0a-4c8e-b379-80035923d778. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 05:04:54 np0005604790 nova_compute[252672]: 2026-02-02 10:04:54.148 252676 DEBUG nova.network.neutron [req-f61f81bb-4e83-444a-92b2-604d3f03cdb8 req-37cfd3fe-5f66-4a17-8793-0c46bfbc609e b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] Updating instance_info_cache with network_info: [{"id": "e8aea164-d544-4241-b141-038f3e866bd3", "address": "fa:16:3e:fc:13:4a", "network": {"id": "43244da2-ad24-493a-be04-b3f920faba77", "bridge": "br-int", "label": "tempest-network-smoke--1590532484", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "efbfe697ca674d72b47da5adf3e42c0c", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8aea164-d5", "ovs_interfaceid": "e8aea164-d544-4241-b141-038f3e866bd3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9a348207-ae0a-4c8e-b379-80035923d778", "address": "fa:16:3e:d8:65:2e", "network": {"id": "2c51a04b-2353-4ec7-9aa3-a143234fb3c5", "bridge": "br-int", "label": "tempest-network-smoke--1225045414", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "efbfe697ca674d72b47da5adf3e42c0c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a348207-ae", "ovs_interfaceid": "9a348207-ae0a-4c8e-b379-80035923d778", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 05:04:54 np0005604790 nova_compute[252672]: 2026-02-02 10:04:54.173 252676 DEBUG oslo_concurrency.lockutils [req-f61f81bb-4e83-444a-92b2-604d3f03cdb8 req-37cfd3fe-5f66-4a17-8793-0c46bfbc609e b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Releasing lock "refresh_cache-3aba266a-af9d-4454-937a-ca3d562d7140" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 05:04:54 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:04:54 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:04:54 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:04:54.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:04:54 np0005604790 nova_compute[252672]: 2026-02-02 10:04:54.534 252676 DEBUG oslo_concurrency.lockutils [None req-d36c38cb-6888-4343-91b0-8a11029a61ac 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Acquiring lock "interface-3aba266a-af9d-4454-937a-ca3d562d7140-9a348207-ae0a-4c8e-b379-80035923d778" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 05:04:54 np0005604790 nova_compute[252672]: 2026-02-02 10:04:54.535 252676 DEBUG oslo_concurrency.lockutils [None req-d36c38cb-6888-4343-91b0-8a11029a61ac 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Lock "interface-3aba266a-af9d-4454-937a-ca3d562d7140-9a348207-ae0a-4c8e-b379-80035923d778" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 05:04:54 np0005604790 nova_compute[252672]: 2026-02-02 10:04:54.560 252676 DEBUG nova.objects.instance [None req-d36c38cb-6888-4343-91b0-8a11029a61ac 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Lazy-loading 'flavor' on Instance uuid 3aba266a-af9d-4454-937a-ca3d562d7140 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 05:04:54 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v765: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 16 KiB/s wr, 1 op/s
Feb  2 05:04:54 np0005604790 nova_compute[252672]: 2026-02-02 10:04:54.587 252676 DEBUG nova.virt.libvirt.vif [None req-d36c38cb-6888-4343-91b0-8a11029a61ac 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T10:04:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1261930807',display_name='tempest-TestNetworkBasicOps-server-1261930807',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1261930807',id=3,image_ref='d5e062d7-95ef-409c-9ad0-60f7cf6f44ce',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB27e0nJHIK58Z3ZCFxu9LfpabnYsIVBFEzTEWF3c/gr064x+O+jWsENBE8Yz1U86qtq3lzG/toFN1TQQpYsp7FBgfLCmIDAeD2/jIiciHozTuGuu580hwCQmvhv9zCqYg==',key_name='tempest-TestNetworkBasicOps-1916874117',keypairs=<?>,launch_index=0,launched_at=2026-02-02T10:04:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='efbfe697ca674d72b47da5adf3e42c0c',ramdisk_id='',reservation_id='r-b5tafoys',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='d5e062d7-95ef-409c-9ad0-60f7cf6f44ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-793549693',owner_user_name='tempest-TestNetworkBasicOps-793549693-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T10:04:21Z,user_data=None,user_id='1b1695a2a70d4aa0aa350ba17d8f6d5e',uuid=3aba266a-af9d-4454-937a-ca3d562d7140,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9a348207-ae0a-4c8e-b379-80035923d778", "address": "fa:16:3e:d8:65:2e", "network": {"id": "2c51a04b-2353-4ec7-9aa3-a143234fb3c5", "bridge": "br-int", "label": "tempest-network-smoke--1225045414", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "efbfe697ca674d72b47da5adf3e42c0c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a348207-ae", "ovs_interfaceid": "9a348207-ae0a-4c8e-b379-80035923d778", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb  2 05:04:54 np0005604790 nova_compute[252672]: 2026-02-02 10:04:54.589 252676 DEBUG nova.network.os_vif_util [None req-d36c38cb-6888-4343-91b0-8a11029a61ac 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Converting VIF {"id": "9a348207-ae0a-4c8e-b379-80035923d778", "address": "fa:16:3e:d8:65:2e", "network": {"id": "2c51a04b-2353-4ec7-9aa3-a143234fb3c5", "bridge": "br-int", "label": "tempest-network-smoke--1225045414", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "efbfe697ca674d72b47da5adf3e42c0c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a348207-ae", "ovs_interfaceid": "9a348207-ae0a-4c8e-b379-80035923d778", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 05:04:54 np0005604790 nova_compute[252672]: 2026-02-02 10:04:54.590 252676 DEBUG nova.network.os_vif_util [None req-d36c38cb-6888-4343-91b0-8a11029a61ac 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d8:65:2e,bridge_name='br-int',has_traffic_filtering=True,id=9a348207-ae0a-4c8e-b379-80035923d778,network=Network(2c51a04b-2353-4ec7-9aa3-a143234fb3c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9a348207-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 05:04:54 np0005604790 nova_compute[252672]: 2026-02-02 10:04:54.596 252676 DEBUG nova.virt.libvirt.guest [None req-d36c38cb-6888-4343-91b0-8a11029a61ac 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:d8:65:2e"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9a348207-ae"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Feb  2 05:04:54 np0005604790 nova_compute[252672]: 2026-02-02 10:04:54.601 252676 DEBUG nova.virt.libvirt.guest [None req-d36c38cb-6888-4343-91b0-8a11029a61ac 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:d8:65:2e"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9a348207-ae"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Feb  2 05:04:54 np0005604790 nova_compute[252672]: 2026-02-02 10:04:54.605 252676 DEBUG nova.virt.libvirt.driver [None req-d36c38cb-6888-4343-91b0-8a11029a61ac 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Attempting to detach device tap9a348207-ae from instance 3aba266a-af9d-4454-937a-ca3d562d7140 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Feb  2 05:04:54 np0005604790 nova_compute[252672]: 2026-02-02 10:04:54.606 252676 DEBUG nova.virt.libvirt.guest [None req-d36c38cb-6888-4343-91b0-8a11029a61ac 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] detach device xml: <interface type="ethernet">
Feb  2 05:04:54 np0005604790 nova_compute[252672]:  <mac address="fa:16:3e:d8:65:2e"/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:  <model type="virtio"/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:  <driver name="vhost" rx_queue_size="512"/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:  <mtu size="1442"/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:  <target dev="tap9a348207-ae"/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]: </interface>
Feb  2 05:04:54 np0005604790 nova_compute[252672]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Feb  2 05:04:54 np0005604790 nova_compute[252672]: 2026-02-02 10:04:54.615 252676 DEBUG nova.virt.libvirt.guest [None req-d36c38cb-6888-4343-91b0-8a11029a61ac 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:d8:65:2e"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9a348207-ae"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Feb  2 05:04:54 np0005604790 nova_compute[252672]: 2026-02-02 10:04:54.621 252676 DEBUG nova.virt.libvirt.guest [None req-d36c38cb-6888-4343-91b0-8a11029a61ac 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:d8:65:2e"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9a348207-ae"/></interface>not found in domain: <domain type='kvm' id='2'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:  <name>instance-00000003</name>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:  <uuid>3aba266a-af9d-4454-937a-ca3d562d7140</uuid>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:  <metadata>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb  2 05:04:54 np0005604790 nova_compute[252672]:  <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:  <nova:name>tempest-TestNetworkBasicOps-server-1261930807</nova:name>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:  <nova:creationTime>2026-02-02 10:04:52</nova:creationTime>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:  <nova:flavor name="m1.nano">
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <nova:memory>128</nova:memory>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <nova:disk>1</nova:disk>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <nova:swap>0</nova:swap>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <nova:ephemeral>0</nova:ephemeral>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <nova:vcpus>1</nova:vcpus>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:  </nova:flavor>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:  <nova:owner>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <nova:user uuid="1b1695a2a70d4aa0aa350ba17d8f6d5e">tempest-TestNetworkBasicOps-793549693-project-member</nova:user>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <nova:project uuid="efbfe697ca674d72b47da5adf3e42c0c">tempest-TestNetworkBasicOps-793549693</nova:project>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:  </nova:owner>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:  <nova:root type="image" uuid="d5e062d7-95ef-409c-9ad0-60f7cf6f44ce"/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:  <nova:ports>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <nova:port uuid="e8aea164-d544-4241-b141-038f3e866bd3">
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    </nova:port>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <nova:port uuid="9a348207-ae0a-4c8e-b379-80035923d778">
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <nova:ip type="fixed" address="10.100.0.23" ipVersion="4"/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    </nova:port>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:  </nova:ports>
Feb  2 05:04:54 np0005604790 nova_compute[252672]: </nova:instance>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:  </metadata>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:  <memory unit='KiB'>131072</memory>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:  <currentMemory unit='KiB'>131072</currentMemory>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:  <vcpu placement='static'>1</vcpu>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:  <resource>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <partition>/machine</partition>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:  </resource>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:  <sysinfo type='smbios'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <system>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <entry name='manufacturer'>RDO</entry>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <entry name='product'>OpenStack Compute</entry>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <entry name='version'>27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <entry name='serial'>3aba266a-af9d-4454-937a-ca3d562d7140</entry>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <entry name='uuid'>3aba266a-af9d-4454-937a-ca3d562d7140</entry>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <entry name='family'>Virtual Machine</entry>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    </system>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:  </sysinfo>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:  <os>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <boot dev='hd'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <smbios mode='sysinfo'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:  </os>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:  <features>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <acpi/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <apic/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <vmcoreinfo state='on'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:  </features>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:  <cpu mode='custom' match='exact' check='full'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <model fallback='forbid'>EPYC-Rome</model>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <vendor>AMD</vendor>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <feature policy='require' name='x2apic'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <feature policy='require' name='tsc-deadline'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <feature policy='require' name='hypervisor'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <feature policy='require' name='tsc_adjust'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <feature policy='require' name='spec-ctrl'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <feature policy='require' name='stibp'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <feature policy='require' name='ssbd'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <feature policy='require' name='cmp_legacy'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <feature policy='require' name='overflow-recov'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <feature policy='require' name='succor'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <feature policy='require' name='ibrs'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <feature policy='require' name='amd-ssbd'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <feature policy='require' name='virt-ssbd'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <feature policy='disable' name='lbrv'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <feature policy='disable' name='tsc-scale'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <feature policy='disable' name='vmcb-clean'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <feature policy='disable' name='flushbyasid'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <feature policy='disable' name='pause-filter'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <feature policy='disable' name='pfthreshold'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <feature policy='disable' name='svme-addr-chk'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <feature policy='require' name='lfence-always-serializing'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <feature policy='disable' name='xsaves'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <feature policy='disable' name='svm'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <feature policy='require' name='topoext'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <feature policy='disable' name='npt'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <feature policy='disable' name='nrip-save'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:  </cpu>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:  <clock offset='utc'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <timer name='pit' tickpolicy='delay'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <timer name='rtc' tickpolicy='catchup'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <timer name='hpet' present='no'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:  </clock>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:  <on_poweroff>destroy</on_poweroff>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:  <on_reboot>restart</on_reboot>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:  <on_crash>destroy</on_crash>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:  <devices>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <disk type='network' device='disk'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <driver name='qemu' type='raw' cache='none'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <auth username='openstack'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:        <secret type='ceph' uuid='d241d473-9fcb-5f74-b163-f1ca4454e7f1'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      </auth>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <source protocol='rbd' name='vms/3aba266a-af9d-4454-937a-ca3d562d7140_disk' index='2'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:        <host name='192.168.122.100' port='6789'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:        <host name='192.168.122.102' port='6789'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:        <host name='192.168.122.101' port='6789'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      </source>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <target dev='vda' bus='virtio'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <alias name='virtio-disk0'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    </disk>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <disk type='network' device='cdrom'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <driver name='qemu' type='raw' cache='none'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <auth username='openstack'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:        <secret type='ceph' uuid='d241d473-9fcb-5f74-b163-f1ca4454e7f1'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      </auth>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <source protocol='rbd' name='vms/3aba266a-af9d-4454-937a-ca3d562d7140_disk.config' index='1'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:        <host name='192.168.122.100' port='6789'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:        <host name='192.168.122.102' port='6789'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:        <host name='192.168.122.101' port='6789'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      </source>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <target dev='sda' bus='sata'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <readonly/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <alias name='sata0-0-0'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    </disk>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <controller type='pci' index='0' model='pcie-root'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <alias name='pcie.0'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    </controller>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <controller type='pci' index='1' model='pcie-root-port'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <model name='pcie-root-port'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <target chassis='1' port='0x10'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <alias name='pci.1'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    </controller>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <controller type='pci' index='2' model='pcie-root-port'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <model name='pcie-root-port'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <target chassis='2' port='0x11'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <alias name='pci.2'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    </controller>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <controller type='pci' index='3' model='pcie-root-port'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <model name='pcie-root-port'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <target chassis='3' port='0x12'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <alias name='pci.3'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    </controller>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <controller type='pci' index='4' model='pcie-root-port'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <model name='pcie-root-port'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <target chassis='4' port='0x13'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <alias name='pci.4'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    </controller>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <controller type='pci' index='5' model='pcie-root-port'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <model name='pcie-root-port'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <target chassis='5' port='0x14'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <alias name='pci.5'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    </controller>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <controller type='pci' index='6' model='pcie-root-port'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <model name='pcie-root-port'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <target chassis='6' port='0x15'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <alias name='pci.6'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    </controller>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <controller type='pci' index='7' model='pcie-root-port'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <model name='pcie-root-port'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <target chassis='7' port='0x16'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <alias name='pci.7'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    </controller>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <controller type='pci' index='8' model='pcie-root-port'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <model name='pcie-root-port'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <target chassis='8' port='0x17'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <alias name='pci.8'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    </controller>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <controller type='pci' index='9' model='pcie-root-port'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <model name='pcie-root-port'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <target chassis='9' port='0x18'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <alias name='pci.9'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    </controller>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <controller type='pci' index='10' model='pcie-root-port'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <model name='pcie-root-port'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <target chassis='10' port='0x19'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <alias name='pci.10'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    </controller>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <controller type='pci' index='11' model='pcie-root-port'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <model name='pcie-root-port'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <target chassis='11' port='0x1a'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <alias name='pci.11'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    </controller>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <controller type='pci' index='12' model='pcie-root-port'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <model name='pcie-root-port'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <target chassis='12' port='0x1b'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <alias name='pci.12'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    </controller>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <controller type='pci' index='13' model='pcie-root-port'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <model name='pcie-root-port'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <target chassis='13' port='0x1c'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <alias name='pci.13'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    </controller>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <controller type='pci' index='14' model='pcie-root-port'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <model name='pcie-root-port'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <target chassis='14' port='0x1d'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <alias name='pci.14'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    </controller>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <controller type='pci' index='15' model='pcie-root-port'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <model name='pcie-root-port'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <target chassis='15' port='0x1e'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <alias name='pci.15'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    </controller>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <controller type='pci' index='16' model='pcie-root-port'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <model name='pcie-root-port'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <target chassis='16' port='0x1f'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <alias name='pci.16'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    </controller>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <controller type='pci' index='17' model='pcie-root-port'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <model name='pcie-root-port'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <target chassis='17' port='0x20'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <alias name='pci.17'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    </controller>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <controller type='pci' index='18' model='pcie-root-port'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <model name='pcie-root-port'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <target chassis='18' port='0x21'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <alias name='pci.18'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    </controller>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <controller type='pci' index='19' model='pcie-root-port'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <model name='pcie-root-port'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <target chassis='19' port='0x22'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <alias name='pci.19'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    </controller>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <controller type='pci' index='20' model='pcie-root-port'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <model name='pcie-root-port'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <target chassis='20' port='0x23'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <alias name='pci.20'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    </controller>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <controller type='pci' index='21' model='pcie-root-port'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <model name='pcie-root-port'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <target chassis='21' port='0x24'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <alias name='pci.21'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    </controller>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <controller type='pci' index='22' model='pcie-root-port'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <model name='pcie-root-port'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <target chassis='22' port='0x25'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <alias name='pci.22'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    </controller>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <controller type='pci' index='23' model='pcie-root-port'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <model name='pcie-root-port'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <target chassis='23' port='0x26'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <alias name='pci.23'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    </controller>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <controller type='pci' index='24' model='pcie-root-port'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <model name='pcie-root-port'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <target chassis='24' port='0x27'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <alias name='pci.24'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    </controller>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <controller type='pci' index='25' model='pcie-root-port'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <model name='pcie-root-port'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <target chassis='25' port='0x28'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <alias name='pci.25'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    </controller>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <model name='pcie-pci-bridge'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <alias name='pci.26'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    </controller>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <controller type='usb' index='0' model='piix3-uhci'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <alias name='usb'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    </controller>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <controller type='sata' index='0'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <alias name='ide'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    </controller>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <interface type='ethernet'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <mac address='fa:16:3e:fc:13:4a'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <target dev='tape8aea164-d5'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <model type='virtio'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <driver name='vhost' rx_queue_size='512'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <mtu size='1442'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <alias name='net0'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    </interface>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <interface type='ethernet'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <mac address='fa:16:3e:d8:65:2e'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <target dev='tap9a348207-ae'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <model type='virtio'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <driver name='vhost' rx_queue_size='512'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <mtu size='1442'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <alias name='net1'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    </interface>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <serial type='pty'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <source path='/dev/pts/0'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <log file='/var/lib/nova/instances/3aba266a-af9d-4454-937a-ca3d562d7140/console.log' append='off'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <target type='isa-serial' port='0'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:        <model name='isa-serial'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      </target>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <alias name='serial0'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    </serial>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <console type='pty' tty='/dev/pts/0'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <source path='/dev/pts/0'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <log file='/var/lib/nova/instances/3aba266a-af9d-4454-937a-ca3d562d7140/console.log' append='off'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <target type='serial' port='0'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <alias name='serial0'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    </console>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <input type='tablet' bus='usb'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <alias name='input0'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <address type='usb' bus='0' port='1'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    </input>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <input type='mouse' bus='ps2'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <alias name='input1'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    </input>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <input type='keyboard' bus='ps2'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <alias name='input2'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    </input>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <listen type='address' address='::0'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    </graphics>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <audio id='1' type='none'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <video>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <model type='virtio' heads='1' primary='yes'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <alias name='video0'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    </video>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <watchdog model='itco' action='reset'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <alias name='watchdog0'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    </watchdog>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <memballoon model='virtio'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <stats period='10'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <alias name='balloon0'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    </memballoon>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <rng model='virtio'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <backend model='random'>/dev/urandom</backend>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <alias name='rng0'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    </rng>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:  </devices>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <label>system_u:system_r:svirt_t:s0:c637,c743</label>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c637,c743</imagelabel>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:  </seclabel>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <label>+107:+107</label>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <imagelabel>+107:+107</imagelabel>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:  </seclabel>
Feb  2 05:04:54 np0005604790 nova_compute[252672]: </domain>
Feb  2 05:04:54 np0005604790 nova_compute[252672]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
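
[annotation] The dump above is what nova.virt.libvirt.guest.get_interface_by_cfg logs while matching a candidate <interface> element against the running domain's XML. A minimal sketch of the same lookup with libvirt-python and ElementTree, matching on MAC address and tap device name (the connection URI and the helper name are illustrative, not nova's actual code):

    # Sketch only: find an <interface> in a libvirt domain by MAC and tap device,
    # approximating what get_interface_by_cfg does in the lines above.
    import libvirt
    import xml.etree.ElementTree as ET

    def find_interface(dom, mac, tap):
        root = ET.fromstring(dom.XMLDesc(0))
        for iface in root.findall('./devices/interface'):
            m, t = iface.find('mac'), iface.find('target')
            if (m is not None and m.get('address') == mac and
                    t is not None and t.get('dev') == tap):
                return ET.tostring(iface, encoding='unicode')
        return None  # the "...not found in domain" case seen later in this log

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByUUIDString('3aba266a-af9d-4454-937a-ca3d562d7140')
    print(find_interface(dom, 'fa:16:3e:d8:65:2e', 'tap9a348207-ae'))
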
Feb  2 05:04:54 np0005604790 nova_compute[252672]: 2026-02-02 10:04:54.622 252676 INFO nova.virt.libvirt.driver [None req-d36c38cb-6888-4343-91b0-8a11029a61ac 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Successfully detached device tap9a348207-ae from instance 3aba266a-af9d-4454-937a-ca3d562d7140 from the persistent domain config.#033[00m
Feb  2 05:04:54 np0005604790 nova_compute[252672]: 2026-02-02 10:04:54.622 252676 DEBUG nova.virt.libvirt.driver [None req-d36c38cb-6888-4343-91b0-8a11029a61ac 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] (1/8): Attempting to detach device tap9a348207-ae with device alias net1 from instance 3aba266a-af9d-4454-937a-ca3d562d7140 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Feb  2 05:04:54 np0005604790 nova_compute[252672]: 2026-02-02 10:04:54.623 252676 DEBUG nova.virt.libvirt.guest [None req-d36c38cb-6888-4343-91b0-8a11029a61ac 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] detach device xml: <interface type="ethernet">
Feb  2 05:04:54 np0005604790 nova_compute[252672]:  <mac address="fa:16:3e:d8:65:2e"/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:  <model type="virtio"/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:  <driver name="vhost" rx_queue_size="512"/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:  <mtu size="1442"/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:  <target dev="tap9a348207-ae"/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]: </interface>
Feb  2 05:04:54 np0005604790 nova_compute[252672]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
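
[annotation] The detach XML above is deliberately trimmed to the identifying elements (mac, model, driver, mtu, target). A minimal sketch, assuming qemu:///system and the domain name from this log, of the two-phase detach the driver performs: first the persistent config, then the live domain — nova retries the live phase up to eight times, which is the "(1/8)" counter logged above:

    # Sketch: two-phase detach via libvirt-python. AFFECT_CONFIG drops the
    # device from the persistent definition; AFFECT_LIVE hot-unplugs it.
    import libvirt

    IFACE_XML = """<interface type="ethernet">
      <mac address="fa:16:3e:d8:65:2e"/>
      <target dev="tap9a348207-ae"/>
    </interface>"""

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-00000003')
    dom.detachDeviceFlags(IFACE_XML, libvirt.VIR_DOMAIN_AFFECT_CONFIG)
    dom.detachDeviceFlags(IFACE_XML, libvirt.VIR_DOMAIN_AFFECT_LIVE)
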
Feb  2 05:04:54 np0005604790 kernel: tap9a348207-ae (unregistering): left promiscuous mode
Feb  2 05:04:54 np0005604790 NetworkManager[49024]: <info>  [1770026694.6692] device (tap9a348207-ae): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb  2 05:04:54 np0005604790 nova_compute[252672]: 2026-02-02 10:04:54.679 252676 DEBUG nova.virt.libvirt.driver [None req-e8d4f08b-73c0-49f9-aab1-c10f2abef40e - - - - - -] Received event <DeviceRemovedEvent: 1770026694.679031, 3aba266a-af9d-4454-937a-ca3d562d7140 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Feb  2 05:04:54 np0005604790 nova_compute[252672]: 2026-02-02 10:04:54.681 252676 DEBUG nova.virt.libvirt.driver [None req-d36c38cb-6888-4343-91b0-8a11029a61ac 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Start waiting for the detach event from libvirt for device tap9a348207-ae with device alias net1 for instance 3aba266a-af9d-4454-937a-ca3d562d7140 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
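
[annotation] Rather than polling, the driver waits for libvirt's device-removed event (the DeviceRemovedEvent dispatched two lines up). A self-contained sketch of such a wait; a real service runs the libvirt event loop on a dedicated thread exactly as below:

    # Sketch: block until libvirt reports the device actually gone, mirroring
    # the "Start waiting for the detach event" step above.
    import threading
    import libvirt

    libvirt.virEventRegisterDefaultImpl()    # must precede libvirt.open()
    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-00000003')
    removed = threading.Event()

    def on_removed(conn, dom, dev_alias, opaque):
        if dev_alias == 'net1':              # device alias from the log
            removed.set()

    conn.domainEventRegisterAny(dom,
                                libvirt.VIR_DOMAIN_EVENT_ID_DEVICE_REMOVED,
                                on_removed, None)

    def event_loop():                        # dispatches registered callbacks
        while True:
            libvirt.virEventRunDefaultImpl()

    threading.Thread(target=event_loop, daemon=True).start()
    removed.wait(timeout=30)
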
Feb  2 05:04:54 np0005604790 nova_compute[252672]: 2026-02-02 10:04:54.682 252676 DEBUG nova.virt.libvirt.guest [None req-d36c38cb-6888-4343-91b0-8a11029a61ac 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:d8:65:2e"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9a348207-ae"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Feb  2 05:04:54 np0005604790 nova_compute[252672]: 2026-02-02 10:04:54.687 252676 DEBUG nova.virt.libvirt.guest [None req-d36c38cb-6888-4343-91b0-8a11029a61ac 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:d8:65:2e"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9a348207-ae"/></interface>not found in domain: <domain type='kvm' id='2'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:  <name>instance-00000003</name>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:  <uuid>3aba266a-af9d-4454-937a-ca3d562d7140</uuid>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:  <metadata>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb  2 05:04:54 np0005604790 nova_compute[252672]:  <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:  <nova:name>tempest-TestNetworkBasicOps-server-1261930807</nova:name>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:  <nova:creationTime>2026-02-02 10:04:52</nova:creationTime>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:  <nova:flavor name="m1.nano">
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <nova:memory>128</nova:memory>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <nova:disk>1</nova:disk>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <nova:swap>0</nova:swap>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <nova:ephemeral>0</nova:ephemeral>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <nova:vcpus>1</nova:vcpus>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:  </nova:flavor>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:  <nova:owner>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <nova:user uuid="1b1695a2a70d4aa0aa350ba17d8f6d5e">tempest-TestNetworkBasicOps-793549693-project-member</nova:user>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <nova:project uuid="efbfe697ca674d72b47da5adf3e42c0c">tempest-TestNetworkBasicOps-793549693</nova:project>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:  </nova:owner>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:  <nova:root type="image" uuid="d5e062d7-95ef-409c-9ad0-60f7cf6f44ce"/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:  <nova:ports>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <nova:port uuid="e8aea164-d544-4241-b141-038f3e866bd3">
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    </nova:port>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <nova:port uuid="9a348207-ae0a-4c8e-b379-80035923d778">
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <nova:ip type="fixed" address="10.100.0.23" ipVersion="4"/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    </nova:port>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:  </nova:ports>
Feb  2 05:04:54 np0005604790 nova_compute[252672]: </nova:instance>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:  </metadata>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:  <memory unit='KiB'>131072</memory>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:  <currentMemory unit='KiB'>131072</currentMemory>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:  <vcpu placement='static'>1</vcpu>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:  <resource>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <partition>/machine</partition>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:  </resource>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:  <sysinfo type='smbios'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <system>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <entry name='manufacturer'>RDO</entry>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <entry name='product'>OpenStack Compute</entry>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <entry name='version'>27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <entry name='serial'>3aba266a-af9d-4454-937a-ca3d562d7140</entry>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <entry name='uuid'>3aba266a-af9d-4454-937a-ca3d562d7140</entry>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <entry name='family'>Virtual Machine</entry>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    </system>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:  </sysinfo>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:  <os>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <boot dev='hd'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <smbios mode='sysinfo'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:  </os>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:  <features>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <acpi/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <apic/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <vmcoreinfo state='on'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:  </features>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:  <cpu mode='custom' match='exact' check='full'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <model fallback='forbid'>EPYC-Rome</model>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <vendor>AMD</vendor>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <feature policy='require' name='x2apic'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <feature policy='require' name='tsc-deadline'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <feature policy='require' name='hypervisor'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <feature policy='require' name='tsc_adjust'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <feature policy='require' name='spec-ctrl'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <feature policy='require' name='stibp'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <feature policy='require' name='ssbd'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <feature policy='require' name='cmp_legacy'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <feature policy='require' name='overflow-recov'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <feature policy='require' name='succor'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <feature policy='require' name='ibrs'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <feature policy='require' name='amd-ssbd'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <feature policy='require' name='virt-ssbd'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <feature policy='disable' name='lbrv'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <feature policy='disable' name='tsc-scale'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <feature policy='disable' name='vmcb-clean'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <feature policy='disable' name='flushbyasid'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <feature policy='disable' name='pause-filter'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <feature policy='disable' name='pfthreshold'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <feature policy='disable' name='svme-addr-chk'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <feature policy='require' name='lfence-always-serializing'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <feature policy='disable' name='xsaves'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <feature policy='disable' name='svm'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <feature policy='require' name='topoext'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <feature policy='disable' name='npt'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <feature policy='disable' name='nrip-save'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:  </cpu>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:  <clock offset='utc'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <timer name='pit' tickpolicy='delay'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <timer name='rtc' tickpolicy='catchup'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <timer name='hpet' present='no'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:  </clock>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:  <on_poweroff>destroy</on_poweroff>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:  <on_reboot>restart</on_reboot>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:  <on_crash>destroy</on_crash>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:  <devices>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <disk type='network' device='disk'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <driver name='qemu' type='raw' cache='none'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <auth username='openstack'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:        <secret type='ceph' uuid='d241d473-9fcb-5f74-b163-f1ca4454e7f1'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      </auth>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <source protocol='rbd' name='vms/3aba266a-af9d-4454-937a-ca3d562d7140_disk' index='2'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:        <host name='192.168.122.100' port='6789'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:        <host name='192.168.122.102' port='6789'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:        <host name='192.168.122.101' port='6789'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      </source>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <target dev='vda' bus='virtio'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <alias name='virtio-disk0'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    </disk>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <disk type='network' device='cdrom'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <driver name='qemu' type='raw' cache='none'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <auth username='openstack'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:        <secret type='ceph' uuid='d241d473-9fcb-5f74-b163-f1ca4454e7f1'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      </auth>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <source protocol='rbd' name='vms/3aba266a-af9d-4454-937a-ca3d562d7140_disk.config' index='1'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:        <host name='192.168.122.100' port='6789'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:        <host name='192.168.122.102' port='6789'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:        <host name='192.168.122.101' port='6789'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      </source>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <target dev='sda' bus='sata'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <readonly/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <alias name='sata0-0-0'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    </disk>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <controller type='pci' index='0' model='pcie-root'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <alias name='pcie.0'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    </controller>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <controller type='pci' index='1' model='pcie-root-port'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <model name='pcie-root-port'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <target chassis='1' port='0x10'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <alias name='pci.1'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    </controller>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <controller type='pci' index='2' model='pcie-root-port'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <model name='pcie-root-port'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <target chassis='2' port='0x11'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <alias name='pci.2'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    </controller>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <controller type='pci' index='3' model='pcie-root-port'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <model name='pcie-root-port'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <target chassis='3' port='0x12'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <alias name='pci.3'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    </controller>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <controller type='pci' index='4' model='pcie-root-port'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <model name='pcie-root-port'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <target chassis='4' port='0x13'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <alias name='pci.4'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    </controller>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <controller type='pci' index='5' model='pcie-root-port'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <model name='pcie-root-port'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <target chassis='5' port='0x14'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <alias name='pci.5'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    </controller>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <controller type='pci' index='6' model='pcie-root-port'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <model name='pcie-root-port'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <target chassis='6' port='0x15'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <alias name='pci.6'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    </controller>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <controller type='pci' index='7' model='pcie-root-port'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <model name='pcie-root-port'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <target chassis='7' port='0x16'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <alias name='pci.7'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    </controller>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <controller type='pci' index='8' model='pcie-root-port'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <model name='pcie-root-port'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <target chassis='8' port='0x17'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <alias name='pci.8'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    </controller>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <controller type='pci' index='9' model='pcie-root-port'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <model name='pcie-root-port'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <target chassis='9' port='0x18'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <alias name='pci.9'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    </controller>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <controller type='pci' index='10' model='pcie-root-port'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <model name='pcie-root-port'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <target chassis='10' port='0x19'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <alias name='pci.10'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    </controller>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <controller type='pci' index='11' model='pcie-root-port'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <model name='pcie-root-port'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <target chassis='11' port='0x1a'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <alias name='pci.11'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    </controller>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <controller type='pci' index='12' model='pcie-root-port'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <model name='pcie-root-port'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <target chassis='12' port='0x1b'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <alias name='pci.12'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    </controller>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <controller type='pci' index='13' model='pcie-root-port'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <model name='pcie-root-port'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <target chassis='13' port='0x1c'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <alias name='pci.13'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    </controller>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <controller type='pci' index='14' model='pcie-root-port'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <model name='pcie-root-port'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <target chassis='14' port='0x1d'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <alias name='pci.14'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    </controller>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <controller type='pci' index='15' model='pcie-root-port'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <model name='pcie-root-port'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <target chassis='15' port='0x1e'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <alias name='pci.15'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    </controller>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <controller type='pci' index='16' model='pcie-root-port'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <model name='pcie-root-port'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <target chassis='16' port='0x1f'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <alias name='pci.16'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    </controller>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <controller type='pci' index='17' model='pcie-root-port'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <model name='pcie-root-port'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <target chassis='17' port='0x20'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <alias name='pci.17'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    </controller>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <controller type='pci' index='18' model='pcie-root-port'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <model name='pcie-root-port'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <target chassis='18' port='0x21'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <alias name='pci.18'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    </controller>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <controller type='pci' index='19' model='pcie-root-port'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <model name='pcie-root-port'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <target chassis='19' port='0x22'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <alias name='pci.19'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    </controller>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <controller type='pci' index='20' model='pcie-root-port'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <model name='pcie-root-port'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <target chassis='20' port='0x23'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <alias name='pci.20'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    </controller>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <controller type='pci' index='21' model='pcie-root-port'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <model name='pcie-root-port'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <target chassis='21' port='0x24'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <alias name='pci.21'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    </controller>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <controller type='pci' index='22' model='pcie-root-port'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <model name='pcie-root-port'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <target chassis='22' port='0x25'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <alias name='pci.22'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    </controller>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <controller type='pci' index='23' model='pcie-root-port'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <model name='pcie-root-port'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <target chassis='23' port='0x26'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <alias name='pci.23'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    </controller>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <controller type='pci' index='24' model='pcie-root-port'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <model name='pcie-root-port'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <target chassis='24' port='0x27'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <alias name='pci.24'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    </controller>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <controller type='pci' index='25' model='pcie-root-port'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <model name='pcie-root-port'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <target chassis='25' port='0x28'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <alias name='pci.25'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    </controller>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <model name='pcie-pci-bridge'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <alias name='pci.26'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    </controller>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <controller type='usb' index='0' model='piix3-uhci'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <alias name='usb'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    </controller>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <controller type='sata' index='0'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <alias name='ide'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    </controller>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <interface type='ethernet'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <mac address='fa:16:3e:fc:13:4a'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <target dev='tape8aea164-d5'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <model type='virtio'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <driver name='vhost' rx_queue_size='512'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <mtu size='1442'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <alias name='net0'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    </interface>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <serial type='pty'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <source path='/dev/pts/0'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <log file='/var/lib/nova/instances/3aba266a-af9d-4454-937a-ca3d562d7140/console.log' append='off'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <target type='isa-serial' port='0'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:        <model name='isa-serial'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      </target>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <alias name='serial0'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    </serial>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <console type='pty' tty='/dev/pts/0'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <source path='/dev/pts/0'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <log file='/var/lib/nova/instances/3aba266a-af9d-4454-937a-ca3d562d7140/console.log' append='off'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <target type='serial' port='0'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <alias name='serial0'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    </console>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <input type='tablet' bus='usb'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <alias name='input0'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <address type='usb' bus='0' port='1'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    </input>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <input type='mouse' bus='ps2'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <alias name='input1'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    </input>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <input type='keyboard' bus='ps2'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <alias name='input2'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    </input>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <listen type='address' address='::0'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    </graphics>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <audio id='1' type='none'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <video>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <model type='virtio' heads='1' primary='yes'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <alias name='video0'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    </video>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <watchdog model='itco' action='reset'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <alias name='watchdog0'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    </watchdog>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <memballoon model='virtio'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <stats period='10'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <alias name='balloon0'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    </memballoon>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <rng model='virtio'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <backend model='random'>/dev/urandom</backend>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <alias name='rng0'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    </rng>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:  </devices>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <label>system_u:system_r:svirt_t:s0:c637,c743</label>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c637,c743</imagelabel>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:  </seclabel>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <label>+107:+107</label>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <imagelabel>+107:+107</imagelabel>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:  </seclabel>
Feb  2 05:04:54 np0005604790 nova_compute[252672]: </domain>
Feb  2 05:04:54 np0005604790 nova_compute[252672]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Feb  2 05:04:54 np0005604790 nova_compute[252672]: 2026-02-02 10:04:54.687 252676 INFO nova.virt.libvirt.driver [None req-d36c38cb-6888-4343-91b0-8a11029a61ac 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Successfully detached device tap9a348207-ae from instance 3aba266a-af9d-4454-937a-ca3d562d7140 from the live domain config.#033[00m
Feb  2 05:04:54 np0005604790 nova_compute[252672]: 2026-02-02 10:04:54.688 252676 DEBUG nova.virt.libvirt.vif [None req-d36c38cb-6888-4343-91b0-8a11029a61ac 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T10:04:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1261930807',display_name='tempest-TestNetworkBasicOps-server-1261930807',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1261930807',id=3,image_ref='d5e062d7-95ef-409c-9ad0-60f7cf6f44ce',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB27e0nJHIK58Z3ZCFxu9LfpabnYsIVBFEzTEWF3c/gr064x+O+jWsENBE8Yz1U86qtq3lzG/toFN1TQQpYsp7FBgfLCmIDAeD2/jIiciHozTuGuu580hwCQmvhv9zCqYg==',key_name='tempest-TestNetworkBasicOps-1916874117',keypairs=<?>,launch_index=0,launched_at=2026-02-02T10:04:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='efbfe697ca674d72b47da5adf3e42c0c',ramdisk_id='',reservation_id='r-b5tafoys',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='d5e062d7-95ef-409c-9ad0-60f7cf6f44ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-793549693',owner_user_name='tempest-TestNetworkBasicOps-793549693-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T10:04:21Z,user_data=None,user_id='1b1695a2a70d4aa0aa350ba17d8f6d5e',uuid=3aba266a-af9d-4454-937a-ca3d562d7140,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9a348207-ae0a-4c8e-b379-80035923d778", "address": "fa:16:3e:d8:65:2e", "network": {"id": "2c51a04b-2353-4ec7-9aa3-a143234fb3c5", "bridge": "br-int", "label": "tempest-network-smoke--1225045414", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "efbfe697ca674d72b47da5adf3e42c0c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a348207-ae", "ovs_interfaceid": "9a348207-ae0a-4c8e-b379-80035923d778", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb  2 05:04:54 np0005604790 nova_compute[252672]: 2026-02-02 10:04:54.689 252676 DEBUG nova.network.os_vif_util [None req-d36c38cb-6888-4343-91b0-8a11029a61ac 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Converting VIF {"id": "9a348207-ae0a-4c8e-b379-80035923d778", "address": "fa:16:3e:d8:65:2e", "network": {"id": "2c51a04b-2353-4ec7-9aa3-a143234fb3c5", "bridge": "br-int", "label": "tempest-network-smoke--1225045414", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "efbfe697ca674d72b47da5adf3e42c0c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a348207-ae", "ovs_interfaceid": "9a348207-ae0a-4c8e-b379-80035923d778", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 05:04:54 np0005604790 nova_compute[252672]: 2026-02-02 10:04:54.690 252676 DEBUG nova.network.os_vif_util [None req-d36c38cb-6888-4343-91b0-8a11029a61ac 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d8:65:2e,bridge_name='br-int',has_traffic_filtering=True,id=9a348207-ae0a-4c8e-b379-80035923d778,network=Network(2c51a04b-2353-4ec7-9aa3-a143234fb3c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9a348207-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 05:04:54 np0005604790 nova_compute[252672]: 2026-02-02 10:04:54.690 252676 DEBUG os_vif [None req-d36c38cb-6888-4343-91b0-8a11029a61ac 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d8:65:2e,bridge_name='br-int',has_traffic_filtering=True,id=9a348207-ae0a-4c8e-b379-80035923d778,network=Network(2c51a04b-2353-4ec7-9aa3-a143234fb3c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9a348207-ae') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb  2 05:04:54 np0005604790 nova_compute[252672]: 2026-02-02 10:04:54.693 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:04:54 np0005604790 nova_compute[252672]: 2026-02-02 10:04:54.693 252676 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9a348207-ae, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
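
[annotation] The transaction logged above is ovsdbapp's DelPortCommand against the local Open_vSwitch database. A sketch of issuing the same command through ovsdbapp's public API; the database socket path is an assumption (the usual distro default):

    # Sketch: the same DelPortCommand through ovsdbapp's public API.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))
    # if_exists=True makes the delete a no-op when the port is already gone
    api.del_port('tap9a348207-ae', bridge='br-int',
                 if_exists=True).execute(check_error=True)
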
Feb  2 05:04:54 np0005604790 ovn_controller[154631]: 2026-02-02T10:04:54Z|00050|binding|INFO|Releasing lport 9a348207-ae0a-4c8e-b379-80035923d778 from this chassis (sb_readonly=0)
Feb  2 05:04:54 np0005604790 ovn_controller[154631]: 2026-02-02T10:04:54Z|00051|binding|INFO|Setting lport 9a348207-ae0a-4c8e-b379-80035923d778 down in Southbound
Feb  2 05:04:54 np0005604790 nova_compute[252672]: 2026-02-02 10:04:54.716 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:04:54 np0005604790 ovn_controller[154631]: 2026-02-02T10:04:54Z|00052|binding|INFO|Removing iface tap9a348207-ae ovn-installed in OVS
Feb  2 05:04:54 np0005604790 nova_compute[252672]: 2026-02-02 10:04:54.720 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 05:04:54 np0005604790 nova_compute[252672]: 2026-02-02 10:04:54.726 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:04:54 np0005604790 nova_compute[252672]: 2026-02-02 10:04:54.728 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:04:54 np0005604790 nova_compute[252672]: 2026-02-02 10:04:54.729 252676 INFO os_vif [None req-d36c38cb-6888-4343-91b0-8a11029a61ac 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d8:65:2e,bridge_name='br-int',has_traffic_filtering=True,id=9a348207-ae0a-4c8e-b379-80035923d778,network=Network(2c51a04b-2353-4ec7-9aa3-a143234fb3c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9a348207-ae')#033[00m
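
[annotation] The unplug reported above goes through the top-level os_vif API, with the VIF described by the VIFOpenVSwitch object in the log line. A sketch reconstructing that call from the logged field values (not nova's actual wiring):

    # Sketch: top-level os_vif unplug, field values copied from the log above.
    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()                      # loads the 'ovs' plugin
    my_vif = vif.VIFOpenVSwitch(
        id='9a348207-ae0a-4c8e-b379-80035923d778',
        address='fa:16:3e:d8:65:2e',
        vif_name='tap9a348207-ae',
        bridge_name='br-int',
        network=network.Network(id='2c51a04b-2353-4ec7-9aa3-a143234fb3c5'))
    inst = instance_info.InstanceInfo(
        uuid='3aba266a-af9d-4454-937a-ca3d562d7140',
        name='instance-00000003')
    os_vif.unplug(my_vif, inst)
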
Feb  2 05:04:54 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:04:54.729 165364 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d8:65:2e 10.100.0.23'], port_security=['fa:16:3e:d8:65:2e 10.100.0.23'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.23/28', 'neutron:device_id': '3aba266a-af9d-4454-937a-ca3d562d7140', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2c51a04b-2353-4ec7-9aa3-a143234fb3c5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'efbfe697ca674d72b47da5adf3e42c0c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '22473684-a0d2-4e4f-b1c5-3e6fdbc49578', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=198936d7-9859-45c5-96c4-3b0e54e64201, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa8c6e46640>], logical_port=9a348207-ae0a-4c8e-b379-80035923d778) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fa8c6e46640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
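
[annotation] The ovn_metadata_agent line above shows an ovsdbapp row event matching a Port_Binding update (up flipping from [True] to [False]). A minimal sketch of such an event class, simplified from what neutron actually registers; match_fn narrows the match beyond table and event type:

    # Sketch of an ovsdbapp row event like the one matched above; simplified
    # from neutron's actual PortBindingUpdatedEvent.
    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def match_fn(self, event, row, old):
            # only fire when the 'up' state actually changed
            return hasattr(old, 'up') and old.up != row.up

        def run(self, event, row, old):
            print('lport %s up=%s' % (row.logical_port, row.up))
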
Feb  2 05:04:54 np0005604790 nova_compute[252672]: 2026-02-02 10:04:54.730 252676 DEBUG nova.virt.libvirt.guest [None req-d36c38cb-6888-4343-91b0-8a11029a61ac 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb  2 05:04:54 np0005604790 nova_compute[252672]:  <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:  <nova:name>tempest-TestNetworkBasicOps-server-1261930807</nova:name>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:  <nova:creationTime>2026-02-02 10:04:54</nova:creationTime>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:  <nova:flavor name="m1.nano">
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <nova:memory>128</nova:memory>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <nova:disk>1</nova:disk>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <nova:swap>0</nova:swap>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <nova:ephemeral>0</nova:ephemeral>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <nova:vcpus>1</nova:vcpus>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:  </nova:flavor>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:  <nova:owner>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <nova:user uuid="1b1695a2a70d4aa0aa350ba17d8f6d5e">tempest-TestNetworkBasicOps-793549693-project-member</nova:user>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <nova:project uuid="efbfe697ca674d72b47da5adf3e42c0c">tempest-TestNetworkBasicOps-793549693</nova:project>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:  </nova:owner>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:  <nova:root type="image" uuid="d5e062d7-95ef-409c-9ad0-60f7cf6f44ce"/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:  <nova:ports>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    <nova:port uuid="e8aea164-d544-4241-b141-038f3e866bd3">
Feb  2 05:04:54 np0005604790 nova_compute[252672]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:    </nova:port>
Feb  2 05:04:54 np0005604790 nova_compute[252672]:  </nova:ports>
Feb  2 05:04:54 np0005604790 nova_compute[252672]: </nova:instance>
Feb  2 05:04:54 np0005604790 nova_compute[252672]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
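
Note: the XML block above is written into the libvirt domain by nova.virt.libvirt.guest.Guest.set_metadata, which wraps virDomainSetMetadata. A rough equivalent with the libvirt-python bindings (the metadata key and flag combination below are assumptions, not copied from nova):

    import libvirt

    NOVA_NS = 'http://openstack.org/xmlns/libvirt/nova/1.1'
    xml = ('<nova:instance xmlns:nova="%s">'
           '<nova:name>tempest-TestNetworkBasicOps-server-1261930807</nova:name>'
           '</nova:instance>' % NOVA_NS)

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-00000003')
    # Attach the element under the nova namespace, live and persistent
    dom.setMetadata(libvirt.VIR_DOMAIN_METADATA_ELEMENT, xml, 'instance',
                    NOVA_NS,
                    libvirt.VIR_DOMAIN_AFFECT_LIVE |
                    libvirt.VIR_DOMAIN_AFFECT_CONFIG)
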
Feb  2 05:04:54 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:04:54.731 165364 INFO neutron.agent.ovn.metadata.agent [-] Port 9a348207-ae0a-4c8e-b379-80035923d778 in datapath 2c51a04b-2353-4ec7-9aa3-a143234fb3c5 unbound from our chassis#033[00m
Feb  2 05:04:54 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:04:54.733 165364 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2c51a04b-2353-4ec7-9aa3-a143234fb3c5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb  2 05:04:54 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:04:54.734 257524 DEBUG oslo.privsep.daemon [-] privsep: reply[5593cc86-5585-458d-bb64-3d1db3e88e0d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:04:54 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:04:54.735 165364 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2c51a04b-2353-4ec7-9aa3-a143234fb3c5 namespace which is not needed anymore#033[00m
Feb  2 05:04:54 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:04:54 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f240c0023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:04:54 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:04:54] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Feb  2 05:04:54 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:04:54] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Feb  2 05:04:54 np0005604790 neutron-haproxy-ovnmeta-2c51a04b-2353-4ec7-9aa3-a143234fb3c5[260674]: [NOTICE]   (260678) : haproxy version is 2.8.14-c23fe91
Feb  2 05:04:54 np0005604790 neutron-haproxy-ovnmeta-2c51a04b-2353-4ec7-9aa3-a143234fb3c5[260674]: [NOTICE]   (260678) : path to executable is /usr/sbin/haproxy
Feb  2 05:04:54 np0005604790 neutron-haproxy-ovnmeta-2c51a04b-2353-4ec7-9aa3-a143234fb3c5[260674]: [WARNING]  (260678) : Exiting Master process...
Feb  2 05:04:54 np0005604790 neutron-haproxy-ovnmeta-2c51a04b-2353-4ec7-9aa3-a143234fb3c5[260674]: [ALERT]    (260678) : Current worker (260680) exited with code 143 (Terminated)
Feb  2 05:04:54 np0005604790 neutron-haproxy-ovnmeta-2c51a04b-2353-4ec7-9aa3-a143234fb3c5[260674]: [WARNING]  (260678) : All workers exited. Exiting... (0)
Feb  2 05:04:54 np0005604790 systemd[1]: libpod-292c344a5afdeec053cbbd635b0d4db98e232dfa1d632677069891335c4709fd.scope: Deactivated successfully.
Feb  2 05:04:54 np0005604790 podman[260713]: 2026-02-02 10:04:54.894717818 +0000 UTC m=+0.059427477 container died 292c344a5afdeec053cbbd635b0d4db98e232dfa1d632677069891335c4709fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc, name=neutron-haproxy-ovnmeta-2c51a04b-2353-4ec7-9aa3-a143234fb3c5, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb  2 05:04:54 np0005604790 nova_compute[252672]: 2026-02-02 10:04:54.911 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:04:54 np0005604790 systemd[1]: var-lib-containers-storage-overlay-39512e8e2a2eec41d0c8362a2f35a6321c62a1b5a919041e1bff41a622c56e17-merged.mount: Deactivated successfully.
Feb  2 05:04:54 np0005604790 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-292c344a5afdeec053cbbd635b0d4db98e232dfa1d632677069891335c4709fd-userdata-shm.mount: Deactivated successfully.
Feb  2 05:04:54 np0005604790 podman[260713]: 2026-02-02 10:04:54.949833828 +0000 UTC m=+0.114543477 container cleanup 292c344a5afdeec053cbbd635b0d4db98e232dfa1d632677069891335c4709fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc, name=neutron-haproxy-ovnmeta-2c51a04b-2353-4ec7-9aa3-a143234fb3c5, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb  2 05:04:54 np0005604790 systemd[1]: libpod-conmon-292c344a5afdeec053cbbd635b0d4db98e232dfa1d632677069891335c4709fd.scope: Deactivated successfully.
Feb  2 05:04:55 np0005604790 podman[260742]: 2026-02-02 10:04:55.01991581 +0000 UTC m=+0.049221403 container remove 292c344a5afdeec053cbbd635b0d4db98e232dfa1d632677069891335c4709fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc, name=neutron-haproxy-ovnmeta-2c51a04b-2353-4ec7-9aa3-a143234fb3c5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb  2 05:04:55 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:04:55.025 257524 DEBUG oslo.privsep.daemon [-] privsep: reply[a01ef9f6-f234-4282-af13-63223bc77a3f]: (4, ('Mon Feb  2 10:04:54 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-2c51a04b-2353-4ec7-9aa3-a143234fb3c5 (292c344a5afdeec053cbbd635b0d4db98e232dfa1d632677069891335c4709fd)\n292c344a5afdeec053cbbd635b0d4db98e232dfa1d632677069891335c4709fd\nMon Feb  2 10:04:54 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-2c51a04b-2353-4ec7-9aa3-a143234fb3c5 (292c344a5afdeec053cbbd635b0d4db98e232dfa1d632677069891335c4709fd)\n292c344a5afdeec053cbbd635b0d4db98e232dfa1d632677069891335c4709fd\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
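
Note: the privsep "reply[...]" tuples above are oslo.privsep round-trips: the unprivileged agent asks its privileged daemon to run the container stop/delete wrapper and gets the (stdout, stderr, returncode) result back over the channel. A sketch of how such an entrypoint is declared (context name, capabilities and the kill-script path are illustrative assumptions):

    from oslo_concurrency import processutils
    from oslo_privsep import capabilities, priv_context

    default = priv_context.PrivContext(
        'demo', cfg_section='privsep', pypath=__name__ + '.default',
        capabilities=[capabilities.CAP_SYS_ADMIN,
                      capabilities.CAP_NET_ADMIN])

    @default.entrypoint
    def kill_metadata_proxy(network_id, signal='15'):
        # Runs inside the privileged daemon; the output travels back over
        # the privsep channel, which is what the replies above carry.
        return processutils.execute(
            '/etc/neutron/kill_scripts/haproxy-kill', signal, network_id)
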
Feb  2 05:04:55 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:04:55.028 257524 DEBUG oslo.privsep.daemon [-] privsep: reply[261d506b-1359-41fd-94ef-cad3c2ef084f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:04:55 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:04:55.029 165364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2c51a04b-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
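
Note: DelPortCommand is queued by ovsdbapp's Open_vSwitch-schema API; the transaction above is exactly what api.del_port() produces. A minimal standalone equivalent (socket path and timeout are deployment defaults assumed here):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # One-command transaction, mirroring the logged DelPortCommand
    api.del_port('tap2c51a04b-20', if_exists=True).execute(check_error=True)
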
Feb  2 05:04:55 np0005604790 nova_compute[252672]: 2026-02-02 10:04:55.031 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:04:55 np0005604790 kernel: tap2c51a04b-20: left promiscuous mode
Feb  2 05:04:55 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:04:55.035 257524 DEBUG oslo.privsep.daemon [-] privsep: reply[29bfcaab-fe8d-4d47-99e9-09a14261c22b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:04:55 np0005604790 nova_compute[252672]: 2026-02-02 10:04:55.041 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:04:55 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:04:55.047 257524 DEBUG oslo.privsep.daemon [-] privsep: reply[866fef76-bc0f-458b-9c2c-33eb00e7d32b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:04:55 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:04:55.048 257524 DEBUG oslo.privsep.daemon [-] privsep: reply[3f4a125a-0c9d-44c1-8816-83b1c5066ef0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:04:55 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:04:55.066 257524 DEBUG oslo.privsep.daemon [-] privsep: reply[0047fe0e-a25c-4646-8a76-cb488d5b3adf]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 385203, 'reachable_time': 23779, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 260757, 'error': None, 'target': 'ovnmeta-2c51a04b-2353-4ec7-9aa3-a143234fb3c5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
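
Note: the reply above is a netlink RTM_NEWLINK dump taken inside the ovnmeta namespace; it shows only 'lo' is left, which is what clears the agent to delete the namespace. Roughly the same check with pyroute2, the library neutron's privileged ip_lib uses (the early-exit logic here is a sketch, not neutron's exact code):

    from pyroute2 import NetNS, netns

    name = 'ovnmeta-2c51a04b-2353-4ec7-9aa3-a143234fb3c5'
    with NetNS(name) as ns:
        devs = [link.get_attr('IFLA_IFNAME') for link in ns.get_links()]

    if devs == ['lo']:       # nothing but loopback left behind
        netns.remove(name)   # same effect as the remove_netns call below
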
Feb  2 05:04:55 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:04:55.069 166028 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2c51a04b-2353-4ec7-9aa3-a143234fb3c5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb  2 05:04:55 np0005604790 systemd[1]: run-netns-ovnmeta\x2d2c51a04b\x2d2353\x2d4ec7\x2d9aa3\x2da143234fb3c5.mount: Deactivated successfully.
Feb  2 05:04:55 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:04:55.069 166028 DEBUG oslo.privsep.daemon [-] privsep: reply[e1e613dd-9c73-488b-96d3-482a1bc065c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:04:55 np0005604790 nova_compute[252672]: 2026-02-02 10:04:55.083 252676 DEBUG nova.compute.manager [req-2ecb6c5b-abe0-4277-aa72-4e0d178082d4 req-14f4bec1-3abf-4f0d-9eba-5daf0322d991 b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] Received event network-vif-plugged-9a348207-ae0a-4c8e-b379-80035923d778 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 05:04:55 np0005604790 nova_compute[252672]: 2026-02-02 10:04:55.083 252676 DEBUG oslo_concurrency.lockutils [req-2ecb6c5b-abe0-4277-aa72-4e0d178082d4 req-14f4bec1-3abf-4f0d-9eba-5daf0322d991 b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Acquiring lock "3aba266a-af9d-4454-937a-ca3d562d7140-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 05:04:55 np0005604790 nova_compute[252672]: 2026-02-02 10:04:55.084 252676 DEBUG oslo_concurrency.lockutils [req-2ecb6c5b-abe0-4277-aa72-4e0d178082d4 req-14f4bec1-3abf-4f0d-9eba-5daf0322d991 b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Lock "3aba266a-af9d-4454-937a-ca3d562d7140-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 05:04:55 np0005604790 nova_compute[252672]: 2026-02-02 10:04:55.084 252676 DEBUG oslo_concurrency.lockutils [req-2ecb6c5b-abe0-4277-aa72-4e0d178082d4 req-14f4bec1-3abf-4f0d-9eba-5daf0322d991 b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Lock "3aba266a-af9d-4454-937a-ca3d562d7140-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 05:04:55 np0005604790 nova_compute[252672]: 2026-02-02 10:04:55.084 252676 DEBUG nova.compute.manager [req-2ecb6c5b-abe0-4277-aa72-4e0d178082d4 req-14f4bec1-3abf-4f0d-9eba-5daf0322d991 b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] No waiting events found dispatching network-vif-plugged-9a348207-ae0a-4c8e-b379-80035923d778 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 05:04:55 np0005604790 nova_compute[252672]: 2026-02-02 10:04:55.084 252676 WARNING nova.compute.manager [req-2ecb6c5b-abe0-4277-aa72-4e0d178082d4 req-14f4bec1-3abf-4f0d-9eba-5daf0322d991 b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] Received unexpected event network-vif-plugged-9a348207-ae0a-4c8e-b379-80035923d778 for instance with vm_state active and task_state None.#033[00m
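
Note: the Acquiring/acquired/released triple around each external event is oslo.concurrency's lockutils guarding the per-instance pending-events dict; the "unexpected event" WARNING only means no waiter had registered for that event. The locking pattern, reduced to its core (the dict below stands in for nova's internal state):

    from oslo_concurrency import lockutils

    uuid = '3aba266a-af9d-4454-937a-ca3d562d7140'
    pending = {}  # event-name -> waiter, normally populated by the manager

    with lockutils.lock(uuid + '-events'):
        # pop_instance_event mutates the dict only while holding this lock
        waiter = pending.pop('network-vif-plugged', None)
    if waiter is None:
        print('No waiting events found')  # hence the WARNING above
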
Feb  2 05:04:55 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:04:55 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2400001b20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:04:55 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:04:55 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2408001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:04:55 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:04:55 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:04:55 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:04:55.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
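
Note: the anonymous "HEAD / HTTP/1.0" requests hitting radosgw every couple of seconds from 192.168.122.100/.102 have the shape of load-balancer health probes. An equivalent probe for reference (host and port below are placeholders, not taken from this log):

    import http.client

    conn = http.client.HTTPConnection('rgw.example.com', 8080, timeout=2)
    conn.request('HEAD', '/')
    status = conn.getresponse().status    # 200 marks the backend healthy
    conn.close()
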
Feb  2 05:04:56 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:04:56 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:04:56 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:04:56.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:04:56 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v766: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 3.3 KiB/s wr, 0 op/s
Feb  2 05:04:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:04:56 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2428002950 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:04:57 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:04:57.113Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
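
Note: the alertmanager error means its webhook POSTs to the ceph-dashboard receivers on compute-1/compute-2 timed out ("context deadline exceeded"), so the alert was dropped after two retries. For reference, a stand-in receiver that shows the webhook contract (this is not the dashboard's implementation):

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class PrometheusReceiver(BaseHTTPRequestHandler):
        def do_POST(self):
            # Alertmanager POSTs JSON with an "alerts" list to the receiver
            body = self.rfile.read(int(self.headers['Content-Length']))
            alerts = json.loads(body).get('alerts', [])
            print('received %d alert(s)' % len(alerts))
            self.send_response(200)
            self.end_headers()

    HTTPServer(('', 8443), PrometheusReceiver).serve_forever()
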
Feb  2 05:04:57 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:04:57 np0005604790 nova_compute[252672]: 2026-02-02 10:04:57.403 252676 DEBUG nova.compute.manager [req-eeba319f-3735-4292-9136-43ce185f508b req-aa232e43-4d1a-4606-af7b-59cde7800c84 b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] Received event network-vif-unplugged-9a348207-ae0a-4c8e-b379-80035923d778 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 05:04:57 np0005604790 nova_compute[252672]: 2026-02-02 10:04:57.404 252676 DEBUG oslo_concurrency.lockutils [req-eeba319f-3735-4292-9136-43ce185f508b req-aa232e43-4d1a-4606-af7b-59cde7800c84 b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Acquiring lock "3aba266a-af9d-4454-937a-ca3d562d7140-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 05:04:57 np0005604790 nova_compute[252672]: 2026-02-02 10:04:57.404 252676 DEBUG oslo_concurrency.lockutils [req-eeba319f-3735-4292-9136-43ce185f508b req-aa232e43-4d1a-4606-af7b-59cde7800c84 b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Lock "3aba266a-af9d-4454-937a-ca3d562d7140-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 05:04:57 np0005604790 nova_compute[252672]: 2026-02-02 10:04:57.404 252676 DEBUG oslo_concurrency.lockutils [req-eeba319f-3735-4292-9136-43ce185f508b req-aa232e43-4d1a-4606-af7b-59cde7800c84 b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Lock "3aba266a-af9d-4454-937a-ca3d562d7140-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 05:04:57 np0005604790 nova_compute[252672]: 2026-02-02 10:04:57.404 252676 DEBUG nova.compute.manager [req-eeba319f-3735-4292-9136-43ce185f508b req-aa232e43-4d1a-4606-af7b-59cde7800c84 b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] No waiting events found dispatching network-vif-unplugged-9a348207-ae0a-4c8e-b379-80035923d778 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 05:04:57 np0005604790 nova_compute[252672]: 2026-02-02 10:04:57.404 252676 WARNING nova.compute.manager [req-eeba319f-3735-4292-9136-43ce185f508b req-aa232e43-4d1a-4606-af7b-59cde7800c84 b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] Received unexpected event network-vif-unplugged-9a348207-ae0a-4c8e-b379-80035923d778 for instance with vm_state active and task_state None.#033[00m
Feb  2 05:04:57 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:04:57 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f240c0023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:04:57 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:04:57 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24000032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:04:57 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:04:57 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:04:57 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:04:57.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:04:58 np0005604790 nova_compute[252672]: 2026-02-02 10:04:58.054 252676 DEBUG oslo_concurrency.lockutils [None req-d36c38cb-6888-4343-91b0-8a11029a61ac 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Acquiring lock "refresh_cache-3aba266a-af9d-4454-937a-ca3d562d7140" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 05:04:58 np0005604790 nova_compute[252672]: 2026-02-02 10:04:58.055 252676 DEBUG oslo_concurrency.lockutils [None req-d36c38cb-6888-4343-91b0-8a11029a61ac 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Acquired lock "refresh_cache-3aba266a-af9d-4454-937a-ca3d562d7140" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 05:04:58 np0005604790 nova_compute[252672]: 2026-02-02 10:04:58.055 252676 DEBUG nova.network.neutron [None req-d36c38cb-6888-4343-91b0-8a11029a61ac 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb  2 05:04:58 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:04:58 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:04:58 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:04:58.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:04:58 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v767: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 6.6 KiB/s rd, 3.3 KiB/s wr, 1 op/s
Feb  2 05:04:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:04:58 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24080036e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:04:59 np0005604790 ovn_controller[154631]: 2026-02-02T10:04:59Z|00053|binding|INFO|Releasing lport 7b523ab2-914d-4d5a-8cf0-5f452641a7fa from this chassis (sb_readonly=0)
Feb  2 05:04:59 np0005604790 nova_compute[252672]: 2026-02-02 10:04:59.110 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:04:59 np0005604790 nova_compute[252672]: 2026-02-02 10:04:59.263 252676 INFO nova.network.neutron [None req-d36c38cb-6888-4343-91b0-8a11029a61ac 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] Port 9a348207-ae0a-4c8e-b379-80035923d778 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Feb  2 05:04:59 np0005604790 nova_compute[252672]: 2026-02-02 10:04:59.264 252676 DEBUG nova.network.neutron [None req-d36c38cb-6888-4343-91b0-8a11029a61ac 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] Updating instance_info_cache with network_info: [{"id": "e8aea164-d544-4241-b141-038f3e866bd3", "address": "fa:16:3e:fc:13:4a", "network": {"id": "43244da2-ad24-493a-be04-b3f920faba77", "bridge": "br-int", "label": "tempest-network-smoke--1590532484", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "efbfe697ca674d72b47da5adf3e42c0c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8aea164-d5", "ovs_interfaceid": "e8aea164-d544-4241-b141-038f3e866bd3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 05:04:59 np0005604790 nova_compute[252672]: 2026-02-02 10:04:59.286 252676 DEBUG oslo_concurrency.lockutils [None req-d36c38cb-6888-4343-91b0-8a11029a61ac 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Releasing lock "refresh_cache-3aba266a-af9d-4454-937a-ca3d562d7140" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 05:04:59 np0005604790 nova_compute[252672]: 2026-02-02 10:04:59.313 252676 DEBUG oslo_concurrency.lockutils [None req-d36c38cb-6888-4343-91b0-8a11029a61ac 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Lock "interface-3aba266a-af9d-4454-937a-ca3d562d7140-9a348207-ae0a-4c8e-b379-80035923d778" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 4.777s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 05:04:59 np0005604790 nova_compute[252672]: 2026-02-02 10:04:59.543 252676 DEBUG nova.compute.manager [req-f5a0766c-3141-490f-a8b0-07e417335065 req-6a29c693-9ce2-473e-a904-2cec11d5b555 b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] Received event network-vif-plugged-9a348207-ae0a-4c8e-b379-80035923d778 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 05:04:59 np0005604790 nova_compute[252672]: 2026-02-02 10:04:59.544 252676 DEBUG oslo_concurrency.lockutils [req-f5a0766c-3141-490f-a8b0-07e417335065 req-6a29c693-9ce2-473e-a904-2cec11d5b555 b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Acquiring lock "3aba266a-af9d-4454-937a-ca3d562d7140-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 05:04:59 np0005604790 nova_compute[252672]: 2026-02-02 10:04:59.544 252676 DEBUG oslo_concurrency.lockutils [req-f5a0766c-3141-490f-a8b0-07e417335065 req-6a29c693-9ce2-473e-a904-2cec11d5b555 b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Lock "3aba266a-af9d-4454-937a-ca3d562d7140-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 05:04:59 np0005604790 nova_compute[252672]: 2026-02-02 10:04:59.545 252676 DEBUG oslo_concurrency.lockutils [req-f5a0766c-3141-490f-a8b0-07e417335065 req-6a29c693-9ce2-473e-a904-2cec11d5b555 b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Lock "3aba266a-af9d-4454-937a-ca3d562d7140-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 05:04:59 np0005604790 nova_compute[252672]: 2026-02-02 10:04:59.545 252676 DEBUG nova.compute.manager [req-f5a0766c-3141-490f-a8b0-07e417335065 req-6a29c693-9ce2-473e-a904-2cec11d5b555 b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] No waiting events found dispatching network-vif-plugged-9a348207-ae0a-4c8e-b379-80035923d778 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 05:04:59 np0005604790 nova_compute[252672]: 2026-02-02 10:04:59.545 252676 WARNING nova.compute.manager [req-f5a0766c-3141-490f-a8b0-07e417335065 req-6a29c693-9ce2-473e-a904-2cec11d5b555 b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] Received unexpected event network-vif-plugged-9a348207-ae0a-4c8e-b379-80035923d778 for instance with vm_state active and task_state None.#033[00m
Feb  2 05:04:59 np0005604790 nova_compute[252672]: 2026-02-02 10:04:59.546 252676 DEBUG nova.compute.manager [req-f5a0766c-3141-490f-a8b0-07e417335065 req-6a29c693-9ce2-473e-a904-2cec11d5b555 b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] Received event network-vif-deleted-9a348207-ae0a-4c8e-b379-80035923d778 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 05:04:59 np0005604790 nova_compute[252672]: 2026-02-02 10:04:59.768 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:04:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:04:59 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2428002950 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:04:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:04:59 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f240c002580 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:04:59 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:04:59 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:04:59 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:04:59.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:04:59 np0005604790 nova_compute[252672]: 2026-02-02 10:04:59.917 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:04:59 np0005604790 nova_compute[252672]: 2026-02-02 10:04:59.932 252676 DEBUG oslo_concurrency.lockutils [None req-6d4744c5-fae0-42e6-abe4-826aefe93732 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Acquiring lock "3aba266a-af9d-4454-937a-ca3d562d7140" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 05:04:59 np0005604790 nova_compute[252672]: 2026-02-02 10:04:59.933 252676 DEBUG oslo_concurrency.lockutils [None req-6d4744c5-fae0-42e6-abe4-826aefe93732 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Lock "3aba266a-af9d-4454-937a-ca3d562d7140" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 05:04:59 np0005604790 nova_compute[252672]: 2026-02-02 10:04:59.933 252676 DEBUG oslo_concurrency.lockutils [None req-6d4744c5-fae0-42e6-abe4-826aefe93732 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Acquiring lock "3aba266a-af9d-4454-937a-ca3d562d7140-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 05:04:59 np0005604790 nova_compute[252672]: 2026-02-02 10:04:59.933 252676 DEBUG oslo_concurrency.lockutils [None req-6d4744c5-fae0-42e6-abe4-826aefe93732 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Lock "3aba266a-af9d-4454-937a-ca3d562d7140-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 05:04:59 np0005604790 nova_compute[252672]: 2026-02-02 10:04:59.934 252676 DEBUG oslo_concurrency.lockutils [None req-6d4744c5-fae0-42e6-abe4-826aefe93732 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Lock "3aba266a-af9d-4454-937a-ca3d562d7140-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 05:04:59 np0005604790 nova_compute[252672]: 2026-02-02 10:04:59.935 252676 INFO nova.compute.manager [None req-6d4744c5-fae0-42e6-abe4-826aefe93732 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] Terminating instance#033[00m
Feb  2 05:04:59 np0005604790 nova_compute[252672]: 2026-02-02 10:04:59.936 252676 DEBUG nova.compute.manager [None req-6d4744c5-fae0-42e6-abe4-826aefe93732 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb  2 05:04:59 np0005604790 kernel: tape8aea164-d5 (unregistering): left promiscuous mode
Feb  2 05:05:00 np0005604790 NetworkManager[49024]: <info>  [1770026699.9997] device (tape8aea164-d5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb  2 05:05:00 np0005604790 ovn_controller[154631]: 2026-02-02T10:05:00Z|00054|binding|INFO|Releasing lport e8aea164-d544-4241-b141-038f3e866bd3 from this chassis (sb_readonly=0)
Feb  2 05:05:00 np0005604790 ovn_controller[154631]: 2026-02-02T10:05:00Z|00055|binding|INFO|Setting lport e8aea164-d544-4241-b141-038f3e866bd3 down in Southbound
Feb  2 05:05:00 np0005604790 nova_compute[252672]: 2026-02-02 10:05:00.008 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:05:00 np0005604790 ovn_controller[154631]: 2026-02-02T10:05:00Z|00056|binding|INFO|Removing iface tape8aea164-d5 ovn-installed in OVS
Feb  2 05:05:00 np0005604790 nova_compute[252672]: 2026-02-02 10:05:00.011 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:05:00 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:05:00.020 165364 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fc:13:4a 10.100.0.12'], port_security=['fa:16:3e:fc:13:4a 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '3aba266a-af9d-4454-937a-ca3d562d7140', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43244da2-ad24-493a-be04-b3f920faba77', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'efbfe697ca674d72b47da5adf3e42c0c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3b4838c9-599e-43e1-a853-e98db3d912cf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=410d0273-b56a-4a25-b2e1-2c096529cc47, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa8c6e46640>], logical_port=e8aea164-d544-4241-b141-038f3e866bd3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fa8c6e46640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 05:05:00 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:05:00.022 165364 INFO neutron.agent.ovn.metadata.agent [-] Port e8aea164-d544-4241-b141-038f3e866bd3 in datapath 43244da2-ad24-493a-be04-b3f920faba77 unbound from our chassis#033[00m
Feb  2 05:05:00 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:05:00.025 165364 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 43244da2-ad24-493a-be04-b3f920faba77, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb  2 05:05:00 np0005604790 nova_compute[252672]: 2026-02-02 10:05:00.025 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:05:00 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:05:00.027 257524 DEBUG oslo.privsep.daemon [-] privsep: reply[081f02cd-469c-4ddd-a0dd-563c3c0846dd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:05:00 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:05:00.028 165364 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-43244da2-ad24-493a-be04-b3f920faba77 namespace which is not needed anymore#033[00m
Feb  2 05:05:00 np0005604790 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000003.scope: Deactivated successfully.
Feb  2 05:05:00 np0005604790 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000003.scope: Consumed 14.161s CPU time.
Feb  2 05:05:00 np0005604790 systemd-machined[219024]: Machine qemu-2-instance-00000003 terminated.
Feb  2 05:05:00 np0005604790 podman[260789]: 2026-02-02 10:05:00.109897901 +0000 UTC m=+0.076050874 container health_status 29bea7eb4976451e3dfdb1e9a3aba30b10e64cbce131bf8d09548b4a224f8ee8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
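
Note: health_status=healthy above comes from podman periodically running the configured test command ('/openstack/healthcheck') inside the container. The resulting state can be read back programmatically; a small sketch (format key assumed to match podman's docker-compatible inspect output):

    import json
    import subprocess

    out = subprocess.run(
        ['podman', 'inspect', '--format', '{{json .State.Health}}',
         'ovn_metadata_agent'],
        capture_output=True, text=True, check=True).stdout
    health = json.loads(out)
    print(health['Status'], health['FailingStreak'])  # e.g. healthy 0
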
Feb  2 05:05:00 np0005604790 nova_compute[252672]: 2026-02-02 10:05:00.160 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:05:00 np0005604790 nova_compute[252672]: 2026-02-02 10:05:00.165 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:05:00 np0005604790 nova_compute[252672]: 2026-02-02 10:05:00.173 252676 INFO nova.virt.libvirt.driver [-] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] Instance destroyed successfully.#033[00m
Feb  2 05:05:00 np0005604790 nova_compute[252672]: 2026-02-02 10:05:00.173 252676 DEBUG nova.objects.instance [None req-6d4744c5-fae0-42e6-abe4-826aefe93732 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Lazy-loading 'resources' on Instance uuid 3aba266a-af9d-4454-937a-ca3d562d7140 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 05:05:00 np0005604790 nova_compute[252672]: 2026-02-02 10:05:00.190 252676 DEBUG nova.virt.libvirt.vif [None req-6d4744c5-fae0-42e6-abe4-826aefe93732 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T10:04:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1261930807',display_name='tempest-TestNetworkBasicOps-server-1261930807',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1261930807',id=3,image_ref='d5e062d7-95ef-409c-9ad0-60f7cf6f44ce',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB27e0nJHIK58Z3ZCFxu9LfpabnYsIVBFEzTEWF3c/gr064x+O+jWsENBE8Yz1U86qtq3lzG/toFN1TQQpYsp7FBgfLCmIDAeD2/jIiciHozTuGuu580hwCQmvhv9zCqYg==',key_name='tempest-TestNetworkBasicOps-1916874117',keypairs=<?>,launch_index=0,launched_at=2026-02-02T10:04:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='efbfe697ca674d72b47da5adf3e42c0c',ramdisk_id='',reservation_id='r-b5tafoys',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='d5e062d7-95ef-409c-9ad0-60f7cf6f44ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-793549693',owner_user_name='tempest-TestNetworkBasicOps-793549693-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T10:04:21Z,user_data=None,user_id='1b1695a2a70d4aa0aa350ba17d8f6d5e',uuid=3aba266a-af9d-4454-937a-ca3d562d7140,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e8aea164-d544-4241-b141-038f3e866bd3", "address": "fa:16:3e:fc:13:4a", "network": {"id": "43244da2-ad24-493a-be04-b3f920faba77", "bridge": "br-int", "label": "tempest-network-smoke--1590532484", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "efbfe697ca674d72b47da5adf3e42c0c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8aea164-d5", "ovs_interfaceid": "e8aea164-d544-4241-b141-038f3e866bd3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb  2 05:05:00 np0005604790 nova_compute[252672]: 2026-02-02 10:05:00.191 252676 DEBUG nova.network.os_vif_util [None req-6d4744c5-fae0-42e6-abe4-826aefe93732 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Converting VIF {"id": "e8aea164-d544-4241-b141-038f3e866bd3", "address": "fa:16:3e:fc:13:4a", "network": {"id": "43244da2-ad24-493a-be04-b3f920faba77", "bridge": "br-int", "label": "tempest-network-smoke--1590532484", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "efbfe697ca674d72b47da5adf3e42c0c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape8aea164-d5", "ovs_interfaceid": "e8aea164-d544-4241-b141-038f3e866bd3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 05:05:00 np0005604790 nova_compute[252672]: 2026-02-02 10:05:00.192 252676 DEBUG nova.network.os_vif_util [None req-6d4744c5-fae0-42e6-abe4-826aefe93732 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:fc:13:4a,bridge_name='br-int',has_traffic_filtering=True,id=e8aea164-d544-4241-b141-038f3e866bd3,network=Network(43244da2-ad24-493a-be04-b3f920faba77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape8aea164-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 05:05:00 np0005604790 nova_compute[252672]: 2026-02-02 10:05:00.193 252676 DEBUG os_vif [None req-6d4744c5-fae0-42e6-abe4-826aefe93732 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:fc:13:4a,bridge_name='br-int',has_traffic_filtering=True,id=e8aea164-d544-4241-b141-038f3e866bd3,network=Network(43244da2-ad24-493a-be04-b3f920faba77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape8aea164-d5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
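
Note: the Converting/Converted pair above is nova translating its network_info dict into the os-vif object that unplug() accepts; os_vif then resolves the 'ovs' plugin from the object and deletes the tap port. A trimmed equivalent (only the fields relevant to unplug are populated; values are taken from the log lines above):

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()

    my_vif = vif.VIFOpenVSwitch(
        id='e8aea164-d544-4241-b141-038f3e866bd3',
        address='fa:16:3e:fc:13:4a',
        vif_name='tape8aea164-d5',
        bridge_name='br-int',
        network=network.Network(id='43244da2-ad24-493a-be04-b3f920faba77'))
    inst = instance_info.InstanceInfo(
        uuid='3aba266a-af9d-4454-937a-ca3d562d7140',
        name='tempest-TestNetworkBasicOps-server-1261930807')

    os_vif.unplug(my_vif, inst)   # dispatches to the 'ovs' plugin
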
Feb  2 05:05:00 np0005604790 neutron-haproxy-ovnmeta-43244da2-ad24-493a-be04-b3f920faba77[260322]: [NOTICE]   (260326) : haproxy version is 2.8.14-c23fe91
Feb  2 05:05:00 np0005604790 neutron-haproxy-ovnmeta-43244da2-ad24-493a-be04-b3f920faba77[260322]: [NOTICE]   (260326) : path to executable is /usr/sbin/haproxy
Feb  2 05:05:00 np0005604790 neutron-haproxy-ovnmeta-43244da2-ad24-493a-be04-b3f920faba77[260322]: [WARNING]  (260326) : Exiting Master process...
Feb  2 05:05:00 np0005604790 neutron-haproxy-ovnmeta-43244da2-ad24-493a-be04-b3f920faba77[260322]: [ALERT]    (260326) : Current worker (260328) exited with code 143 (Terminated)
Feb  2 05:05:00 np0005604790 neutron-haproxy-ovnmeta-43244da2-ad24-493a-be04-b3f920faba77[260322]: [WARNING]  (260326) : All workers exited. Exiting... (0)
Feb  2 05:05:00 np0005604790 nova_compute[252672]: 2026-02-02 10:05:00.199 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:05:00 np0005604790 nova_compute[252672]: 2026-02-02 10:05:00.200 252676 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape8aea164-d5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 05:05:00 np0005604790 systemd[1]: libpod-5892545e09ff63d32f1c7ec9110e59db366efbde31ece064fa879ab0c83544ea.scope: Deactivated successfully.
Feb  2 05:05:00 np0005604790 nova_compute[252672]: 2026-02-02 10:05:00.203 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:05:00 np0005604790 nova_compute[252672]: 2026-02-02 10:05:00.206 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 05:05:00 np0005604790 podman[260834]: 2026-02-02 10:05:00.207843651 +0000 UTC m=+0.062686034 container died 5892545e09ff63d32f1c7ec9110e59db366efbde31ece064fa879ab0c83544ea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc, name=neutron-haproxy-ovnmeta-43244da2-ad24-493a-be04-b3f920faba77, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 05:05:00 np0005604790 nova_compute[252672]: 2026-02-02 10:05:00.208 252676 INFO os_vif [None req-6d4744c5-fae0-42e6-abe4-826aefe93732 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:fc:13:4a,bridge_name='br-int',has_traffic_filtering=True,id=e8aea164-d544-4241-b141-038f3e866bd3,network=Network(43244da2-ad24-493a-be04-b3f920faba77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape8aea164-d5')#033[00m
Feb  2 05:05:00 np0005604790 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5892545e09ff63d32f1c7ec9110e59db366efbde31ece064fa879ab0c83544ea-userdata-shm.mount: Deactivated successfully.
Feb  2 05:05:00 np0005604790 nova_compute[252672]: 2026-02-02 10:05:00.239 252676 DEBUG nova.compute.manager [req-7b370daf-15c7-4062-8749-9c3f63f41d89 req-f0ae7b4f-be94-4796-9ba2-a27665068aca b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] Received event network-vif-unplugged-e8aea164-d544-4241-b141-038f3e866bd3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 05:05:00 np0005604790 nova_compute[252672]: 2026-02-02 10:05:00.240 252676 DEBUG oslo_concurrency.lockutils [req-7b370daf-15c7-4062-8749-9c3f63f41d89 req-f0ae7b4f-be94-4796-9ba2-a27665068aca b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Acquiring lock "3aba266a-af9d-4454-937a-ca3d562d7140-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 05:05:00 np0005604790 nova_compute[252672]: 2026-02-02 10:05:00.240 252676 DEBUG oslo_concurrency.lockutils [req-7b370daf-15c7-4062-8749-9c3f63f41d89 req-f0ae7b4f-be94-4796-9ba2-a27665068aca b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Lock "3aba266a-af9d-4454-937a-ca3d562d7140-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 05:05:00 np0005604790 nova_compute[252672]: 2026-02-02 10:05:00.240 252676 DEBUG oslo_concurrency.lockutils [req-7b370daf-15c7-4062-8749-9c3f63f41d89 req-f0ae7b4f-be94-4796-9ba2-a27665068aca b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Lock "3aba266a-af9d-4454-937a-ca3d562d7140-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 05:05:00 np0005604790 nova_compute[252672]: 2026-02-02 10:05:00.241 252676 DEBUG nova.compute.manager [req-7b370daf-15c7-4062-8749-9c3f63f41d89 req-f0ae7b4f-be94-4796-9ba2-a27665068aca b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] No waiting events found dispatching network-vif-unplugged-e8aea164-d544-4241-b141-038f3e866bd3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 05:05:00 np0005604790 systemd[1]: var-lib-containers-storage-overlay-0ba4a3e39263c7ff3e897de25fdc6191ba975a22f2ebb990e9405a0250301ac3-merged.mount: Deactivated successfully.
Feb  2 05:05:00 np0005604790 nova_compute[252672]: 2026-02-02 10:05:00.241 252676 DEBUG nova.compute.manager [req-7b370daf-15c7-4062-8749-9c3f63f41d89 req-f0ae7b4f-be94-4796-9ba2-a27665068aca b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] Received event network-vif-unplugged-e8aea164-d544-4241-b141-038f3e866bd3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
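The Acquiring/acquired/released triple above is oslo.concurrency's lockutils serializing access to the per-instance event list. A sketch of the decorator pattern that produces exactly those DEBUG lines; the function body is a stand-in, not Nova's actual pop_instance_event:

    from oslo_concurrency import lockutils

    # Lock name follows Nova's per-instance "<uuid>-events" convention.
    @lockutils.synchronized('3aba266a-af9d-4454-937a-ca3d562d7140-events')
    def _pop_event():
        # Critical section: look up and remove the waiter for this event.
        return None

    _pop_event()  # emits the Acquiring/acquired/released DEBUG triple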
Feb  2 05:05:00 np0005604790 podman[260834]: 2026-02-02 10:05:00.247935648 +0000 UTC m=+0.102778041 container cleanup 5892545e09ff63d32f1c7ec9110e59db366efbde31ece064fa879ab0c83544ea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc, name=neutron-haproxy-ovnmeta-43244da2-ad24-493a-be04-b3f920faba77, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Feb  2 05:05:00 np0005604790 systemd[1]: libpod-conmon-5892545e09ff63d32f1c7ec9110e59db366efbde31ece064fa879ab0c83544ea.scope: Deactivated successfully.
Feb  2 05:05:00 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:05:00 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:05:00 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:05:00.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
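The anonymous "HEAD / HTTP/1.0" 200 entries recurring throughout this window are load-balancer health probes against radosgw's beast frontend. A minimal reproduction of one probe; the port is an assumption, since the log only records the client addresses:

    import http.client

    # Port 8080 is illustrative; the frontend port is not shown in the log.
    conn = http.client.HTTPConnection('192.168.122.102', 8080, timeout=2)
    conn.request('HEAD', '/')
    print(conn.getresponse().status)  # a healthy gateway answers 200, empty body
    conn.close()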
Feb  2 05:05:00 np0005604790 podman[260886]: 2026-02-02 10:05:00.334990476 +0000 UTC m=+0.061205135 container remove 5892545e09ff63d32f1c7ec9110e59db366efbde31ece064fa879ab0c83544ea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc, name=neutron-haproxy-ovnmeta-43244da2-ad24-493a-be04-b3f920faba77, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team)
Feb  2 05:05:00 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:05:00.342 257524 DEBUG oslo.privsep.daemon [-] privsep: reply[b29505e4-5629-4dd4-92a2-773a22834452]: (4, ('Mon Feb  2 10:05:00 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-43244da2-ad24-493a-be04-b3f920faba77 (5892545e09ff63d32f1c7ec9110e59db366efbde31ece064fa879ab0c83544ea)\n5892545e09ff63d32f1c7ec9110e59db366efbde31ece064fa879ab0c83544ea\nMon Feb  2 10:05:00 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-43244da2-ad24-493a-be04-b3f920faba77 (5892545e09ff63d32f1c7ec9110e59db366efbde31ece064fa879ab0c83544ea)\n5892545e09ff63d32f1c7ec9110e59db366efbde31ece064fa879ab0c83544ea\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:05:00 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:05:00.344 257524 DEBUG oslo.privsep.daemon [-] privsep: reply[64cf00e7-4eee-4434-955b-89b1d2514bfd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
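The "privsep: reply[...]" lines above and below are the unprivileged agent receiving serialized results from its oslo.privsep daemon. A sketch of how such an entrypoint is declared; the context name, pypath, and capability set here are assumptions for illustration:

    from oslo_privsep import capabilities, priv_context

    # Illustrative context; real agents define one per privilege profile.
    ctx = priv_context.PrivContext(
        'illustrative_agent',
        cfg_section='privsep',
        pypath=__name__ + '.ctx',
        capabilities=[capabilities.CAP_NET_ADMIN],
    )

    @ctx.entrypoint
    def remove_device(name):
        # Runs inside the privileged daemon; only the serialized return
        # value travels back, which is what the reply[...] lines record.
        return True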
Feb  2 05:05:00 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:05:00.345 165364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap43244da2-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 05:05:00 np0005604790 nova_compute[252672]: 2026-02-02 10:05:00.348 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:05:00 np0005604790 kernel: tap43244da2-a0: left promiscuous mode
Feb  2 05:05:00 np0005604790 nova_compute[252672]: 2026-02-02 10:05:00.353 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:05:00 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:05:00.357 257524 DEBUG oslo.privsep.daemon [-] privsep: reply[033607cf-faee-4203-98ff-e5885fc5ced5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:05:00 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:05:00.374 257524 DEBUG oslo.privsep.daemon [-] privsep: reply[4cb603f0-fd72-4845-a8f7-6928c0368039]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:05:00 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:05:00.377 257524 DEBUG oslo.privsep.daemon [-] privsep: reply[b057a8ba-a53b-4587-9758-7ab501b20507]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:05:00 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:05:00.394 257524 DEBUG oslo.privsep.daemon [-] privsep: reply[8973b9a4-b1b8-488b-afec-e36f722e41c3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 382015, 'reachable_time': 39158, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 260908, 'error': None, 'target': 'ovnmeta-43244da2-ad24-493a-be04-b3f920faba77', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:05:00 np0005604790 systemd[1]: run-netns-ovnmeta\x2d43244da2\x2dad24\x2d493a\x2dbe04\x2db3f920faba77.mount: Deactivated successfully.
Feb  2 05:05:00 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:05:00.399 166028 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-43244da2-ad24-493a-be04-b3f920faba77 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb  2 05:05:00 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:05:00.399 166028 DEBUG oslo.privsep.daemon [-] privsep: reply[c6930675-e206-4547-9e2f-ecebeba1f0c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
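The remove_netns call recorded above ultimately deletes the ovnmeta namespace via pyroute2, the library neutron's privileged ip_lib wraps. A sketch under that assumption; the namespace name is taken from the log line, and removal needs CAP_SYS_ADMIN, which is why it runs through privsep:

    import errno
    from pyroute2 import netns

    try:
        netns.remove('ovnmeta-43244da2-ad24-493a-be04-b3f920faba77')
    except OSError as exc:
        if exc.errno != errno.ENOENT:  # tolerate an already-deleted namespace
            raise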
Feb  2 05:05:00 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v768: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 6.6 KiB/s rd, 1023 B/s wr, 0 op/s
Feb  2 05:05:00 np0005604790 nova_compute[252672]: 2026-02-02 10:05:00.689 252676 INFO nova.virt.libvirt.driver [None req-6d4744c5-fae0-42e6-abe4-826aefe93732 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] Deleting instance files /var/lib/nova/instances/3aba266a-af9d-4454-937a-ca3d562d7140_del#033[00m
Feb  2 05:05:00 np0005604790 nova_compute[252672]: 2026-02-02 10:05:00.691 252676 INFO nova.virt.libvirt.driver [None req-6d4744c5-fae0-42e6-abe4-826aefe93732 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] Deletion of /var/lib/nova/instances/3aba266a-af9d-4454-937a-ca3d562d7140_del complete#033[00m
Feb  2 05:05:00 np0005604790 nova_compute[252672]: 2026-02-02 10:05:00.745 252676 INFO nova.compute.manager [None req-6d4744c5-fae0-42e6-abe4-826aefe93732 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] Took 0.81 seconds to destroy the instance on the hypervisor.#033[00m
Feb  2 05:05:00 np0005604790 nova_compute[252672]: 2026-02-02 10:05:00.746 252676 DEBUG oslo.service.loopingcall [None req-6d4744c5-fae0-42e6-abe4-826aefe93732 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb  2 05:05:00 np0005604790 nova_compute[252672]: 2026-02-02 10:05:00.747 252676 DEBUG nova.compute.manager [-] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb  2 05:05:00 np0005604790 nova_compute[252672]: 2026-02-02 10:05:00.747 252676 DEBUG nova.network.neutron [-] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb  2 05:05:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:05:00 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24000032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:05:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:05:01 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24080036e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:05:01 np0005604790 nova_compute[252672]: 2026-02-02 10:05:01.828 252676 DEBUG nova.compute.manager [req-37687445-6ae1-445b-9f0c-315413d508c8 req-9364c8f9-5b28-4c23-b755-a6ce614cdd57 b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] Received event network-changed-e8aea164-d544-4241-b141-038f3e866bd3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 05:05:01 np0005604790 nova_compute[252672]: 2026-02-02 10:05:01.829 252676 DEBUG nova.compute.manager [req-37687445-6ae1-445b-9f0c-315413d508c8 req-9364c8f9-5b28-4c23-b755-a6ce614cdd57 b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] Refreshing instance network info cache due to event network-changed-e8aea164-d544-4241-b141-038f3e866bd3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 05:05:01 np0005604790 nova_compute[252672]: 2026-02-02 10:05:01.829 252676 DEBUG oslo_concurrency.lockutils [req-37687445-6ae1-445b-9f0c-315413d508c8 req-9364c8f9-5b28-4c23-b755-a6ce614cdd57 b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Acquiring lock "refresh_cache-3aba266a-af9d-4454-937a-ca3d562d7140" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 05:05:01 np0005604790 nova_compute[252672]: 2026-02-02 10:05:01.830 252676 DEBUG oslo_concurrency.lockutils [req-37687445-6ae1-445b-9f0c-315413d508c8 req-9364c8f9-5b28-4c23-b755-a6ce614cdd57 b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Acquired lock "refresh_cache-3aba266a-af9d-4454-937a-ca3d562d7140" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 05:05:01 np0005604790 nova_compute[252672]: 2026-02-02 10:05:01.830 252676 DEBUG nova.network.neutron [req-37687445-6ae1-445b-9f0c-315413d508c8 req-9364c8f9-5b28-4c23-b755-a6ce614cdd57 b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] Refreshing network info cache for port e8aea164-d544-4241-b141-038f3e866bd3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 05:05:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:05:01 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2428002950 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:05:01 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:05:01 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:05:01 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:05:01.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:05:02 np0005604790 nova_compute[252672]: 2026-02-02 10:05:02.031 252676 INFO nova.network.neutron [req-37687445-6ae1-445b-9f0c-315413d508c8 req-9364c8f9-5b28-4c23-b755-a6ce614cdd57 b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] Port e8aea164-d544-4241-b141-038f3e866bd3 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Feb  2 05:05:02 np0005604790 nova_compute[252672]: 2026-02-02 10:05:02.032 252676 DEBUG nova.network.neutron [req-37687445-6ae1-445b-9f0c-315413d508c8 req-9364c8f9-5b28-4c23-b755-a6ce614cdd57 b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 05:05:02 np0005604790 nova_compute[252672]: 2026-02-02 10:05:02.059 252676 DEBUG oslo_concurrency.lockutils [req-37687445-6ae1-445b-9f0c-315413d508c8 req-9364c8f9-5b28-4c23-b755-a6ce614cdd57 b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Releasing lock "refresh_cache-3aba266a-af9d-4454-937a-ca3d562d7140" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 05:05:02 np0005604790 nova_compute[252672]: 2026-02-02 10:05:02.072 252676 DEBUG nova.network.neutron [-] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 05:05:02 np0005604790 nova_compute[252672]: 2026-02-02 10:05:02.083 252676 INFO nova.compute.manager [-] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] Took 1.34 seconds to deallocate network for instance.#033[00m
Feb  2 05:05:02 np0005604790 nova_compute[252672]: 2026-02-02 10:05:02.118 252676 DEBUG oslo_concurrency.lockutils [None req-6d4744c5-fae0-42e6-abe4-826aefe93732 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 05:05:02 np0005604790 nova_compute[252672]: 2026-02-02 10:05:02.119 252676 DEBUG oslo_concurrency.lockutils [None req-6d4744c5-fae0-42e6-abe4-826aefe93732 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 05:05:02 np0005604790 nova_compute[252672]: 2026-02-02 10:05:02.175 252676 DEBUG oslo_concurrency.processutils [None req-6d4744c5-fae0-42e6-abe4-826aefe93732 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 05:05:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 05:05:02 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 05:05:02 np0005604790 nova_compute[252672]: 2026-02-02 10:05:02.293 252676 DEBUG nova.compute.manager [req-40d10815-01d3-47cd-bc7a-1ad41ce04887 req-a597a90d-26ef-4257-b309-5c07c300cfde b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] Received event network-vif-plugged-e8aea164-d544-4241-b141-038f3e866bd3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 05:05:02 np0005604790 nova_compute[252672]: 2026-02-02 10:05:02.294 252676 DEBUG oslo_concurrency.lockutils [req-40d10815-01d3-47cd-bc7a-1ad41ce04887 req-a597a90d-26ef-4257-b309-5c07c300cfde b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Acquiring lock "3aba266a-af9d-4454-937a-ca3d562d7140-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 05:05:02 np0005604790 nova_compute[252672]: 2026-02-02 10:05:02.295 252676 DEBUG oslo_concurrency.lockutils [req-40d10815-01d3-47cd-bc7a-1ad41ce04887 req-a597a90d-26ef-4257-b309-5c07c300cfde b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Lock "3aba266a-af9d-4454-937a-ca3d562d7140-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 05:05:02 np0005604790 nova_compute[252672]: 2026-02-02 10:05:02.295 252676 DEBUG oslo_concurrency.lockutils [req-40d10815-01d3-47cd-bc7a-1ad41ce04887 req-a597a90d-26ef-4257-b309-5c07c300cfde b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Lock "3aba266a-af9d-4454-937a-ca3d562d7140-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 05:05:02 np0005604790 nova_compute[252672]: 2026-02-02 10:05:02.295 252676 DEBUG nova.compute.manager [req-40d10815-01d3-47cd-bc7a-1ad41ce04887 req-a597a90d-26ef-4257-b309-5c07c300cfde b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] No waiting events found dispatching network-vif-plugged-e8aea164-d544-4241-b141-038f3e866bd3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 05:05:02 np0005604790 nova_compute[252672]: 2026-02-02 10:05:02.296 252676 WARNING nova.compute.manager [req-40d10815-01d3-47cd-bc7a-1ad41ce04887 req-a597a90d-26ef-4257-b309-5c07c300cfde b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] Received unexpected event network-vif-plugged-e8aea164-d544-4241-b141-038f3e866bd3 for instance with vm_state deleted and task_state None.#033[00m
Feb  2 05:05:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:05:02 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:05:02 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:05:02 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:05:02.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:05:02 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v769: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 6.6 KiB/s rd, 1023 B/s wr, 0 op/s
Feb  2 05:05:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 05:05:02 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3041023965' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb  2 05:05:02 np0005604790 nova_compute[252672]: 2026-02-02 10:05:02.635 252676 DEBUG oslo_concurrency.processutils [None req-6d4744c5-fae0-42e6-abe4-826aefe93732 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
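The Running cmd / returned: 0 pair above is oslo.concurrency's processutils shelling out to the Ceph CLI so Nova can size its RBD-backed disk inventory. A sketch of the same invocation; the field names below assume the usual 'stats' section of `ceph df --format=json` output:

    import json
    from oslo_concurrency import processutils

    # Same command line as the logged CMD; returns (stdout, stderr).
    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)['stats']
    print(stats['total_avail_bytes'] / (1 << 30), 'GiB available')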
Feb  2 05:05:02 np0005604790 nova_compute[252672]: 2026-02-02 10:05:02.641 252676 DEBUG nova.compute.provider_tree [None req-6d4744c5-fae0-42e6-abe4-826aefe93732 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Inventory has not changed in ProviderTree for provider: 9e3db6fc-2145-4f13-bc7c-f7ae57d4e004 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 05:05:02 np0005604790 nova_compute[252672]: 2026-02-02 10:05:02.662 252676 DEBUG nova.scheduler.client.report [None req-6d4744c5-fae0-42e6-abe4-826aefe93732 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Inventory has not changed for provider 9e3db6fc-2145-4f13-bc7c-f7ae57d4e004 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
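Placement derives schedulable capacity from the inventory data above as capacity = (total - reserved) * allocation_ratio per resource class, which is why this 8-vCPU host advertises room for 32 vCPUs. Worked out with the logged numbers:

    # capacity = (total - reserved) * allocation_ratio, per resource class
    inventory = {
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # MEMORY_MB 7167.0, VCPU 32.0, DISK_GB 52.2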
Feb  2 05:05:02 np0005604790 nova_compute[252672]: 2026-02-02 10:05:02.693 252676 DEBUG oslo_concurrency.lockutils [None req-6d4744c5-fae0-42e6-abe4-826aefe93732 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.574s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 05:05:02 np0005604790 nova_compute[252672]: 2026-02-02 10:05:02.720 252676 INFO nova.scheduler.client.report [None req-6d4744c5-fae0-42e6-abe4-826aefe93732 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Deleted allocations for instance 3aba266a-af9d-4454-937a-ca3d562d7140#033[00m
Feb  2 05:05:02 np0005604790 nova_compute[252672]: 2026-02-02 10:05:02.824 252676 DEBUG oslo_concurrency.lockutils [None req-6d4744c5-fae0-42e6-abe4-826aefe93732 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Lock "3aba266a-af9d-4454-937a-ca3d562d7140" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.892s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 05:05:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:05:02 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f240c003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:05:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:05:03 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2400003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:05:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:05:03 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2408004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:05:03 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:05:03 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:05:03 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:05:03.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:05:03 np0005604790 nova_compute[252672]: 2026-02-02 10:05:03.920 252676 DEBUG nova.compute.manager [req-5022b131-d68f-48e0-95ba-119e31386b89 req-b88b4a8f-6def-4226-a6f9-95f045004b5d b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] Received event network-vif-deleted-e8aea164-d544-4241-b141-038f3e866bd3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 05:05:04 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:05:04 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:05:04 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:05:04.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:05:04 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v770: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 6.2 KiB/s wr, 29 op/s
Feb  2 05:05:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:05:04 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2428002950 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:05:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:05:04] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Feb  2 05:05:04 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:05:04] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Feb  2 05:05:04 np0005604790 nova_compute[252672]: 2026-02-02 10:05:04.916 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:05:05 np0005604790 nova_compute[252672]: 2026-02-02 10:05:05.202 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:05:05 np0005604790 nova_compute[252672]: 2026-02-02 10:05:05.281 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 05:05:05 np0005604790 nova_compute[252672]: 2026-02-02 10:05:05.304 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 05:05:05 np0005604790 nova_compute[252672]: 2026-02-02 10:05:05.304 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 05:05:05 np0005604790 nova_compute[252672]: 2026-02-02 10:05:05.305 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 05:05:05 np0005604790 nova_compute[252672]: 2026-02-02 10:05:05.305 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb  2 05:05:05 np0005604790 nova_compute[252672]: 2026-02-02 10:05:05.305 252676 DEBUG oslo_concurrency.processutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 05:05:05 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 05:05:05 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1277977808' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb  2 05:05:05 np0005604790 nova_compute[252672]: 2026-02-02 10:05:05.771 252676 DEBUG oslo_concurrency.processutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 05:05:05 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:05:05 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f240c003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:05:05 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:05:05 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2400003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:05:05 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:05:05 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:05:05 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:05:05.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:05:05 np0005604790 nova_compute[252672]: 2026-02-02 10:05:05.917 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:05:05 np0005604790 nova_compute[252672]: 2026-02-02 10:05:05.928 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:05:06 np0005604790 nova_compute[252672]: 2026-02-02 10:05:06.006 252676 WARNING nova.virt.libvirt.driver [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 05:05:06 np0005604790 nova_compute[252672]: 2026-02-02 10:05:06.007 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4524MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb  2 05:05:06 np0005604790 nova_compute[252672]: 2026-02-02 10:05:06.007 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 05:05:06 np0005604790 nova_compute[252672]: 2026-02-02 10:05:06.008 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 05:05:06 np0005604790 nova_compute[252672]: 2026-02-02 10:05:06.066 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb  2 05:05:06 np0005604790 nova_compute[252672]: 2026-02-02 10:05:06.067 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb  2 05:05:06 np0005604790 nova_compute[252672]: 2026-02-02 10:05:06.087 252676 DEBUG oslo_concurrency.processutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 05:05:06 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:05:06 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:05:06 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:05:06.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:05:06 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 05:05:06 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/462922132' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb  2 05:05:06 np0005604790 nova_compute[252672]: 2026-02-02 10:05:06.568 252676 DEBUG oslo_concurrency.processutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 05:05:06 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v771: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 5.2 KiB/s wr, 28 op/s
Feb  2 05:05:06 np0005604790 nova_compute[252672]: 2026-02-02 10:05:06.575 252676 DEBUG nova.compute.provider_tree [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Inventory has not changed in ProviderTree for provider: 9e3db6fc-2145-4f13-bc7c-f7ae57d4e004 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 05:05:06 np0005604790 nova_compute[252672]: 2026-02-02 10:05:06.592 252676 DEBUG nova.scheduler.client.report [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Inventory has not changed for provider 9e3db6fc-2145-4f13-bc7c-f7ae57d4e004 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 05:05:06 np0005604790 nova_compute[252672]: 2026-02-02 10:05:06.626 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb  2 05:05:06 np0005604790 nova_compute[252672]: 2026-02-02 10:05:06.627 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.619s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 05:05:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:05:06 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2408004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:05:07 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:05:07.113Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:05:07 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:05:07 np0005604790 nova_compute[252672]: 2026-02-02 10:05:07.623 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 05:05:07 np0005604790 nova_compute[252672]: 2026-02-02 10:05:07.623 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 05:05:07 np0005604790 nova_compute[252672]: 2026-02-02 10:05:07.624 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb  2 05:05:07 np0005604790 nova_compute[252672]: 2026-02-02 10:05:07.624 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb  2 05:05:07 np0005604790 nova_compute[252672]: 2026-02-02 10:05:07.651 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb  2 05:05:07 np0005604790 nova_compute[252672]: 2026-02-02 10:05:07.651 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 05:05:07 np0005604790 nova_compute[252672]: 2026-02-02 10:05:07.652 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 05:05:07 np0005604790 nova_compute[252672]: 2026-02-02 10:05:07.652 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 05:05:07 np0005604790 nova_compute[252672]: 2026-02-02 10:05:07.653 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb  2 05:05:07 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:05:07 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2428002950 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:05:07 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:05:07 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f240c003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:05:07 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:05:07 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:05:07 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:05:07.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:05:08 np0005604790 nova_compute[252672]: 2026-02-02 10:05:08.283 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 05:05:08 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:05:08 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:05:08 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:05:08.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:05:08 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v772: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 5.2 KiB/s wr, 28 op/s
Feb  2 05:05:08 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:05:08 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f240c003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:05:09 np0005604790 nova_compute[252672]: 2026-02-02 10:05:09.282 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 05:05:09 np0005604790 nova_compute[252672]: 2026-02-02 10:05:09.283 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 05:05:09 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:05:09 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2408004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:05:09 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:05:09 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2428002950 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:05:09 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:05:09 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:05:09 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:05:09.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:05:09 np0005604790 nova_compute[252672]: 2026-02-02 10:05:09.922 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:05:10 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 05:05:10 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 05:05:10 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 05:05:10 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb  2 05:05:10 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 05:05:10 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:05:10 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Feb  2 05:05:10 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:05:10 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 05:05:10 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb  2 05:05:10 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 05:05:10 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb  2 05:05:10 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 05:05:10 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 05:05:10 np0005604790 nova_compute[252672]: 2026-02-02 10:05:10.203 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:05:10 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:05:10 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:05:10 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:05:10.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:05:10 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v773: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 5.2 KiB/s wr, 28 op/s
Feb  2 05:05:10 np0005604790 podman[261161]: 2026-02-02 10:05:10.659935957 +0000 UTC m=+0.068116080 container create 8a0e019e5d408adb314d2f5d0a5a4829709021d399c79b63a11442e1d5cbccfa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_golick, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 05:05:10 np0005604790 systemd[1]: Started libpod-conmon-8a0e019e5d408adb314d2f5d0a5a4829709021d399c79b63a11442e1d5cbccfa.scope.
Feb  2 05:05:10 np0005604790 podman[261161]: 2026-02-02 10:05:10.62428118 +0000 UTC m=+0.032461393 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:05:10 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb  2 05:05:10 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:05:10 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:05:10 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb  2 05:05:10 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:05:10 np0005604790 podman[261161]: 2026-02-02 10:05:10.755200975 +0000 UTC m=+0.163381158 container init 8a0e019e5d408adb314d2f5d0a5a4829709021d399c79b63a11442e1d5cbccfa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_golick, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 05:05:10 np0005604790 podman[261161]: 2026-02-02 10:05:10.76130837 +0000 UTC m=+0.169488523 container start 8a0e019e5d408adb314d2f5d0a5a4829709021d399c79b63a11442e1d5cbccfa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_golick, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 05:05:10 np0005604790 podman[261161]: 2026-02-02 10:05:10.765341498 +0000 UTC m=+0.173521721 container attach 8a0e019e5d408adb314d2f5d0a5a4829709021d399c79b63a11442e1d5cbccfa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_golick, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 05:05:10 np0005604790 focused_golick[261177]: 167 167
Feb  2 05:05:10 np0005604790 systemd[1]: libpod-8a0e019e5d408adb314d2f5d0a5a4829709021d399c79b63a11442e1d5cbccfa.scope: Deactivated successfully.
Feb  2 05:05:10 np0005604790 conmon[261177]: conmon 8a0e019e5d408adb314d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8a0e019e5d408adb314d2f5d0a5a4829709021d399c79b63a11442e1d5cbccfa.scope/container/memory.events
Feb  2 05:05:10 np0005604790 podman[261161]: 2026-02-02 10:05:10.768587385 +0000 UTC m=+0.176767568 container died 8a0e019e5d408adb314d2f5d0a5a4829709021d399c79b63a11442e1d5cbccfa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_golick, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 05:05:10 np0005604790 systemd[1]: var-lib-containers-storage-overlay-c4185c06f8c41a19c12e7c5f64437802cfe8fe0a7253e7cc53e1cf772d6e3a6c-merged.mount: Deactivated successfully.
Feb  2 05:05:10 np0005604790 podman[261161]: 2026-02-02 10:05:10.816761409 +0000 UTC m=+0.224941532 container remove 8a0e019e5d408adb314d2f5d0a5a4829709021d399c79b63a11442e1d5cbccfa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_golick, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Feb  2 05:05:10 np0005604790 systemd[1]: libpod-conmon-8a0e019e5d408adb314d2f5d0a5a4829709021d399c79b63a11442e1d5cbccfa.scope: Deactivated successfully.
Feb  2 05:05:10 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:05:10 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f240c003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:05:11 np0005604790 podman[261202]: 2026-02-02 10:05:11.0011464 +0000 UTC m=+0.060010762 container create ae1ce87239db181607ab7287732963afc034280b748c05b473bbdde6630e3bc6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_pasteur, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 05:05:11 np0005604790 systemd[1]: Started libpod-conmon-ae1ce87239db181607ab7287732963afc034280b748c05b473bbdde6630e3bc6.scope.
Feb  2 05:05:11 np0005604790 podman[261202]: 2026-02-02 10:05:10.978915203 +0000 UTC m=+0.037779535 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:05:11 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:05:11 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb04e16491e5501bb596d7c1aeac5fd536ad3bbc0e529f1ded7ed2d4003a913f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 05:05:11 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb04e16491e5501bb596d7c1aeac5fd536ad3bbc0e529f1ded7ed2d4003a913f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 05:05:11 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb04e16491e5501bb596d7c1aeac5fd536ad3bbc0e529f1ded7ed2d4003a913f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 05:05:11 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb04e16491e5501bb596d7c1aeac5fd536ad3bbc0e529f1ded7ed2d4003a913f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 05:05:11 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb04e16491e5501bb596d7c1aeac5fd536ad3bbc0e529f1ded7ed2d4003a913f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 05:05:11 np0005604790 podman[261202]: 2026-02-02 10:05:11.109366747 +0000 UTC m=+0.168231079 container init ae1ce87239db181607ab7287732963afc034280b748c05b473bbdde6630e3bc6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_pasteur, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb  2 05:05:11 np0005604790 podman[261202]: 2026-02-02 10:05:11.125711226 +0000 UTC m=+0.184575558 container start ae1ce87239db181607ab7287732963afc034280b748c05b473bbdde6630e3bc6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_pasteur, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 05:05:11 np0005604790 podman[261202]: 2026-02-02 10:05:11.133785852 +0000 UTC m=+0.192650164 container attach ae1ce87239db181607ab7287732963afc034280b748c05b473bbdde6630e3bc6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_pasteur, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb  2 05:05:11 np0005604790 eager_pasteur[261218]: --> passed data devices: 0 physical, 1 LVM
Feb  2 05:05:11 np0005604790 eager_pasteur[261218]: --> All data devices are unavailable
Feb  2 05:05:11 np0005604790 systemd[1]: libpod-ae1ce87239db181607ab7287732963afc034280b748c05b473bbdde6630e3bc6.scope: Deactivated successfully.
Feb  2 05:05:11 np0005604790 podman[261202]: 2026-02-02 10:05:11.54590463 +0000 UTC m=+0.604768962 container died ae1ce87239db181607ab7287732963afc034280b748c05b473bbdde6630e3bc6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_pasteur, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Feb  2 05:05:11 np0005604790 systemd[1]: var-lib-containers-storage-overlay-fb04e16491e5501bb596d7c1aeac5fd536ad3bbc0e529f1ded7ed2d4003a913f-merged.mount: Deactivated successfully.
Feb  2 05:05:11 np0005604790 podman[261202]: 2026-02-02 10:05:11.592482161 +0000 UTC m=+0.651346463 container remove ae1ce87239db181607ab7287732963afc034280b748c05b473bbdde6630e3bc6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_pasteur, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Feb  2 05:05:11 np0005604790 systemd[1]: libpod-conmon-ae1ce87239db181607ab7287732963afc034280b748c05b473bbdde6630e3bc6.scope: Deactivated successfully.
Feb  2 05:05:11 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:05:11 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2400003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:05:11 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:05:11 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:05:11 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:05:11.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:05:12 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:05:11 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2408004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:05:12 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:05:12 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:05:12 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:05:12 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:05:12.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:05:12 np0005604790 podman[261341]: 2026-02-02 10:05:12.518897278 +0000 UTC m=+0.046572681 container create a200ee15ccb403dca737ffc06a5b2f30cf2892c4bef7757f71ab6931c0c639b1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 05:05:12 np0005604790 systemd[1]: Started libpod-conmon-a200ee15ccb403dca737ffc06a5b2f30cf2892c4bef7757f71ab6931c0c639b1.scope.
Feb  2 05:05:12 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v774: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 5.2 KiB/s wr, 28 op/s
Feb  2 05:05:12 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:05:12 np0005604790 podman[261341]: 2026-02-02 10:05:12.499049915 +0000 UTC m=+0.026725398 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:05:12 np0005604790 podman[261341]: 2026-02-02 10:05:12.596507553 +0000 UTC m=+0.124182976 container init a200ee15ccb403dca737ffc06a5b2f30cf2892c4bef7757f71ab6931c0c639b1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_mirzakhani, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb  2 05:05:12 np0005604790 podman[261341]: 2026-02-02 10:05:12.604084746 +0000 UTC m=+0.131760189 container start a200ee15ccb403dca737ffc06a5b2f30cf2892c4bef7757f71ab6931c0c639b1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 05:05:12 np0005604790 podman[261341]: 2026-02-02 10:05:12.608752512 +0000 UTC m=+0.136427945 container attach a200ee15ccb403dca737ffc06a5b2f30cf2892c4bef7757f71ab6931c0c639b1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 05:05:12 np0005604790 condescending_mirzakhani[261357]: 167 167
Feb  2 05:05:12 np0005604790 systemd[1]: libpod-a200ee15ccb403dca737ffc06a5b2f30cf2892c4bef7757f71ab6931c0c639b1.scope: Deactivated successfully.
Feb  2 05:05:12 np0005604790 podman[261341]: 2026-02-02 10:05:12.610788546 +0000 UTC m=+0.138463979 container died a200ee15ccb403dca737ffc06a5b2f30cf2892c4bef7757f71ab6931c0c639b1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Feb  2 05:05:12 np0005604790 systemd[1]: var-lib-containers-storage-overlay-1735ac7f43b1ed1e0eb6cc9ed43165d6c45e6fdc0aea60f283687cad6c4f8837-merged.mount: Deactivated successfully.
Feb  2 05:05:12 np0005604790 podman[261341]: 2026-02-02 10:05:12.656047812 +0000 UTC m=+0.183723205 container remove a200ee15ccb403dca737ffc06a5b2f30cf2892c4bef7757f71ab6931c0c639b1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Feb  2 05:05:12 np0005604790 systemd[1]: libpod-conmon-a200ee15ccb403dca737ffc06a5b2f30cf2892c4bef7757f71ab6931c0c639b1.scope: Deactivated successfully.
Feb  2 05:05:12 np0005604790 podman[261381]: 2026-02-02 10:05:12.821107364 +0000 UTC m=+0.056256211 container create 6a2354fd34a955ad6b8af90d882844a06a224d46d1a19f69b35974149afe5b46 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb  2 05:05:12 np0005604790 systemd[1]: Started libpod-conmon-6a2354fd34a955ad6b8af90d882844a06a224d46d1a19f69b35974149afe5b46.scope.
Feb  2 05:05:12 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:05:12 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f240c003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:05:12 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:05:12 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f1a142d5c7f759fdfe89bf2f1f5645906b5b00d7830f742fb3d94aacdf99faf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 05:05:12 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f1a142d5c7f759fdfe89bf2f1f5645906b5b00d7830f742fb3d94aacdf99faf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 05:05:12 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f1a142d5c7f759fdfe89bf2f1f5645906b5b00d7830f742fb3d94aacdf99faf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 05:05:12 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f1a142d5c7f759fdfe89bf2f1f5645906b5b00d7830f742fb3d94aacdf99faf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 05:05:12 np0005604790 podman[261381]: 2026-02-02 10:05:12.79935487 +0000 UTC m=+0.034503717 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:05:12 np0005604790 podman[261381]: 2026-02-02 10:05:12.911255515 +0000 UTC m=+0.146404392 container init 6a2354fd34a955ad6b8af90d882844a06a224d46d1a19f69b35974149afe5b46 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 05:05:12 np0005604790 podman[261381]: 2026-02-02 10:05:12.924592703 +0000 UTC m=+0.159741540 container start 6a2354fd34a955ad6b8af90d882844a06a224d46d1a19f69b35974149afe5b46 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_maxwell, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb  2 05:05:12 np0005604790 podman[261381]: 2026-02-02 10:05:12.928701904 +0000 UTC m=+0.163850741 container attach 6a2354fd34a955ad6b8af90d882844a06a224d46d1a19f69b35974149afe5b46 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_maxwell, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Feb  2 05:05:13 np0005604790 vigilant_maxwell[261397]: {
Feb  2 05:05:13 np0005604790 vigilant_maxwell[261397]:    "1": [
Feb  2 05:05:13 np0005604790 vigilant_maxwell[261397]:        {
Feb  2 05:05:13 np0005604790 vigilant_maxwell[261397]:            "devices": [
Feb  2 05:05:13 np0005604790 vigilant_maxwell[261397]:                "/dev/loop3"
Feb  2 05:05:13 np0005604790 vigilant_maxwell[261397]:            ],
Feb  2 05:05:13 np0005604790 vigilant_maxwell[261397]:            "lv_name": "ceph_lv0",
Feb  2 05:05:13 np0005604790 vigilant_maxwell[261397]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 05:05:13 np0005604790 vigilant_maxwell[261397]:            "lv_size": "21470642176",
Feb  2 05:05:13 np0005604790 vigilant_maxwell[261397]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d241d473-9fcb-5f74-b163-f1ca4454e7f1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fabfc705-a3af-416c-81a4-3fd4d777fb5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 05:05:13 np0005604790 vigilant_maxwell[261397]:            "lv_uuid": "lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a",
Feb  2 05:05:13 np0005604790 vigilant_maxwell[261397]:            "name": "ceph_lv0",
Feb  2 05:05:13 np0005604790 vigilant_maxwell[261397]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 05:05:13 np0005604790 vigilant_maxwell[261397]:            "tags": {
Feb  2 05:05:13 np0005604790 vigilant_maxwell[261397]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 05:05:13 np0005604790 vigilant_maxwell[261397]:                "ceph.block_uuid": "lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a",
Feb  2 05:05:13 np0005604790 vigilant_maxwell[261397]:                "ceph.cephx_lockbox_secret": "",
Feb  2 05:05:13 np0005604790 vigilant_maxwell[261397]:                "ceph.cluster_fsid": "d241d473-9fcb-5f74-b163-f1ca4454e7f1",
Feb  2 05:05:13 np0005604790 vigilant_maxwell[261397]:                "ceph.cluster_name": "ceph",
Feb  2 05:05:13 np0005604790 vigilant_maxwell[261397]:                "ceph.crush_device_class": "",
Feb  2 05:05:13 np0005604790 vigilant_maxwell[261397]:                "ceph.encrypted": "0",
Feb  2 05:05:13 np0005604790 vigilant_maxwell[261397]:                "ceph.osd_fsid": "fabfc705-a3af-416c-81a4-3fd4d777fb5f",
Feb  2 05:05:13 np0005604790 vigilant_maxwell[261397]:                "ceph.osd_id": "1",
Feb  2 05:05:13 np0005604790 vigilant_maxwell[261397]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 05:05:13 np0005604790 vigilant_maxwell[261397]:                "ceph.type": "block",
Feb  2 05:05:13 np0005604790 vigilant_maxwell[261397]:                "ceph.vdo": "0",
Feb  2 05:05:13 np0005604790 vigilant_maxwell[261397]:                "ceph.with_tpm": "0"
Feb  2 05:05:13 np0005604790 vigilant_maxwell[261397]:            },
Feb  2 05:05:13 np0005604790 vigilant_maxwell[261397]:            "type": "block",
Feb  2 05:05:13 np0005604790 vigilant_maxwell[261397]:            "vg_name": "ceph_vg0"
Feb  2 05:05:13 np0005604790 vigilant_maxwell[261397]:        }
Feb  2 05:05:13 np0005604790 vigilant_maxwell[261397]:    ]
Feb  2 05:05:13 np0005604790 vigilant_maxwell[261397]: }
Feb  2 05:05:13 np0005604790 systemd[1]: libpod-6a2354fd34a955ad6b8af90d882844a06a224d46d1a19f69b35974149afe5b46.scope: Deactivated successfully.
Feb  2 05:05:13 np0005604790 podman[261381]: 2026-02-02 10:05:13.201570822 +0000 UTC m=+0.436719649 container died 6a2354fd34a955ad6b8af90d882844a06a224d46d1a19f69b35974149afe5b46 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_maxwell, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb  2 05:05:13 np0005604790 systemd[1]: var-lib-containers-storage-overlay-9f1a142d5c7f759fdfe89bf2f1f5645906b5b00d7830f742fb3d94aacdf99faf-merged.mount: Deactivated successfully.
Feb  2 05:05:13 np0005604790 podman[261381]: 2026-02-02 10:05:13.249459618 +0000 UTC m=+0.484608465 container remove 6a2354fd34a955ad6b8af90d882844a06a224d46d1a19f69b35974149afe5b46 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Feb  2 05:05:13 np0005604790 systemd[1]: libpod-conmon-6a2354fd34a955ad6b8af90d882844a06a224d46d1a19f69b35974149afe5b46.scope: Deactivated successfully.
Feb  2 05:05:13 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:05:13 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2428002950 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:05:13 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:05:13 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2408004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:05:13 np0005604790 podman[261510]: 2026-02-02 10:05:13.873567528 +0000 UTC m=+0.053341313 container create a8ce04befee1d252c322e90a6304898438b0531026abdc779cf58e43d2e6eed5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_chaum, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 05:05:13 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:05:13 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:05:13 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:05:13.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:05:13 np0005604790 systemd[1]: Started libpod-conmon-a8ce04befee1d252c322e90a6304898438b0531026abdc779cf58e43d2e6eed5.scope.
Feb  2 05:05:13 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:05:13 np0005604790 podman[261510]: 2026-02-02 10:05:13.944816041 +0000 UTC m=+0.124589906 container init a8ce04befee1d252c322e90a6304898438b0531026abdc779cf58e43d2e6eed5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_chaum, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb  2 05:05:13 np0005604790 podman[261510]: 2026-02-02 10:05:13.851534306 +0000 UTC m=+0.031308101 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:05:13 np0005604790 podman[261510]: 2026-02-02 10:05:13.951980394 +0000 UTC m=+0.131754199 container start a8ce04befee1d252c322e90a6304898438b0531026abdc779cf58e43d2e6eed5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb  2 05:05:13 np0005604790 vibrant_chaum[261531]: 167 167
Feb  2 05:05:13 np0005604790 systemd[1]: libpod-a8ce04befee1d252c322e90a6304898438b0531026abdc779cf58e43d2e6eed5.scope: Deactivated successfully.
Feb  2 05:05:13 np0005604790 conmon[261531]: conmon a8ce04befee1d252c322 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a8ce04befee1d252c322e90a6304898438b0531026abdc779cf58e43d2e6eed5.scope/container/memory.events
Feb  2 05:05:13 np0005604790 podman[261510]: 2026-02-02 10:05:13.956431143 +0000 UTC m=+0.136204998 container attach a8ce04befee1d252c322e90a6304898438b0531026abdc779cf58e43d2e6eed5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_chaum, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb  2 05:05:13 np0005604790 podman[261510]: 2026-02-02 10:05:13.956887926 +0000 UTC m=+0.136661761 container died a8ce04befee1d252c322e90a6304898438b0531026abdc779cf58e43d2e6eed5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Feb  2 05:05:13 np0005604790 systemd[1]: var-lib-containers-storage-overlay-f2ddaf894eafc87e7d871482f0c9435cf0ddc8f92b80b25aa614d80c9f88ce24-merged.mount: Deactivated successfully.
Feb  2 05:05:13 np0005604790 podman[261510]: 2026-02-02 10:05:13.9969013 +0000 UTC m=+0.176675115 container remove a8ce04befee1d252c322e90a6304898438b0531026abdc779cf58e43d2e6eed5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 05:05:14 np0005604790 systemd[1]: libpod-conmon-a8ce04befee1d252c322e90a6304898438b0531026abdc779cf58e43d2e6eed5.scope: Deactivated successfully.
Feb  2 05:05:14 np0005604790 podman[261555]: 2026-02-02 10:05:14.164138431 +0000 UTC m=+0.045081001 container create 1e82dd906331a03ef550e0ed8b5965616ae537b09495eaf4cadf00b95f2debb5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_tharp, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 05:05:14 np0005604790 systemd[1]: Started libpod-conmon-1e82dd906331a03ef550e0ed8b5965616ae537b09495eaf4cadf00b95f2debb5.scope.
Feb  2 05:05:14 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:05:14 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c507bb2a5fcecfcefbb6c4cd6ae81ef901822ed586591ae877cf2198bca27418/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 05:05:14 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c507bb2a5fcecfcefbb6c4cd6ae81ef901822ed586591ae877cf2198bca27418/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 05:05:14 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c507bb2a5fcecfcefbb6c4cd6ae81ef901822ed586591ae877cf2198bca27418/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 05:05:14 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c507bb2a5fcecfcefbb6c4cd6ae81ef901822ed586591ae877cf2198bca27418/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 05:05:14 np0005604790 podman[261555]: 2026-02-02 10:05:14.142926192 +0000 UTC m=+0.023868762 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:05:14 np0005604790 podman[261555]: 2026-02-02 10:05:14.254723144 +0000 UTC m=+0.135665684 container init 1e82dd906331a03ef550e0ed8b5965616ae537b09495eaf4cadf00b95f2debb5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_tharp, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Feb  2 05:05:14 np0005604790 podman[261555]: 2026-02-02 10:05:14.268631168 +0000 UTC m=+0.149573708 container start 1e82dd906331a03ef550e0ed8b5965616ae537b09495eaf4cadf00b95f2debb5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_tharp, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 05:05:14 np0005604790 podman[261555]: 2026-02-02 10:05:14.271302439 +0000 UTC m=+0.152244969 container attach 1e82dd906331a03ef550e0ed8b5965616ae537b09495eaf4cadf00b95f2debb5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 05:05:14 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:05:14 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:05:14 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:05:14.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:05:14 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v775: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 5.2 KiB/s wr, 28 op/s
Feb  2 05:05:14 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:05:14 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2408004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:05:14 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:05:14] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Feb  2 05:05:14 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:05:14] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Feb  2 05:05:14 np0005604790 nova_compute[252672]: 2026-02-02 10:05:14.947 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:05:15 np0005604790 lvm[261646]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 05:05:15 np0005604790 lvm[261646]: VG ceph_vg0 finished
Feb  2 05:05:15 np0005604790 amazing_tharp[261572]: {}
Feb  2 05:05:15 np0005604790 systemd[1]: libpod-1e82dd906331a03ef550e0ed8b5965616ae537b09495eaf4cadf00b95f2debb5.scope: Deactivated successfully.
Feb  2 05:05:15 np0005604790 systemd[1]: libpod-1e82dd906331a03ef550e0ed8b5965616ae537b09495eaf4cadf00b95f2debb5.scope: Consumed 1.323s CPU time.
Feb  2 05:05:15 np0005604790 podman[261555]: 2026-02-02 10:05:15.122729104 +0000 UTC m=+1.003671684 container died 1e82dd906331a03ef550e0ed8b5965616ae537b09495eaf4cadf00b95f2debb5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_tharp, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 05:05:15 np0005604790 systemd[1]: var-lib-containers-storage-overlay-c507bb2a5fcecfcefbb6c4cd6ae81ef901822ed586591ae877cf2198bca27418-merged.mount: Deactivated successfully.
Feb  2 05:05:15 np0005604790 nova_compute[252672]: 2026-02-02 10:05:15.170 252676 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1770026700.1695182, 3aba266a-af9d-4454-937a-ca3d562d7140 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 05:05:15 np0005604790 nova_compute[252672]: 2026-02-02 10:05:15.171 252676 INFO nova.compute.manager [-] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] VM Stopped (Lifecycle Event)#033[00m
Feb  2 05:05:15 np0005604790 podman[261555]: 2026-02-02 10:05:15.172024648 +0000 UTC m=+1.052967218 container remove 1e82dd906331a03ef550e0ed8b5965616ae537b09495eaf4cadf00b95f2debb5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_tharp, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 05:05:15 np0005604790 systemd[1]: libpod-conmon-1e82dd906331a03ef550e0ed8b5965616ae537b09495eaf4cadf00b95f2debb5.scope: Deactivated successfully.
Feb  2 05:05:15 np0005604790 nova_compute[252672]: 2026-02-02 10:05:15.195 252676 DEBUG nova.compute.manager [None req-e6afbc42-9aea-4531-8730-fca467c91bc1 - - - - - -] [instance: 3aba266a-af9d-4454-937a-ca3d562d7140] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 05:05:15 np0005604790 nova_compute[252672]: 2026-02-02 10:05:15.206 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:05:15 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 05:05:15 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:05:15 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 05:05:15 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:05:15 np0005604790 nova_compute[252672]: 2026-02-02 10:05:15.703 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:05:15 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:05:15.703 165364 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:4f:4d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '4a:a7:f3:61:65:15'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 05:05:15 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:05:15.705 165364 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb  2 05:05:15 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:05:15.706 165364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=031ca08d-19ea-44b4-b1bd-33ab088eb6a6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 05:05:15 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:05:15 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2424002010 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:05:15 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:05:15 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2428002950 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:05:15 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:05:15 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:05:15 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:05:15.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:05:16 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:05:16 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:05:16 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:05:16 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:05:16 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:05:16.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:05:16 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v776: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 05:05:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:05:16 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2408004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:05:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:05:17.114Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb  2 05:05:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:05:17.114Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb  2 05:05:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:05:17.114Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb  2 05:05:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] Optimize plan auto_2026-02-02_10:05:17
Feb  2 05:05:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 05:05:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] do_upmap
Feb  2 05:05:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.meta', '.nfs', 'vms', 'default.rgw.control', 'cephfs.cephfs.data', '.rgw.root', 'volumes', '.mgr', 'images', 'backups']
Feb  2 05:05:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 05:05:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 05:05:17 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 05:05:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:05:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:05:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:05:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:05:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:05:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:05:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:05:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 05:05:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 05:05:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 05:05:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 05:05:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 05:05:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 05:05:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:05:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 05:05:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:05:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb  2 05:05:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:05:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:05:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:05:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:05:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:05:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Feb  2 05:05:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:05:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Feb  2 05:05:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:05:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:05:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:05:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 05:05:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:05:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Feb  2 05:05:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:05:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:05:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:05:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 05:05:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:05:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb  2 05:05:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 05:05:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 05:05:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 05:05:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 05:05:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 05:05:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:05:17 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2400003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:05:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:05:17 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2424002010 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:05:17 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:05:17 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:05:17 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:05:17.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:05:18 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:05:18 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:05:18 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:05:18.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:05:18 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v777: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 05:05:18 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:05:18 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2428002950 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:05:19 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:05:19 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2408004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:05:19 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:05:19 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2400003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:05:19 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:05:19 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:05:19 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:05:19.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:05:19 np0005604790 nova_compute[252672]: 2026-02-02 10:05:19.949 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:05:20 np0005604790 nova_compute[252672]: 2026-02-02 10:05:20.207 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:05:20 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:05:20 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:05:20 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:05:20.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:05:20 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v778: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 05:05:20 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:05:20 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2424002010 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:05:21 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:05:21 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2428002950 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:05:21 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:05:21 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2408004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:05:21 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:05:21 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:05:21 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:05:21.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:05:22 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:05:22 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:05:22 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:05:22 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:05:22.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:05:22 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v779: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 05:05:22 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:05:22 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2400003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:05:23 np0005604790 podman[261718]: 2026-02-02 10:05:23.393602314 +0000 UTC m=+0.107493256 container health_status e39122d9482da8df802204ab6c35fb7c982874580968140e6f06bdfc8eefae36 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb  2 05:05:23 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:05:23 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2424002010 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:05:23 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:05:23 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2428002950 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:05:23 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:05:23 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:05:23 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:05:23.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:05:24 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:05:24 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:05:24 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:05:24.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:05:24 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v780: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:05:24 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:05:24] "GET /metrics HTTP/1.1" 200 48440 "" "Prometheus/2.51.0"
Feb  2 05:05:24 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:05:24] "GET /metrics HTTP/1.1" 200 48440 "" "Prometheus/2.51.0"
Feb  2 05:05:24 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:05:24 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2428002950 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:05:24 np0005604790 nova_compute[252672]: 2026-02-02 10:05:24.987 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:05:25 np0005604790 nova_compute[252672]: 2026-02-02 10:05:25.209 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:05:25 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:05:25 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2400003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:05:25 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:05:25 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24240091b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:05:25 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:05:25 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:05:25 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:05:25.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:05:26 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:05:26 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:05:26 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:05:26.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:05:26 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v781: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 05:05:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:05:26 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2408004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:05:27 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:05:27.115Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb  2 05:05:27 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:05:27.115Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:05:27 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:05:27 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:05:27 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2428002950 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:05:27 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:05:27 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2428002950 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:05:27 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:05:27 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:05:27 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:05:27.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:05:28 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:05:28 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:05:28 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:05:28.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:05:28 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v782: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 05:05:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:05:28 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24240091b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:05:29 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:05:29 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2408004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:05:29 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:05:29 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2400003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:05:29 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:05:29 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:05:29 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:05:29.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:05:30 np0005604790 nova_compute[252672]: 2026-02-02 10:05:30.028 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:05:30 np0005604790 nova_compute[252672]: 2026-02-02 10:05:30.210 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:05:30 np0005604790 podman[261753]: 2026-02-02 10:05:30.340382866 +0000 UTC m=+0.056058325 container health_status 29bea7eb4976451e3dfdb1e9a3aba30b10e64cbce131bf8d09548b4a224f8ee8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Feb  2 05:05:30 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:05:30 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:05:30 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:05:30.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:05:30 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v783: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 05:05:30 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:05:30 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2428002950 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:05:31 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:05:31 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24240091b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:05:31 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:05:31 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2408004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:05:31 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:05:31 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:05:31 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:05:31.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:05:32 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 05:05:32 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 05:05:32 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:05:32 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:05:32 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:05:32 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:05:32.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:05:32 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v784: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 05:05:32 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:05:32 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2408004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:05:33 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:05:33 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2428002950 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:05:33 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:05:33 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f242400a640 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:05:33 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:05:33 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:05:33 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:05:33.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:05:34 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:05:34 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:05:34 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:05:34.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:05:34 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v785: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Feb  2 05:05:34 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:05:34] "GET /metrics HTTP/1.1" 200 48438 "" "Prometheus/2.51.0"
Feb  2 05:05:34 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:05:34] "GET /metrics HTTP/1.1" 200 48438 "" "Prometheus/2.51.0"
Feb  2 05:05:34 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:05:34 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2400003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:05:35 np0005604790 nova_compute[252672]: 2026-02-02 10:05:35.061 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:05:35 np0005604790 nova_compute[252672]: 2026-02-02 10:05:35.213 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:05:35 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:05:35 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2400003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:05:35 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:05:35 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2428002950 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:05:35 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:05:35 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:05:35 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:05:35.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:05:36 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:05:36 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:05:36 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:05:36.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:05:36 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v786: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Feb  2 05:05:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:05:36 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2428002950 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:05:37 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:05:37.116Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:05:37 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:05:37 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:05:37 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f240c003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:05:37 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:05:37 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2400003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:05:37 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:05:37 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:05:37 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:05:37.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:05:38 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:05:38 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:05:38 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:05:38.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:05:38 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v787: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.8 MiB/s wr, 74 op/s
Feb  2 05:05:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-nfs-cephfs-compute-0-ooxkuo[97796]: [WARNING] 032/100538 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Feb  2 05:05:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:05:38 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2408004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:05:39 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:05:39 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2428002950 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:05:39 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:05:39 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f240c003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:05:39 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:05:39 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:05:39 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:05:39.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:05:40 np0005604790 nova_compute[252672]: 2026-02-02 10:05:40.063 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:05:40 np0005604790 nova_compute[252672]: 2026-02-02 10:05:40.214 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:05:40 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:05:40 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:05:40 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:05:40.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:05:40 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v788: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.8 MiB/s wr, 74 op/s
Feb  2 05:05:40 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:05:40 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2400003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:05:40 np0005604790 ovn_controller[154631]: 2026-02-02T10:05:40Z|00057|memory_trim|INFO|Detected inactivity (last active 30005 ms ago): trimming memory
Feb  2 05:05:41 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:05:41 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2408004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:05:41 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:05:41 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2428002950 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:05:41 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:05:41 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:05:41 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:05:41.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:05:42 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:05:42 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:05:42 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:05:42 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:05:42.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:05:42 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v789: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.8 MiB/s wr, 74 op/s
Feb  2 05:05:42 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:05:42 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f240c003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:05:43 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:05:43 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2400003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:05:43 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:05:43 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2408004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:05:43 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:05:43 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:05:43 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:05:43.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:05:44 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:05:44 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:05:44 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:05:44.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:05:44 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v790: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 108 op/s
Feb  2 05:05:44 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:05:44] "GET /metrics HTTP/1.1" 200 48438 "" "Prometheus/2.51.0"
Feb  2 05:05:44 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:05:44] "GET /metrics HTTP/1.1" 200 48438 "" "Prometheus/2.51.0"
Feb  2 05:05:44 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:05:44 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2428002950 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:05:45 np0005604790 nova_compute[252672]: 2026-02-02 10:05:45.064 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:05:45 np0005604790 nova_compute[252672]: 2026-02-02 10:05:45.215 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:05:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:05:45.377 165364 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 05:05:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:05:45.378 165364 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 05:05:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:05:45.378 165364 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 05:05:45 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:05:45 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f240c003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:05:45 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:05:45 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23f8000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:05:45 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:05:45 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:05:45 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:05:45.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:05:46 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:05:46 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:05:46 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:05:46.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:05:46 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v791: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Feb  2 05:05:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:05:46 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2408004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:05:47 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:05:47.118Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:05:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 05:05:47 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 05:05:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:05:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:05:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:05:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:05:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:05:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:05:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:05:47 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:05:47 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2428004620 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:05:47 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:05:47 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f240c003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:05:47 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:05:47 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:05:47 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:05:47.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:05:48 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:05:48 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:05:48 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:05:48.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:05:48 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v792: 353 pgs: 353 active+clean; 109 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.0 MiB/s wr, 101 op/s
Feb  2 05:05:48 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:05:48 : epoch 698076ae : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:05:48 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:05:48 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23f80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:05:49 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:05:49 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2408004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:05:49 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:05:49 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2428004620 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:05:49 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:05:49 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:05:49 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:05:49.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:05:50 np0005604790 nova_compute[252672]: 2026-02-02 10:05:50.066 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:05:50 np0005604790 nova_compute[252672]: 2026-02-02 10:05:50.217 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:05:50 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:05:50 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:05:50 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:05:50.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:05:50 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v793: 353 pgs: 353 active+clean; 109 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.0 MiB/s wr, 62 op/s
Feb  2 05:05:50 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:05:50 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f240c003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:05:51 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:05:51 : epoch 698076ae : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:05:51 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:05:51 : epoch 698076ae : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:05:51 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:05:51 : epoch 698076ae : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:05:51 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:05:51 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23f80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:05:51 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:05:51 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2408004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:05:51 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:05:51 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:05:51 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:05:51.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:05:52 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:05:52 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:05:52 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:05:52 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:05:52.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:05:52 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v794: 353 pgs: 353 active+clean; 109 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.0 MiB/s wr, 62 op/s
Feb  2 05:05:52 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:05:52 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2428004620 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:05:53 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:05:53 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f240c003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:05:53 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:05:53 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23f80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:05:53 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:05:53 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:05:53 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:05:53.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:05:54 np0005604790 podman[261825]: 2026-02-02 10:05:54.363864551 +0000 UTC m=+0.085069462 container health_status e39122d9482da8df802204ab6c35fb7c982874580968140e6f06bdfc8eefae36 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
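
This health_status=healthy event is podman running the container's configured check (the `test: /openstack/healthcheck` entry in config_data above) and recording the result. The same check can be re-run and read back by hand; `.State.Health` is the field name in current podman releases (older ones exposed `.State.Healthcheck`, so this is a version-dependent assumption):

    import json
    import subprocess

    # re-run the configured check; exit status mirrors healthy/unhealthy
    subprocess.run(["podman", "healthcheck", "run", "ovn_controller"], check=False)
    out = subprocess.check_output(
        ["podman", "inspect", "--format", "{{json .State.Health}}", "ovn_controller"])
    health = json.loads(out)
    print(health["Status"], health["FailingStreak"])  # e.g. "healthy", 0
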
Feb  2 05:05:54 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:05:54 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:05:54 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:05:54.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:05:54 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v795: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.1 MiB/s wr, 101 op/s
Feb  2 05:05:54 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:05:54 : epoch 698076ae : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
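
Taken together, the reaper lines trace one full grace cycle: IN GRACE at 10:05:48 with a 90-second budget, client info reloaded from the backend, reclaim checked with a zero client count, and NOT IN GRACE at 10:05:54, so the window was lifted early. A rough sketch that recovers that window from journal lines of exactly this format; the dd/mm/yyyy reading of ganesha's timestamp is an assumption:

    import re
    from datetime import datetime

    START = re.compile(r'(\d{2}/\d{2}/\d{4} \d{2}:\d{2}:\d{2}) .*NFS Server Now IN GRACE')
    END = re.compile(r'(\d{2}/\d{2}/\d{4} \d{2}:\d{2}:\d{2}) .*NFS Server Now NOT IN GRACE')
    FMT = "%d/%m/%Y %H:%M:%S"  # assumed day/month order

    def grace_window(lines):
        start = end = None
        for line in lines:
            if m := START.search(line):
                start = datetime.strptime(m.group(1), FMT)
            elif (m := END.search(line)) and start:
                end = datetime.strptime(m.group(1), FMT)
        return (end - start).total_seconds() if start and end else None
    # on the lines above: 6.0 seconds, well under the 90-second budget
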
Feb  2 05:05:54 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:05:54] "GET /metrics HTTP/1.1" 200 48466 "" "Prometheus/2.51.0"
Feb  2 05:05:54 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:05:54] "GET /metrics HTTP/1.1" 200 48466 "" "Prometheus/2.51.0"
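
Both access lines record the same Prometheus scrape of the mgr prometheus module, once via the container's stdout and once via cherrypy; the 10-second cadence shows again at 10:06:04 below. Fetching the endpoint directly looks like this; port 9283 is the module's usual default and an assumption here:

    import urllib.request

    with urllib.request.urlopen("http://192.168.122.100:9283/metrics", timeout=5) as r:
        body = r.read().decode()
    samples = [l for l in body.splitlines() if l and not l.startswith("#")]
    print(len(samples), "samples")   # the scrape above returned ~48 KB of exposition text
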
Feb  2 05:05:54 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:05:54 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23f80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:05:55 np0005604790 nova_compute[252672]: 2026-02-02 10:05:55.069 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:05:55 np0005604790 nova_compute[252672]: 2026-02-02 10:05:55.220 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:05:55 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:05:55 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2428004620 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:05:55 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:05:55 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f240c003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:05:55 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:05:55 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:05:55 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:05:55.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:05:56 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:05:56 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:05:56 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:05:56.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:05:56 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v796: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Feb  2 05:05:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:05:56 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f240c003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:05:57 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:05:57.119Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
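
Alertmanager is failing to deliver to the ceph-dashboard webhook receivers on compute-1 (i/o timeout) and compute-2 (context deadline exceeded), both on port 8443. A bare TCP probe of the same endpoints separates a host silently dropping packets (connect timeout, as the errors above suggest) from a dashboard that is simply not listening (immediate refusal):

    import socket

    for host in ("compute-1.ctlplane.example.com", "compute-2.ctlplane.example.com"):
        s = socket.socket()
        s.settimeout(3)
        try:
            s.connect((host, 8443))
            print(host, "tcp 8443 open")
        except OSError as e:
            print(host, "unreachable:", e)
        finally:
            s.close()
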
Feb  2 05:05:57 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:05:57 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:05:57 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f240c003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:05:57 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:05:57 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2428004620 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:05:57 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:05:57 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:05:57 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:05:57.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:05:58 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:05:58 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:05:58 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:05:58.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:05:58 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v797: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 335 KiB/s rd, 2.1 MiB/s wr, 69 op/s
Feb  2 05:05:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:05:58 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2428004620 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:05:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:05:59 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2408004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:05:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:05:59 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f240c003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:05:59 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:05:59 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:05:59 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:05:59.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:06:00 np0005604790 nova_compute[252672]: 2026-02-02 10:06:00.071 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:06:00 np0005604790 nova_compute[252672]: 2026-02-02 10:06:00.240 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:06:00 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:06:00 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:06:00 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:06:00.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:06:00 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v798: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 231 KiB/s rd, 109 KiB/s wr, 41 op/s
Feb  2 05:06:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-nfs-cephfs-compute-0-ooxkuo[97796]: [WARNING] 032/100600 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
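
This Layer4 check is the likely source of the recurring svc_vc_recv "proxy header" events throughout this section: haproxy validates the backend with a bare TCP connect, while ganesha's ntirpc expects a PROXY protocol header on that listener and marks the headerless connection dead. A hedged probe that sends a well-formed PROXY v1 preamble instead; the loopback address and port 2049 are assumptions about where this ganesha actually listens:

    import socket

    s = socket.create_connection(("127.0.0.1", 2049), timeout=3)
    try:
        # PROXY v1: proto, client ip, proxy ip, client port, server port
        s.sendall(b"PROXY TCP4 192.168.122.100 192.168.122.100 40000 2049\r\n")
        # a real client would start NFS RPC here; the point is only that a
        # well-formed header should not trigger the "will set dead" event
    finally:
        s.close()
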
Feb  2 05:06:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:06:00 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23f8002f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:06:01 np0005604790 podman[261882]: 2026-02-02 10:06:01.373184588 +0000 UTC m=+0.082315889 container health_status 29bea7eb4976451e3dfdb1e9a3aba30b10e64cbce131bf8d09548b4a224f8ee8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Feb  2 05:06:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:06:01 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2428004620 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:06:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:06:01 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24080041a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:06:01 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:06:01 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:06:01 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:06:01.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:06:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 05:06:02 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
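
The audit line shows the mgr dispatching {"prefix": "osd blocklist ls"} as a mon command. Any authorized client can issue the identical call through librados; the client name and conf path below are taken from elsewhere in this log and may not actually hold the caps needed to read the blocklist:

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.openstack")
    cluster.connect()
    ret, out, err = cluster.mon_command(
        json.dumps({"prefix": "osd blocklist ls", "format": "json"}), b"")
    print(ret, out.decode() or err)
    cluster.shutdown()
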
Feb  2 05:06:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:06:02 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:06:02 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:06:02 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:06:02.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:06:02 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v799: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 231 KiB/s rd, 109 KiB/s wr, 41 op/s
Feb  2 05:06:02 np0005604790 ceph-mon[74489]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Feb  2 05:06:02 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:06:02.714751) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb  2 05:06:02 np0005604790 ceph-mon[74489]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Feb  2 05:06:02 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770026762714805, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 904, "num_deletes": 251, "total_data_size": 1426417, "memory_usage": 1449008, "flush_reason": "Manual Compaction"}
Feb  2 05:06:02 np0005604790 ceph-mon[74489]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Feb  2 05:06:02 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770026762734304, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 1411299, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23878, "largest_seqno": 24781, "table_properties": {"data_size": 1406868, "index_size": 2083, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 10028, "raw_average_key_size": 19, "raw_value_size": 1397892, "raw_average_value_size": 2751, "num_data_blocks": 93, "num_entries": 508, "num_filter_entries": 508, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770026687, "oldest_key_time": 1770026687, "file_creation_time": 1770026762, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "07840aea-639a-4cd3-a598-1774a042b57b", "db_session_id": "W2PO4QU95YGVZQBG6TZ2", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Feb  2 05:06:02 np0005604790 ceph-mon[74489]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 19645 microseconds, and 7370 cpu microseconds.
Feb  2 05:06:02 np0005604790 ceph-mon[74489]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 05:06:02 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:06:02.734392) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 1411299 bytes OK
Feb  2 05:06:02 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:06:02.734427) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Feb  2 05:06:02 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:06:02.737575) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Feb  2 05:06:02 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:06:02.737600) EVENT_LOG_v1 {"time_micros": 1770026762737592, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb  2 05:06:02 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:06:02.737631) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb  2 05:06:02 np0005604790 ceph-mon[74489]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 1422130, prev total WAL file size 1422130, number of live WAL files 2.
Feb  2 05:06:02 np0005604790 ceph-mon[74489]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 05:06:02 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:06:02.738300) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Feb  2 05:06:02 np0005604790 ceph-mon[74489]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb  2 05:06:02 np0005604790 ceph-mon[74489]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(1378KB)], [53(12MB)]
Feb  2 05:06:02 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770026762738376, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 14532628, "oldest_snapshot_seqno": -1}
Feb  2 05:06:02 np0005604790 ceph-mon[74489]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 5404 keys, 12364269 bytes, temperature: kUnknown
Feb  2 05:06:02 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770026762846604, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 12364269, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12328677, "index_size": 20954, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13573, "raw_key_size": 138881, "raw_average_key_size": 25, "raw_value_size": 12231417, "raw_average_value_size": 2263, "num_data_blocks": 851, "num_entries": 5404, "num_filter_entries": 5404, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770025063, "oldest_key_time": 0, "file_creation_time": 1770026762, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "07840aea-639a-4cd3-a598-1774a042b57b", "db_session_id": "W2PO4QU95YGVZQBG6TZ2", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Feb  2 05:06:02 np0005604790 ceph-mon[74489]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 05:06:02 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:06:02.846994) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 12364269 bytes
Feb  2 05:06:02 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:06:02.848565) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 134.2 rd, 114.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 12.5 +0.0 blob) out(11.8 +0.0 blob), read-write-amplify(19.1) write-amplify(8.8) OK, records in: 5920, records dropped: 516 output_compression: NoCompression
Feb  2 05:06:02 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:06:02.848597) EVENT_LOG_v1 {"time_micros": 1770026762848582, "job": 28, "event": "compaction_finished", "compaction_time_micros": 108330, "compaction_time_cpu_micros": 36672, "output_level": 6, "num_output_files": 1, "total_output_size": 12364269, "num_input_records": 5920, "num_output_records": 5404, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb  2 05:06:02 np0005604790 ceph-mon[74489]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 05:06:02 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770026762848964, "job": 28, "event": "table_file_deletion", "file_number": 55}
Feb  2 05:06:02 np0005604790 ceph-mon[74489]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 05:06:02 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770026762852003, "job": 28, "event": "table_file_deletion", "file_number": 53}
Feb  2 05:06:02 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:06:02.738161) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 05:06:02 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:06:02.852078) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 05:06:02 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:06:02.852086) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 05:06:02 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:06:02.852089) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 05:06:02 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:06:02.852092) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 05:06:02 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:06:02.852095) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
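
The block above is one flush-plus-manual-compaction round in the mon's RocksDB store: memtable #55 flushed to L0 (job 27), then job 28 compacted 1@0 + 1@6 into table #56, dropping 516 of 5920 records. Each EVENT_LOG_v1 record is plain JSON after a fixed marker, so the journal can be mined directly; the sketch assumes only the line format seen here, and the log path is an assumption:

    import json

    MARK = "EVENT_LOG_v1 "

    def rocksdb_events(lines):
        for line in lines:
            i = line.find(MARK)
            if i != -1:
                yield json.loads(line[i + len(MARK):])

    for ev in rocksdb_events(open("/var/log/messages")):  # path assumed
        if ev.get("event") == "compaction_finished":
            # job 28 above: 12364269 bytes out in 108330 us
            print(ev["job"], ev["total_output_size"], ev["compaction_time_micros"])
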
Feb  2 05:06:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:06:02 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f240c003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:06:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:06:03 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23f8002f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:06:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:06:03 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2428004620 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:06:03 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:06:03 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:06:03 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:06:03.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:06:04 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:06:04 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:06:04 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:06:04.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:06:04 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v800: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 231 KiB/s rd, 112 KiB/s wr, 41 op/s
Feb  2 05:06:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:06:04] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Feb  2 05:06:04 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:06:04] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Feb  2 05:06:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:06:04 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24080041c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:06:05 np0005604790 nova_compute[252672]: 2026-02-02 10:06:05.074 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:06:05 np0005604790 nova_compute[252672]: 2026-02-02 10:06:05.276 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:06:05 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:06:05 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f240c003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:06:05 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:06:05 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f240c003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:06:05 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:06:05 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:06:05 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:06:05.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:06:06 np0005604790 nova_compute[252672]: 2026-02-02 10:06:06.035 252676 DEBUG oslo_concurrency.lockutils [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Acquiring lock "987bf707-685e-40f6-9dc2-ff3b606ae75d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 05:06:06 np0005604790 nova_compute[252672]: 2026-02-02 10:06:06.036 252676 DEBUG oslo_concurrency.lockutils [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Lock "987bf707-685e-40f6-9dc2-ff3b606ae75d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 05:06:06 np0005604790 nova_compute[252672]: 2026-02-02 10:06:06.063 252676 DEBUG nova.compute.manager [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 987bf707-685e-40f6-9dc2-ff3b606ae75d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb  2 05:06:06 np0005604790 nova_compute[252672]: 2026-02-02 10:06:06.163 252676 DEBUG oslo_concurrency.lockutils [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 05:06:06 np0005604790 nova_compute[252672]: 2026-02-02 10:06:06.164 252676 DEBUG oslo_concurrency.lockutils [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 05:06:06 np0005604790 nova_compute[252672]: 2026-02-02 10:06:06.174 252676 DEBUG nova.virt.hardware [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb  2 05:06:06 np0005604790 nova_compute[252672]: 2026-02-02 10:06:06.174 252676 INFO nova.compute.claims [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 987bf707-685e-40f6-9dc2-ff3b606ae75d] Claim successful on node compute-0.ctlplane.example.com#033[00m
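
The Acquiring/acquired pairs above (and the "released" lines further down) are oslo.concurrency's lockutils at work: the build is serialized per instance UUID, and resource claims per "compute_resources", with the waited/held timings logged automatically. The decorator form that produces exactly this accounting is a one-liner:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("987bf707-685e-40f6-9dc2-ff3b606ae75d")
    def _locked_do_build():
        # runs with the per-instance lock held, as
        # _locked_do_build_and_run_instance does in the log
        pass
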
Feb  2 05:06:06 np0005604790 nova_compute[252672]: 2026-02-02 10:06:06.281 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 05:06:06 np0005604790 nova_compute[252672]: 2026-02-02 10:06:06.289 252676 DEBUG oslo_concurrency.processutils [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 05:06:06 np0005604790 nova_compute[252672]: 2026-02-02 10:06:06.309 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 05:06:06 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:06:06 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:06:06 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:06:06.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:06:06 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v801: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 6.2 KiB/s rd, 16 KiB/s wr, 1 op/s
Feb  2 05:06:06 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 05:06:06 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/515216054' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb  2 05:06:06 np0005604790 nova_compute[252672]: 2026-02-02 10:06:06.794 252676 DEBUG oslo_concurrency.processutils [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
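
That 0.506 s subprocess is nova's RBD image backend asking Ceph for pool capacity. The same probe stands alone as follows; the command line is copied from the log, and the stats keys follow `ceph df`'s JSON schema:

    import json
    import subprocess

    raw = subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"])
    df = json.loads(raw)
    print(df["stats"]["total_avail_bytes"], "bytes available")
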
Feb  2 05:06:06 np0005604790 nova_compute[252672]: 2026-02-02 10:06:06.802 252676 DEBUG nova.compute.provider_tree [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Inventory has not changed in ProviderTree for provider: 9e3db6fc-2145-4f13-bc7c-f7ae57d4e004 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 05:06:06 np0005604790 nova_compute[252672]: 2026-02-02 10:06:06.823 252676 DEBUG nova.scheduler.client.report [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Inventory has not changed for provider 9e3db6fc-2145-4f13-bc7c-f7ae57d4e004 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 05:06:06 np0005604790 nova_compute[252672]: 2026-02-02 10:06:06.856 252676 DEBUG oslo_concurrency.lockutils [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.692s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 05:06:06 np0005604790 nova_compute[252672]: 2026-02-02 10:06:06.857 252676 DEBUG nova.compute.manager [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 987bf707-685e-40f6-9dc2-ff3b606ae75d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb  2 05:06:06 np0005604790 nova_compute[252672]: 2026-02-02 10:06:06.862 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.553s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 05:06:06 np0005604790 nova_compute[252672]: 2026-02-02 10:06:06.862 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 05:06:06 np0005604790 nova_compute[252672]: 2026-02-02 10:06:06.863 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb  2 05:06:06 np0005604790 nova_compute[252672]: 2026-02-02 10:06:06.863 252676 DEBUG oslo_concurrency.processutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 05:06:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:06:06 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2428004620 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:06:06 np0005604790 nova_compute[252672]: 2026-02-02 10:06:06.927 252676 DEBUG nova.compute.manager [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 987bf707-685e-40f6-9dc2-ff3b606ae75d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb  2 05:06:06 np0005604790 nova_compute[252672]: 2026-02-02 10:06:06.928 252676 DEBUG nova.network.neutron [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 987bf707-685e-40f6-9dc2-ff3b606ae75d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb  2 05:06:06 np0005604790 nova_compute[252672]: 2026-02-02 10:06:06.948 252676 INFO nova.virt.libvirt.driver [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 987bf707-685e-40f6-9dc2-ff3b606ae75d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb  2 05:06:06 np0005604790 nova_compute[252672]: 2026-02-02 10:06:06.972 252676 DEBUG nova.compute.manager [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 987bf707-685e-40f6-9dc2-ff3b606ae75d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb  2 05:06:07 np0005604790 nova_compute[252672]: 2026-02-02 10:06:07.077 252676 DEBUG nova.compute.manager [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 987bf707-685e-40f6-9dc2-ff3b606ae75d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb  2 05:06:07 np0005604790 nova_compute[252672]: 2026-02-02 10:06:07.080 252676 DEBUG nova.virt.libvirt.driver [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 987bf707-685e-40f6-9dc2-ff3b606ae75d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb  2 05:06:07 np0005604790 nova_compute[252672]: 2026-02-02 10:06:07.081 252676 INFO nova.virt.libvirt.driver [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 987bf707-685e-40f6-9dc2-ff3b606ae75d] Creating image(s)#033[00m
Feb  2 05:06:07 np0005604790 nova_compute[252672]: 2026-02-02 10:06:07.119 252676 DEBUG nova.storage.rbd_utils [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] rbd image 987bf707-685e-40f6-9dc2-ff3b606ae75d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 05:06:07 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:06:07.120Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:06:07 np0005604790 nova_compute[252672]: 2026-02-02 10:06:07.161 252676 DEBUG nova.storage.rbd_utils [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] rbd image 987bf707-685e-40f6-9dc2-ff3b606ae75d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 05:06:07 np0005604790 nova_compute[252672]: 2026-02-02 10:06:07.203 252676 DEBUG nova.storage.rbd_utils [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] rbd image 987bf707-685e-40f6-9dc2-ff3b606ae75d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 05:06:07 np0005604790 nova_compute[252672]: 2026-02-02 10:06:07.209 252676 DEBUG oslo_concurrency.processutils [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b48fe8b86a7168723be684d0fce89ef3f0abcc61 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 05:06:07 np0005604790 nova_compute[252672]: 2026-02-02 10:06:07.288 252676 DEBUG oslo_concurrency.processutils [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b48fe8b86a7168723be684d0fce89ef3f0abcc61 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
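
The qemu-img probe above runs under `oslo_concurrency.prlimit` with a 1 GiB address-space cap and 30 s of CPU, a guard against malformed images blowing up the info parser. The library exposes the same guard directly, a sketch of which (command and path copied from the log) is:

    import json
    from oslo_concurrency import processutils

    limits = processutils.ProcessLimits(address_space=1073741824, cpu_time=30)
    out, _err = processutils.execute(
        "env", "LC_ALL=C", "LANG=C", "qemu-img", "info",
        "/var/lib/nova/instances/_base/b48fe8b86a7168723be684d0fce89ef3f0abcc61",
        "--force-share", "--output=json", prlimit=limits)
    print(json.loads(out)["format"])   # e.g. "raw" or "qcow2"
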
Feb  2 05:06:07 np0005604790 nova_compute[252672]: 2026-02-02 10:06:07.289 252676 DEBUG oslo_concurrency.lockutils [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Acquiring lock "b48fe8b86a7168723be684d0fce89ef3f0abcc61" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 05:06:07 np0005604790 nova_compute[252672]: 2026-02-02 10:06:07.290 252676 DEBUG oslo_concurrency.lockutils [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Lock "b48fe8b86a7168723be684d0fce89ef3f0abcc61" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 05:06:07 np0005604790 nova_compute[252672]: 2026-02-02 10:06:07.291 252676 DEBUG oslo_concurrency.lockutils [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Lock "b48fe8b86a7168723be684d0fce89ef3f0abcc61" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 05:06:07 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:06:07 np0005604790 nova_compute[252672]: 2026-02-02 10:06:07.328 252676 DEBUG nova.storage.rbd_utils [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] rbd image 987bf707-685e-40f6-9dc2-ff3b606ae75d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 05:06:07 np0005604790 nova_compute[252672]: 2026-02-02 10:06:07.335 252676 DEBUG oslo_concurrency.processutils [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b48fe8b86a7168723be684d0fce89ef3f0abcc61 987bf707-685e-40f6-9dc2-ff3b606ae75d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 05:06:07 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 05:06:07 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3091021254' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb  2 05:06:07 np0005604790 nova_compute[252672]: 2026-02-02 10:06:07.392 252676 DEBUG nova.policy [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1b1695a2a70d4aa0aa350ba17d8f6d5e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'efbfe697ca674d72b47da5adf3e42c0c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb  2 05:06:07 np0005604790 nova_compute[252672]: 2026-02-02 10:06:07.413 252676 DEBUG oslo_concurrency.processutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 05:06:07 np0005604790 nova_compute[252672]: 2026-02-02 10:06:07.613 252676 WARNING nova.virt.libvirt.driver [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 05:06:07 np0005604790 nova_compute[252672]: 2026-02-02 10:06:07.614 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4542MB free_disk=59.94271469116211GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb  2 05:06:07 np0005604790 nova_compute[252672]: 2026-02-02 10:06:07.615 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 05:06:07 np0005604790 nova_compute[252672]: 2026-02-02 10:06:07.615 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 05:06:07 np0005604790 nova_compute[252672]: 2026-02-02 10:06:07.628 252676 DEBUG oslo_concurrency.processutils [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b48fe8b86a7168723be684d0fce89ef3f0abcc61 987bf707-685e-40f6-9dc2-ff3b606ae75d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.293s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 05:06:07 np0005604790 nova_compute[252672]: 2026-02-02 10:06:07.716 252676 DEBUG nova.storage.rbd_utils [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] resizing rbd image 987bf707-685e-40f6-9dc2-ff3b606ae75d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
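The import/resize pair above is nova's RBD image backend copying the cached base image into the vms pool and growing the clone to the flavor's root disk: 1073741824 bytes is exactly 1 GiB, matching root_gb=1 on the m1.nano flavor that appears later in this log. A sketch of the resize step with the python-rbd binding; pool, image name, and client id come from the log, but the surrounding code is an illustration, not nova's exact path:

    # Resize an RBD image to 1 GiB, as nova.storage.rbd_utils logs above.
    import rados
    import rbd

    GiB = 1024 ** 3  # 1073741824 bytes, the target size in the log

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                          rados_id='openstack')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('vms')
        try:
            with rbd.Image(ioctx,
                           '987bf707-685e-40f6-9dc2-ff3b606ae75d_disk') as im:
                if im.size() < 1 * GiB:  # only grow; shrinking loses data
                    im.resize(1 * GiB)
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()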
Feb  2 05:06:07 np0005604790 nova_compute[252672]: 2026-02-02 10:06:07.755 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Instance 987bf707-685e-40f6-9dc2-ff3b606ae75d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb  2 05:06:07 np0005604790 nova_compute[252672]: 2026-02-02 10:06:07.755 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb  2 05:06:07 np0005604790 nova_compute[252672]: 2026-02-02 10:06:07.755 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb  2 05:06:07 np0005604790 nova_compute[252672]: 2026-02-02 10:06:07.783 252676 DEBUG oslo_concurrency.processutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
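The "Running cmd (subprocess)" and 'CMD "..." returned: 0 in N.NNNs' pairs come from oslo.concurrency's processutils wrapper, which logs the command before spawning it and the exit code and wall time afterwards. A minimal sketch of the same call, assuming the ceph CLI and the client.openstack keyring are reachable:

    # The processutils call behind the ceph df lines in this log.
    import json
    from oslo_concurrency import processutils

    out, err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)
    # Nova's RBD driver reads cluster capacity out of this document.
    print(stats['stats']['total_bytes'], stats['stats']['total_avail_bytes'])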
Feb  2 05:06:07 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:06:07 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2428004620 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:06:07 np0005604790 nova_compute[252672]: 2026-02-02 10:06:07.853 252676 DEBUG nova.objects.instance [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Lazy-loading 'migration_context' on Instance uuid 987bf707-685e-40f6-9dc2-ff3b606ae75d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 05:06:07 np0005604790 nova_compute[252672]: 2026-02-02 10:06:07.867 252676 DEBUG nova.virt.libvirt.driver [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 987bf707-685e-40f6-9dc2-ff3b606ae75d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb  2 05:06:07 np0005604790 nova_compute[252672]: 2026-02-02 10:06:07.867 252676 DEBUG nova.virt.libvirt.driver [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 987bf707-685e-40f6-9dc2-ff3b606ae75d] Ensure instance console log exists: /var/lib/nova/instances/987bf707-685e-40f6-9dc2-ff3b606ae75d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb  2 05:06:07 np0005604790 nova_compute[252672]: 2026-02-02 10:06:07.868 252676 DEBUG oslo_concurrency.lockutils [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 05:06:07 np0005604790 nova_compute[252672]: 2026-02-02 10:06:07.868 252676 DEBUG oslo_concurrency.lockutils [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 05:06:07 np0005604790 nova_compute[252672]: 2026-02-02 10:06:07.868 252676 DEBUG oslo_concurrency.lockutils [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 05:06:07 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:06:07 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24240089d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:06:07 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:06:07 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:06:07 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:06:07.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
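The anonymous "HEAD / HTTP/1.0" requests that beast logs every second or two, alternating between 192.168.122.100 and 192.168.122.102, have the shape of load-balancer health probes. A sketch that reproduces one; the radosgw address and port here are assumptions, since the log records only the probing clients:

    # Reproduce one anonymous HEAD probe against radosgw's beast frontend.
    # Host and port are assumed; the log does not show the listen address.
    import http.client

    conn = http.client.HTTPConnection('192.168.122.100', 8080, timeout=2)
    conn.request('HEAD', '/')
    print(conn.getresponse().status)  # 200, matching http_status=200 above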
Feb  2 05:06:08 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 05:06:08 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1238960955' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb  2 05:06:08 np0005604790 nova_compute[252672]: 2026-02-02 10:06:08.306 252676 DEBUG oslo_concurrency.processutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 05:06:08 np0005604790 nova_compute[252672]: 2026-02-02 10:06:08.313 252676 DEBUG nova.compute.provider_tree [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Inventory has not changed in ProviderTree for provider: 9e3db6fc-2145-4f13-bc7c-f7ae57d4e004 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 05:06:08 np0005604790 nova_compute[252672]: 2026-02-02 10:06:08.347 252676 DEBUG nova.scheduler.client.report [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Inventory has not changed for provider 9e3db6fc-2145-4f13-bc7c-f7ae57d4e004 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
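Placement turns that inventory into schedulable capacity as (total - reserved) * allocation_ratio. Worked through with the numbers above:

    # Usable capacity per resource class from the logged inventory.
    inventory = {
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # MEMORY_MB 7167.0, VCPU 32.0, DISK_GB 52.2 -- the one m1.nano guest
    # in this log (1 VCPU / 128 MB / 1 GB) barely dents any of them.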
Feb  2 05:06:08 np0005604790 nova_compute[252672]: 2026-02-02 10:06:08.387 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb  2 05:06:08 np0005604790 nova_compute[252672]: 2026-02-02 10:06:08.388 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.773s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 05:06:08 np0005604790 nova_compute[252672]: 2026-02-02 10:06:08.408 252676 DEBUG nova.network.neutron [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 987bf707-685e-40f6-9dc2-ff3b606ae75d] Successfully created port: 3053bb96-bda8-4bde-ab6b-d64a2b4bb32a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb  2 05:06:08 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:06:08 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:06:08 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:06:08.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:06:08 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v802: 353 pgs: 353 active+clean; 167 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Feb  2 05:06:08 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:06:08 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2404000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:06:09 np0005604790 nova_compute[252672]: 2026-02-02 10:06:09.385 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 05:06:09 np0005604790 nova_compute[252672]: 2026-02-02 10:06:09.386 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 05:06:09 np0005604790 nova_compute[252672]: 2026-02-02 10:06:09.420 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 05:06:09 np0005604790 nova_compute[252672]: 2026-02-02 10:06:09.421 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb  2 05:06:09 np0005604790 nova_compute[252672]: 2026-02-02 10:06:09.421 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb  2 05:06:09 np0005604790 nova_compute[252672]: 2026-02-02 10:06:09.446 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] [instance: 987bf707-685e-40f6-9dc2-ff3b606ae75d] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Feb  2 05:06:09 np0005604790 nova_compute[252672]: 2026-02-02 10:06:09.446 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb  2 05:06:09 np0005604790 nova_compute[252672]: 2026-02-02 10:06:09.447 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 05:06:09 np0005604790 nova_compute[252672]: 2026-02-02 10:06:09.447 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 05:06:09 np0005604790 nova_compute[252672]: 2026-02-02 10:06:09.448 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 05:06:09 np0005604790 nova_compute[252672]: 2026-02-02 10:06:09.448 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 05:06:09 np0005604790 nova_compute[252672]: 2026-02-02 10:06:09.448 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb  2 05:06:09 np0005604790 nova_compute[252672]: 2026-02-02 10:06:09.562 252676 DEBUG nova.network.neutron [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 987bf707-685e-40f6-9dc2-ff3b606ae75d] Successfully updated port: 3053bb96-bda8-4bde-ab6b-d64a2b4bb32a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb  2 05:06:09 np0005604790 nova_compute[252672]: 2026-02-02 10:06:09.580 252676 DEBUG oslo_concurrency.lockutils [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Acquiring lock "refresh_cache-987bf707-685e-40f6-9dc2-ff3b606ae75d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 05:06:09 np0005604790 nova_compute[252672]: 2026-02-02 10:06:09.581 252676 DEBUG oslo_concurrency.lockutils [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Acquired lock "refresh_cache-987bf707-685e-40f6-9dc2-ff3b606ae75d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 05:06:09 np0005604790 nova_compute[252672]: 2026-02-02 10:06:09.581 252676 DEBUG nova.network.neutron [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 987bf707-685e-40f6-9dc2-ff3b606ae75d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb  2 05:06:09 np0005604790 nova_compute[252672]: 2026-02-02 10:06:09.686 252676 DEBUG nova.compute.manager [req-1d13a78c-b2c4-4410-902b-772a9b80e525 req-28c67dde-ebd8-4d71-b222-a509669d6d5f b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 987bf707-685e-40f6-9dc2-ff3b606ae75d] Received event network-changed-3053bb96-bda8-4bde-ab6b-d64a2b4bb32a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 05:06:09 np0005604790 nova_compute[252672]: 2026-02-02 10:06:09.686 252676 DEBUG nova.compute.manager [req-1d13a78c-b2c4-4410-902b-772a9b80e525 req-28c67dde-ebd8-4d71-b222-a509669d6d5f b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 987bf707-685e-40f6-9dc2-ff3b606ae75d] Refreshing instance network info cache due to event network-changed-3053bb96-bda8-4bde-ab6b-d64a2b4bb32a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 05:06:09 np0005604790 nova_compute[252672]: 2026-02-02 10:06:09.687 252676 DEBUG oslo_concurrency.lockutils [req-1d13a78c-b2c4-4410-902b-772a9b80e525 req-28c67dde-ebd8-4d71-b222-a509669d6d5f b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Acquiring lock "refresh_cache-987bf707-685e-40f6-9dc2-ff3b606ae75d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 05:06:09 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:06:09 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23f8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:06:09 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:06:09 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2428004620 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:06:09 np0005604790 nova_compute[252672]: 2026-02-02 10:06:09.919 252676 DEBUG nova.network.neutron [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 987bf707-685e-40f6-9dc2-ff3b606ae75d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb  2 05:06:09 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:06:09 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:06:09 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:06:09.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:06:10 np0005604790 nova_compute[252672]: 2026-02-02 10:06:10.083 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:06:10 np0005604790 nova_compute[252672]: 2026-02-02 10:06:10.279 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:06:10 np0005604790 nova_compute[252672]: 2026-02-02 10:06:10.283 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 05:06:10 np0005604790 nova_compute[252672]: 2026-02-02 10:06:10.283 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 05:06:10 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:06:10 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:06:10 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:06:10.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:06:10 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v803: 353 pgs: 353 active+clean; 167 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb  2 05:06:10 np0005604790 nova_compute[252672]: 2026-02-02 10:06:10.704 252676 DEBUG nova.network.neutron [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 987bf707-685e-40f6-9dc2-ff3b606ae75d] Updating instance_info_cache with network_info: [{"id": "3053bb96-bda8-4bde-ab6b-d64a2b4bb32a", "address": "fa:16:3e:d0:42:cd", "network": {"id": "a66b06ac-62ee-43ce-a46e-36641cc6c6b6", "bridge": "br-int", "label": "tempest-network-smoke--1210238442", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "efbfe697ca674d72b47da5adf3e42c0c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3053bb96-bd", "ovs_interfaceid": "3053bb96-bda8-4bde-ab6b-d64a2b4bb32a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
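That network_info blob is nova's cached view of the Neutron port; the guest's fixed address sits a few levels down. A sketch that walks the structure, trimmed to just the fields it touches:

    # Walk the cached network_info down to the fixed IP.
    network_info = [{
        "id": "3053bb96-bda8-4bde-ab6b-d64a2b4bb32a",
        "address": "fa:16:3e:d0:42:cd",
        "network": {"subnets": [{"cidr": "10.100.0.0/28",
                                 "ips": [{"address": "10.100.0.5",
                                          "type": "fixed"}]}]},
    }]
    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                if ip["type"] == "fixed":
                    print(vif["id"], "->", ip["address"])  # 10.100.0.5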
Feb  2 05:06:10 np0005604790 nova_compute[252672]: 2026-02-02 10:06:10.779 252676 DEBUG oslo_concurrency.lockutils [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Releasing lock "refresh_cache-987bf707-685e-40f6-9dc2-ff3b606ae75d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 05:06:10 np0005604790 nova_compute[252672]: 2026-02-02 10:06:10.780 252676 DEBUG nova.compute.manager [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 987bf707-685e-40f6-9dc2-ff3b606ae75d] Instance network_info: |[{"id": "3053bb96-bda8-4bde-ab6b-d64a2b4bb32a", "address": "fa:16:3e:d0:42:cd", "network": {"id": "a66b06ac-62ee-43ce-a46e-36641cc6c6b6", "bridge": "br-int", "label": "tempest-network-smoke--1210238442", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "efbfe697ca674d72b47da5adf3e42c0c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3053bb96-bd", "ovs_interfaceid": "3053bb96-bda8-4bde-ab6b-d64a2b4bb32a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb  2 05:06:10 np0005604790 nova_compute[252672]: 2026-02-02 10:06:10.780 252676 DEBUG oslo_concurrency.lockutils [req-1d13a78c-b2c4-4410-902b-772a9b80e525 req-28c67dde-ebd8-4d71-b222-a509669d6d5f b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Acquired lock "refresh_cache-987bf707-685e-40f6-9dc2-ff3b606ae75d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 05:06:10 np0005604790 nova_compute[252672]: 2026-02-02 10:06:10.781 252676 DEBUG nova.network.neutron [req-1d13a78c-b2c4-4410-902b-772a9b80e525 req-28c67dde-ebd8-4d71-b222-a509669d6d5f b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 987bf707-685e-40f6-9dc2-ff3b606ae75d] Refreshing network info cache for port 3053bb96-bda8-4bde-ab6b-d64a2b4bb32a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 05:06:10 np0005604790 nova_compute[252672]: 2026-02-02 10:06:10.788 252676 DEBUG nova.virt.libvirt.driver [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 987bf707-685e-40f6-9dc2-ff3b606ae75d] Start _get_guest_xml network_info=[{"id": "3053bb96-bda8-4bde-ab6b-d64a2b4bb32a", "address": "fa:16:3e:d0:42:cd", "network": {"id": "a66b06ac-62ee-43ce-a46e-36641cc6c6b6", "bridge": "br-int", "label": "tempest-network-smoke--1210238442", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "efbfe697ca674d72b47da5adf3e42c0c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3053bb96-bd", "ovs_interfaceid": "3053bb96-bda8-4bde-ab6b-d64a2b4bb32a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T10:01:42Z,direct_url=<?>,disk_format='qcow2',id=d5e062d7-95ef-409c-9ad0-60f7cf6f44ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='823d3e7e313a44e9a50531e3fef22a1b',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T10:01:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'encrypted': False, 'encryption_options': None, 'device_type': 'disk', 'size': 0, 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'd5e062d7-95ef-409c-9ad0-60f7cf6f44ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb  2 05:06:10 np0005604790 nova_compute[252672]: 2026-02-02 10:06:10.795 252676 WARNING nova.virt.libvirt.driver [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 05:06:10 np0005604790 nova_compute[252672]: 2026-02-02 10:06:10.802 252676 DEBUG nova.virt.libvirt.host [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb  2 05:06:10 np0005604790 nova_compute[252672]: 2026-02-02 10:06:10.803 252676 DEBUG nova.virt.libvirt.host [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb  2 05:06:10 np0005604790 nova_compute[252672]: 2026-02-02 10:06:10.810 252676 DEBUG nova.virt.libvirt.host [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb  2 05:06:10 np0005604790 nova_compute[252672]: 2026-02-02 10:06:10.811 252676 DEBUG nova.virt.libvirt.host [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb  2 05:06:10 np0005604790 nova_compute[252672]: 2026-02-02 10:06:10.812 252676 DEBUG nova.virt.libvirt.driver [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb  2 05:06:10 np0005604790 nova_compute[252672]: 2026-02-02 10:06:10.812 252676 DEBUG nova.virt.hardware [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-02T10:01:40Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1194feb9-e285-414e-825a-1e77171d092f',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T10:01:42Z,direct_url=<?>,disk_format='qcow2',id=d5e062d7-95ef-409c-9ad0-60f7cf6f44ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='823d3e7e313a44e9a50531e3fef22a1b',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T10:01:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb  2 05:06:10 np0005604790 nova_compute[252672]: 2026-02-02 10:06:10.813 252676 DEBUG nova.virt.hardware [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb  2 05:06:10 np0005604790 nova_compute[252672]: 2026-02-02 10:06:10.813 252676 DEBUG nova.virt.hardware [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb  2 05:06:10 np0005604790 nova_compute[252672]: 2026-02-02 10:06:10.814 252676 DEBUG nova.virt.hardware [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb  2 05:06:10 np0005604790 nova_compute[252672]: 2026-02-02 10:06:10.814 252676 DEBUG nova.virt.hardware [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb  2 05:06:10 np0005604790 nova_compute[252672]: 2026-02-02 10:06:10.815 252676 DEBUG nova.virt.hardware [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb  2 05:06:10 np0005604790 nova_compute[252672]: 2026-02-02 10:06:10.815 252676 DEBUG nova.virt.hardware [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb  2 05:06:10 np0005604790 nova_compute[252672]: 2026-02-02 10:06:10.816 252676 DEBUG nova.virt.hardware [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb  2 05:06:10 np0005604790 nova_compute[252672]: 2026-02-02 10:06:10.816 252676 DEBUG nova.virt.hardware [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb  2 05:06:10 np0005604790 nova_compute[252672]: 2026-02-02 10:06:10.816 252676 DEBUG nova.virt.hardware [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb  2 05:06:10 np0005604790 nova_compute[252672]: 2026-02-02 10:06:10.817 252676 DEBUG nova.virt.hardware [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
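These topology lines trace nova.virt.hardware enumerating (sockets, cores, threads) factorizations of the vCPU count within the 65536-per-dimension limits; with one vCPU and no flavor or image preferences, 1:1:1 is the only candidate, and it is what lands in the <topology> element of the guest XML further down. An illustrative re-implementation of the enumeration (not nova's actual code):

    # Enumerate sockets*cores*threads factorizations of a vCPU count.
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        for sockets in range(1, min(vcpus, max_sockets) + 1):
            if vcpus % sockets:
                continue
            for cores in range(1, min(vcpus // sockets, max_cores) + 1):
                if (vcpus // sockets) % cores:
                    continue
                threads = vcpus // (sockets * cores)
                if threads <= max_threads:
                    yield sockets, cores, threads

    print(list(possible_topologies(1)))  # [(1, 1, 1)], as chosen above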
Feb  2 05:06:10 np0005604790 nova_compute[252672]: 2026-02-02 10:06:10.821 252676 DEBUG oslo_concurrency.processutils [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 05:06:10 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:06:10 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24240089d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:06:11 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 05:06:11 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4008810149' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Feb  2 05:06:11 np0005604790 nova_compute[252672]: 2026-02-02 10:06:11.294 252676 DEBUG oslo_concurrency.processutils [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 05:06:11 np0005604790 nova_compute[252672]: 2026-02-02 10:06:11.328 252676 DEBUG nova.storage.rbd_utils [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] rbd image 987bf707-685e-40f6-9dc2-ff3b606ae75d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 05:06:11 np0005604790 nova_compute[252672]: 2026-02-02 10:06:11.334 252676 DEBUG oslo_concurrency.processutils [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 05:06:11 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 05:06:11 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1097599874' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Feb  2 05:06:11 np0005604790 nova_compute[252672]: 2026-02-02 10:06:11.790 252676 DEBUG oslo_concurrency.processutils [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
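The repeated "ceph mon dump --format=json" calls discover the monitor addresses that become the <host> entries inside the RBD <source> elements of the guest XML below. A sketch of that extraction, using the same command and client id as the log:

    # Pull monitor addresses out of `ceph mon dump --format=json`.
    import json
    from oslo_concurrency import processutils

    out, _ = processutils.execute(
        'ceph', 'mon', 'dump', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    mons = json.loads(out)['mons']
    # Each addr looks like "192.168.122.100:6789/0"; drop the /nonce.
    print([m['addr'].rsplit('/', 1)[0] for m in mons])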
Feb  2 05:06:11 np0005604790 nova_compute[252672]: 2026-02-02 10:06:11.792 252676 DEBUG nova.virt.libvirt.vif [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T10:06:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-436631524',display_name='tempest-TestNetworkBasicOps-server-436631524',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-436631524',id=5,image_ref='d5e062d7-95ef-409c-9ad0-60f7cf6f44ce',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD6QwWRtLBNr+UQfzZxNs6ZP/B/DQvIFD/YTMSNUgPWkHplcARrJygwNu7Ke89LNPkLCTWUicv/Q6AJ2Dn3lPN3cul0jZxrwDYu6LTNn2NgLviv6U0QMXJRYiNuHVK7BLg==',key_name='tempest-TestNetworkBasicOps-1945155213',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='efbfe697ca674d72b47da5adf3e42c0c',ramdisk_id='',reservation_id='r-t2c8tszz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='d5e062d7-95ef-409c-9ad0-60f7cf6f44ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-793549693',owner_user_name='tempest-TestNetworkBasicOps-793549693-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T10:06:07Z,user_data=None,user_id='1b1695a2a70d4aa0aa350ba17d8f6d5e',uuid=987bf707-685e-40f6-9dc2-ff3b606ae75d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3053bb96-bda8-4bde-ab6b-d64a2b4bb32a", "address": "fa:16:3e:d0:42:cd", "network": {"id": "a66b06ac-62ee-43ce-a46e-36641cc6c6b6", "bridge": "br-int", "label": "tempest-network-smoke--1210238442", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "efbfe697ca674d72b47da5adf3e42c0c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3053bb96-bd", "ovs_interfaceid": "3053bb96-bda8-4bde-ab6b-d64a2b4bb32a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb  2 05:06:11 np0005604790 nova_compute[252672]: 2026-02-02 10:06:11.792 252676 DEBUG nova.network.os_vif_util [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Converting VIF {"id": "3053bb96-bda8-4bde-ab6b-d64a2b4bb32a", "address": "fa:16:3e:d0:42:cd", "network": {"id": "a66b06ac-62ee-43ce-a46e-36641cc6c6b6", "bridge": "br-int", "label": "tempest-network-smoke--1210238442", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "efbfe697ca674d72b47da5adf3e42c0c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3053bb96-bd", "ovs_interfaceid": "3053bb96-bda8-4bde-ab6b-d64a2b4bb32a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 05:06:11 np0005604790 nova_compute[252672]: 2026-02-02 10:06:11.793 252676 DEBUG nova.network.os_vif_util [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d0:42:cd,bridge_name='br-int',has_traffic_filtering=True,id=3053bb96-bda8-4bde-ab6b-d64a2b4bb32a,network=Network(a66b06ac-62ee-43ce-a46e-36641cc6c6b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3053bb96-bd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 05:06:11 np0005604790 nova_compute[252672]: 2026-02-02 10:06:11.793 252676 DEBUG nova.objects.instance [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Lazy-loading 'pci_devices' on Instance uuid 987bf707-685e-40f6-9dc2-ff3b606ae75d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 05:06:11 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:06:11 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24040016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:06:11 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:06:11 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23f8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:06:11 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:06:11 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:06:11 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:06:11.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:06:12 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:06:12 np0005604790 nova_compute[252672]: 2026-02-02 10:06:12.349 252676 DEBUG nova.virt.libvirt.driver [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 987bf707-685e-40f6-9dc2-ff3b606ae75d] End _get_guest_xml xml=<domain type="kvm">
Feb  2 05:06:12 np0005604790 nova_compute[252672]:  <uuid>987bf707-685e-40f6-9dc2-ff3b606ae75d</uuid>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:  <name>instance-00000005</name>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:  <memory>131072</memory>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:  <vcpu>1</vcpu>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:  <metadata>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb  2 05:06:12 np0005604790 nova_compute[252672]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:      <nova:name>tempest-TestNetworkBasicOps-server-436631524</nova:name>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:      <nova:creationTime>2026-02-02 10:06:10</nova:creationTime>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:      <nova:flavor name="m1.nano">
Feb  2 05:06:12 np0005604790 nova_compute[252672]:        <nova:memory>128</nova:memory>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:        <nova:disk>1</nova:disk>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:        <nova:swap>0</nova:swap>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:        <nova:ephemeral>0</nova:ephemeral>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:        <nova:vcpus>1</nova:vcpus>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:      </nova:flavor>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:      <nova:owner>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:        <nova:user uuid="1b1695a2a70d4aa0aa350ba17d8f6d5e">tempest-TestNetworkBasicOps-793549693-project-member</nova:user>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:        <nova:project uuid="efbfe697ca674d72b47da5adf3e42c0c">tempest-TestNetworkBasicOps-793549693</nova:project>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:      </nova:owner>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:      <nova:root type="image" uuid="d5e062d7-95ef-409c-9ad0-60f7cf6f44ce"/>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:      <nova:ports>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:        <nova:port uuid="3053bb96-bda8-4bde-ab6b-d64a2b4bb32a">
Feb  2 05:06:12 np0005604790 nova_compute[252672]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:        </nova:port>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:      </nova:ports>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:    </nova:instance>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:  </metadata>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:  <sysinfo type="smbios">
Feb  2 05:06:12 np0005604790 nova_compute[252672]:    <system>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:      <entry name="manufacturer">RDO</entry>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:      <entry name="product">OpenStack Compute</entry>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:      <entry name="serial">987bf707-685e-40f6-9dc2-ff3b606ae75d</entry>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:      <entry name="uuid">987bf707-685e-40f6-9dc2-ff3b606ae75d</entry>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:      <entry name="family">Virtual Machine</entry>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:    </system>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:  </sysinfo>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:  <os>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:    <type arch="x86_64" machine="q35">hvm</type>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:    <boot dev="hd"/>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:    <smbios mode="sysinfo"/>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:  </os>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:  <features>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:    <acpi/>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:    <apic/>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:    <vmcoreinfo/>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:  </features>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:  <clock offset="utc">
Feb  2 05:06:12 np0005604790 nova_compute[252672]:    <timer name="pit" tickpolicy="delay"/>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:    <timer name="rtc" tickpolicy="catchup"/>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:    <timer name="hpet" present="no"/>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:  </clock>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:  <cpu mode="host-model" match="exact">
Feb  2 05:06:12 np0005604790 nova_compute[252672]:    <topology sockets="1" cores="1" threads="1"/>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:  </cpu>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:  <devices>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:    <disk type="network" device="disk">
Feb  2 05:06:12 np0005604790 nova_compute[252672]:      <driver type="raw" cache="none"/>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:      <source protocol="rbd" name="vms/987bf707-685e-40f6-9dc2-ff3b606ae75d_disk">
Feb  2 05:06:12 np0005604790 nova_compute[252672]:        <host name="192.168.122.100" port="6789"/>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:        <host name="192.168.122.102" port="6789"/>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:        <host name="192.168.122.101" port="6789"/>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:      </source>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:      <auth username="openstack">
Feb  2 05:06:12 np0005604790 nova_compute[252672]:        <secret type="ceph" uuid="d241d473-9fcb-5f74-b163-f1ca4454e7f1"/>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:      </auth>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:      <target dev="vda" bus="virtio"/>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:    </disk>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:    <disk type="network" device="cdrom">
Feb  2 05:06:12 np0005604790 nova_compute[252672]:      <driver type="raw" cache="none"/>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:      <source protocol="rbd" name="vms/987bf707-685e-40f6-9dc2-ff3b606ae75d_disk.config">
Feb  2 05:06:12 np0005604790 nova_compute[252672]:        <host name="192.168.122.100" port="6789"/>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:        <host name="192.168.122.102" port="6789"/>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:        <host name="192.168.122.101" port="6789"/>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:      </source>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:      <auth username="openstack">
Feb  2 05:06:12 np0005604790 nova_compute[252672]:        <secret type="ceph" uuid="d241d473-9fcb-5f74-b163-f1ca4454e7f1"/>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:      </auth>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:      <target dev="sda" bus="sata"/>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:    </disk>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:    <interface type="ethernet">
Feb  2 05:06:12 np0005604790 nova_compute[252672]:      <mac address="fa:16:3e:d0:42:cd"/>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:      <model type="virtio"/>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:      <driver name="vhost" rx_queue_size="512"/>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:      <mtu size="1442"/>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:      <target dev="tap3053bb96-bd"/>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:    </interface>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:    <serial type="pty">
Feb  2 05:06:12 np0005604790 nova_compute[252672]:      <log file="/var/lib/nova/instances/987bf707-685e-40f6-9dc2-ff3b606ae75d/console.log" append="off"/>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:    </serial>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:    <video>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:      <model type="virtio"/>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:    </video>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:    <input type="tablet" bus="usb"/>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:    <rng model="virtio">
Feb  2 05:06:12 np0005604790 nova_compute[252672]:      <backend model="random">/dev/urandom</backend>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:    </rng>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root"/>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:    <controller type="usb" index="0"/>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:    <memballoon model="virtio">
Feb  2 05:06:12 np0005604790 nova_compute[252672]:      <stats period="10"/>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:    </memballoon>
Feb  2 05:06:12 np0005604790 nova_compute[252672]:  </devices>
Feb  2 05:06:12 np0005604790 nova_compute[252672]: </domain>
Feb  2 05:06:12 np0005604790 nova_compute[252672]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb  2 05:06:12 np0005604790 nova_compute[252672]: 2026-02-02 10:06:12.350 252676 DEBUG nova.compute.manager [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 987bf707-685e-40f6-9dc2-ff3b606ae75d] Preparing to wait for external event network-vif-plugged-3053bb96-bda8-4bde-ab6b-d64a2b4bb32a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb  2 05:06:12 np0005604790 nova_compute[252672]: 2026-02-02 10:06:12.350 252676 DEBUG oslo_concurrency.lockutils [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Acquiring lock "987bf707-685e-40f6-9dc2-ff3b606ae75d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 05:06:12 np0005604790 nova_compute[252672]: 2026-02-02 10:06:12.350 252676 DEBUG oslo_concurrency.lockutils [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Lock "987bf707-685e-40f6-9dc2-ff3b606ae75d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 05:06:12 np0005604790 nova_compute[252672]: 2026-02-02 10:06:12.351 252676 DEBUG oslo_concurrency.lockutils [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Lock "987bf707-685e-40f6-9dc2-ff3b606ae75d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 05:06:12 np0005604790 nova_compute[252672]: 2026-02-02 10:06:12.351 252676 DEBUG nova.virt.libvirt.vif [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T10:06:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-436631524',display_name='tempest-TestNetworkBasicOps-server-436631524',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-436631524',id=5,image_ref='d5e062d7-95ef-409c-9ad0-60f7cf6f44ce',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD6QwWRtLBNr+UQfzZxNs6ZP/B/DQvIFD/YTMSNUgPWkHplcARrJygwNu7Ke89LNPkLCTWUicv/Q6AJ2Dn3lPN3cul0jZxrwDYu6LTNn2NgLviv6U0QMXJRYiNuHVK7BLg==',key_name='tempest-TestNetworkBasicOps-1945155213',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='efbfe697ca674d72b47da5adf3e42c0c',ramdisk_id='',reservation_id='r-t2c8tszz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='d5e062d7-95ef-409c-9ad0-60f7cf6f44ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-793549693',owner_user_name='tempest-TestNetworkBasicOps-793549693-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T10:06:07Z,user_data=None,user_id='1b1695a2a70d4aa0aa350ba17d8f6d5e',uuid=987bf707-685e-40f6-9dc2-ff3b606ae75d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3053bb96-bda8-4bde-ab6b-d64a2b4bb32a", "address": "fa:16:3e:d0:42:cd", "network": {"id": "a66b06ac-62ee-43ce-a46e-36641cc6c6b6", "bridge": "br-int", "label": "tempest-network-smoke--1210238442", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "efbfe697ca674d72b47da5adf3e42c0c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3053bb96-bd", "ovs_interfaceid": "3053bb96-bda8-4bde-ab6b-d64a2b4bb32a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb  2 05:06:12 np0005604790 nova_compute[252672]: 2026-02-02 10:06:12.352 252676 DEBUG nova.network.os_vif_util [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Converting VIF {"id": "3053bb96-bda8-4bde-ab6b-d64a2b4bb32a", "address": "fa:16:3e:d0:42:cd", "network": {"id": "a66b06ac-62ee-43ce-a46e-36641cc6c6b6", "bridge": "br-int", "label": "tempest-network-smoke--1210238442", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "efbfe697ca674d72b47da5adf3e42c0c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3053bb96-bd", "ovs_interfaceid": "3053bb96-bda8-4bde-ab6b-d64a2b4bb32a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 05:06:12 np0005604790 nova_compute[252672]: 2026-02-02 10:06:12.352 252676 DEBUG nova.network.os_vif_util [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d0:42:cd,bridge_name='br-int',has_traffic_filtering=True,id=3053bb96-bda8-4bde-ab6b-d64a2b4bb32a,network=Network(a66b06ac-62ee-43ce-a46e-36641cc6c6b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3053bb96-bd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 05:06:12 np0005604790 nova_compute[252672]: 2026-02-02 10:06:12.352 252676 DEBUG os_vif [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d0:42:cd,bridge_name='br-int',has_traffic_filtering=True,id=3053bb96-bda8-4bde-ab6b-d64a2b4bb32a,network=Network(a66b06ac-62ee-43ce-a46e-36641cc6c6b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3053bb96-bd') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb  2 05:06:12 np0005604790 nova_compute[252672]: 2026-02-02 10:06:12.353 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:06:12 np0005604790 nova_compute[252672]: 2026-02-02 10:06:12.353 252676 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 05:06:12 np0005604790 nova_compute[252672]: 2026-02-02 10:06:12.354 252676 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 05:06:12 np0005604790 nova_compute[252672]: 2026-02-02 10:06:12.357 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:06:12 np0005604790 nova_compute[252672]: 2026-02-02 10:06:12.357 252676 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3053bb96-bd, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 05:06:12 np0005604790 nova_compute[252672]: 2026-02-02 10:06:12.357 252676 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3053bb96-bd, col_values=(('external_ids', {'iface-id': '3053bb96-bda8-4bde-ab6b-d64a2b4bb32a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d0:42:cd', 'vm-uuid': '987bf707-685e-40f6-9dc2-ff3b606ae75d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 05:06:12 np0005604790 nova_compute[252672]: 2026-02-02 10:06:12.359 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:06:12 np0005604790 NetworkManager[49024]: <info>  [1770026772.3600] manager: (tap3053bb96-bd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/41)
Feb  2 05:06:12 np0005604790 nova_compute[252672]: 2026-02-02 10:06:12.361 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 05:06:12 np0005604790 nova_compute[252672]: 2026-02-02 10:06:12.366 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:06:12 np0005604790 nova_compute[252672]: 2026-02-02 10:06:12.367 252676 INFO os_vif [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d0:42:cd,bridge_name='br-int',has_traffic_filtering=True,id=3053bb96-bda8-4bde-ab6b-d64a2b4bb32a,network=Network(a66b06ac-62ee-43ce-a46e-36641cc6c6b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3053bb96-bd')#033[00m
Feb  2 05:06:12 np0005604790 nova_compute[252672]: 2026-02-02 10:06:12.424 252676 DEBUG nova.virt.libvirt.driver [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 05:06:12 np0005604790 nova_compute[252672]: 2026-02-02 10:06:12.424 252676 DEBUG nova.virt.libvirt.driver [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 05:06:12 np0005604790 nova_compute[252672]: 2026-02-02 10:06:12.424 252676 DEBUG nova.virt.libvirt.driver [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] No VIF found with MAC fa:16:3e:d0:42:cd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb  2 05:06:12 np0005604790 nova_compute[252672]: 2026-02-02 10:06:12.425 252676 INFO nova.virt.libvirt.driver [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 987bf707-685e-40f6-9dc2-ff3b606ae75d] Using config drive#033[00m
Feb  2 05:06:12 np0005604790 nova_compute[252672]: 2026-02-02 10:06:12.449 252676 DEBUG nova.storage.rbd_utils [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] rbd image 987bf707-685e-40f6-9dc2-ff3b606ae75d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 05:06:12 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:06:12 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:06:12 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:06:12.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:06:12 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v804: 353 pgs: 353 active+clean; 167 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb  2 05:06:12 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:06:12 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2428004620 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:06:13 np0005604790 nova_compute[252672]: 2026-02-02 10:06:13.623 252676 INFO nova.virt.libvirt.driver [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 987bf707-685e-40f6-9dc2-ff3b606ae75d] Creating config drive at /var/lib/nova/instances/987bf707-685e-40f6-9dc2-ff3b606ae75d/disk.config#033[00m
Feb  2 05:06:13 np0005604790 nova_compute[252672]: 2026-02-02 10:06:13.633 252676 DEBUG oslo_concurrency.processutils [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/987bf707-685e-40f6-9dc2-ff3b606ae75d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpcm31e3_3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 05:06:13 np0005604790 nova_compute[252672]: 2026-02-02 10:06:13.764 252676 DEBUG oslo_concurrency.processutils [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/987bf707-685e-40f6-9dc2-ff3b606ae75d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpcm31e3_3" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 05:06:13 np0005604790 nova_compute[252672]: 2026-02-02 10:06:13.794 252676 DEBUG nova.storage.rbd_utils [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] rbd image 987bf707-685e-40f6-9dc2-ff3b606ae75d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 05:06:13 np0005604790 nova_compute[252672]: 2026-02-02 10:06:13.798 252676 DEBUG oslo_concurrency.processutils [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/987bf707-685e-40f6-9dc2-ff3b606ae75d/disk.config 987bf707-685e-40f6-9dc2-ff3b606ae75d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 05:06:13 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:06:13 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2424000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:06:13 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:06:13 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24040016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:06:13 np0005604790 nova_compute[252672]: 2026-02-02 10:06:13.971 252676 DEBUG oslo_concurrency.processutils [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/987bf707-685e-40f6-9dc2-ff3b606ae75d/disk.config 987bf707-685e-40f6-9dc2-ff3b606ae75d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 05:06:13 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:06:13 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:06:13 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:06:13.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:06:13 np0005604790 nova_compute[252672]: 2026-02-02 10:06:13.972 252676 INFO nova.virt.libvirt.driver [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 987bf707-685e-40f6-9dc2-ff3b606ae75d] Deleting local config drive /var/lib/nova/instances/987bf707-685e-40f6-9dc2-ff3b606ae75d/disk.config because it was imported into RBD.#033[00m
Feb  2 05:06:14 np0005604790 kernel: tap3053bb96-bd: entered promiscuous mode
Feb  2 05:06:14 np0005604790 NetworkManager[49024]: <info>  [1770026774.0300] manager: (tap3053bb96-bd): new Tun device (/org/freedesktop/NetworkManager/Devices/42)
Feb  2 05:06:14 np0005604790 systemd-udevd[262282]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 05:06:14 np0005604790 ovn_controller[154631]: 2026-02-02T10:06:14Z|00058|binding|INFO|Claiming lport 3053bb96-bda8-4bde-ab6b-d64a2b4bb32a for this chassis.
Feb  2 05:06:14 np0005604790 ovn_controller[154631]: 2026-02-02T10:06:14Z|00059|binding|INFO|3053bb96-bda8-4bde-ab6b-d64a2b4bb32a: Claiming fa:16:3e:d0:42:cd 10.100.0.5
Feb  2 05:06:14 np0005604790 nova_compute[252672]: 2026-02-02 10:06:14.069 252676 DEBUG nova.network.neutron [req-1d13a78c-b2c4-4410-902b-772a9b80e525 req-28c67dde-ebd8-4d71-b222-a509669d6d5f b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 987bf707-685e-40f6-9dc2-ff3b606ae75d] Updated VIF entry in instance network info cache for port 3053bb96-bda8-4bde-ab6b-d64a2b4bb32a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 05:06:14 np0005604790 nova_compute[252672]: 2026-02-02 10:06:14.070 252676 DEBUG nova.network.neutron [req-1d13a78c-b2c4-4410-902b-772a9b80e525 req-28c67dde-ebd8-4d71-b222-a509669d6d5f b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 987bf707-685e-40f6-9dc2-ff3b606ae75d] Updating instance_info_cache with network_info: [{"id": "3053bb96-bda8-4bde-ab6b-d64a2b4bb32a", "address": "fa:16:3e:d0:42:cd", "network": {"id": "a66b06ac-62ee-43ce-a46e-36641cc6c6b6", "bridge": "br-int", "label": "tempest-network-smoke--1210238442", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "efbfe697ca674d72b47da5adf3e42c0c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3053bb96-bd", "ovs_interfaceid": "3053bb96-bda8-4bde-ab6b-d64a2b4bb32a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 05:06:14 np0005604790 nova_compute[252672]: 2026-02-02 10:06:14.073 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:06:14 np0005604790 NetworkManager[49024]: <info>  [1770026774.0855] device (tap3053bb96-bd): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb  2 05:06:14 np0005604790 NetworkManager[49024]: <info>  [1770026774.0877] device (tap3053bb96-bd): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb  2 05:06:14 np0005604790 nova_compute[252672]: 2026-02-02 10:06:14.087 252676 DEBUG oslo_concurrency.lockutils [req-1d13a78c-b2c4-4410-902b-772a9b80e525 req-28c67dde-ebd8-4d71-b222-a509669d6d5f b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Releasing lock "refresh_cache-987bf707-685e-40f6-9dc2-ff3b606ae75d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 05:06:14 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:06:14.094 165364 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d0:42:cd 10.100.0.5'], port_security=['fa:16:3e:d0:42:cd 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '987bf707-685e-40f6-9dc2-ff3b606ae75d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a66b06ac-62ee-43ce-a46e-36641cc6c6b6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'efbfe697ca674d72b47da5adf3e42c0c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '35995cfa-61b9-4083-b048-5f2b7642c470', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=51a69684-d1b7-4c65-b997-4dcb2e8a8e05, chassis=[<ovs.db.idl.Row object at 0x7fa8c6e46640>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa8c6e46640>], logical_port=3053bb96-bda8-4bde-ab6b-d64a2b4bb32a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 05:06:14 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:06:14.096 165364 INFO neutron.agent.ovn.metadata.agent [-] Port 3053bb96-bda8-4bde-ab6b-d64a2b4bb32a in datapath a66b06ac-62ee-43ce-a46e-36641cc6c6b6 bound to our chassis#033[00m
Feb  2 05:06:14 np0005604790 systemd-machined[219024]: New machine qemu-3-instance-00000005.
Feb  2 05:06:14 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:06:14.098 165364 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a66b06ac-62ee-43ce-a46e-36641cc6c6b6#033[00m
Feb  2 05:06:14 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:06:14.111 257524 DEBUG oslo.privsep.daemon [-] privsep: reply[10326407-4f36-487a-b526-172484804494]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:06:14 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:06:14.113 165364 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa66b06ac-61 in ovnmeta-a66b06ac-62ee-43ce-a46e-36641cc6c6b6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb  2 05:06:14 np0005604790 systemd[1]: Started Virtual Machine qemu-3-instance-00000005.
Feb  2 05:06:14 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:06:14.115 257524 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa66b06ac-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb  2 05:06:14 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:06:14.115 257524 DEBUG oslo.privsep.daemon [-] privsep: reply[f9fdf525-e375-42b7-99ed-b1fd762eff83]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:06:14 np0005604790 ovn_controller[154631]: 2026-02-02T10:06:14Z|00060|binding|INFO|Setting lport 3053bb96-bda8-4bde-ab6b-d64a2b4bb32a ovn-installed in OVS
Feb  2 05:06:14 np0005604790 ovn_controller[154631]: 2026-02-02T10:06:14Z|00061|binding|INFO|Setting lport 3053bb96-bda8-4bde-ab6b-d64a2b4bb32a up in Southbound
Feb  2 05:06:14 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:06:14.117 257524 DEBUG oslo.privsep.daemon [-] privsep: reply[61fbf79f-13c4-485d-bc90-85c3781e19e9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:06:14 np0005604790 nova_compute[252672]: 2026-02-02 10:06:14.118 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:06:14 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:06:14.131 166028 DEBUG oslo.privsep.daemon [-] privsep: reply[f7239940-817a-4b75-a4d3-e434d0cbbb1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:06:14 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:06:14.156 257524 DEBUG oslo.privsep.daemon [-] privsep: reply[1182c9f5-97b6-4cff-bbe7-738a32ebe9d1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:06:14 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:06:14.185 257582 DEBUG oslo.privsep.daemon [-] privsep: reply[a83aceb4-51b8-4997-84df-a97c6c155568]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:06:14 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:06:14.192 257524 DEBUG oslo.privsep.daemon [-] privsep: reply[2e90e528-8afb-4037-8c85-c9150e9c6929]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:06:14 np0005604790 NetworkManager[49024]: <info>  [1770026774.1938] manager: (tapa66b06ac-60): new Veth device (/org/freedesktop/NetworkManager/Devices/43)
Feb  2 05:06:14 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:06:14.232 257582 DEBUG oslo.privsep.daemon [-] privsep: reply[c1b1207f-036d-4c88-bbad-5e8f867a95e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:06:14 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:06:14.235 257582 DEBUG oslo.privsep.daemon [-] privsep: reply[655a2dfb-f9df-4486-8653-0e23ab3acc5a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:06:14 np0005604790 NetworkManager[49024]: <info>  [1770026774.2590] device (tapa66b06ac-60): carrier: link connected
Feb  2 05:06:14 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:06:14.263 257582 DEBUG oslo.privsep.daemon [-] privsep: reply[4f43369e-b8eb-4a8f-81de-12dbb902f0c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:06:14 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:06:14.278 257524 DEBUG oslo.privsep.daemon [-] privsep: reply[21606fe4-c552-4951-81a9-b1ae212b2fe8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa66b06ac-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fc:af:53'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 393354, 'reachable_time': 40949, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 262318, 'error': None, 'target': 'ovnmeta-a66b06ac-62ee-43ce-a46e-36641cc6c6b6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:06:14 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:06:14.290 257524 DEBUG oslo.privsep.daemon [-] privsep: reply[0687a317-a2ee-42b7-9b31-bfc936b8a178]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fefc:af53'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 393354, 'tstamp': 393354}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 262319, 'error': None, 'target': 'ovnmeta-a66b06ac-62ee-43ce-a46e-36641cc6c6b6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:06:14 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:06:14.308 257524 DEBUG oslo.privsep.daemon [-] privsep: reply[49b76276-35b6-42a6-a52c-bbe37fa40df5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa66b06ac-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fc:af:53'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 393354, 'reachable_time': 40949, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 262320, 'error': None, 'target': 'ovnmeta-a66b06ac-62ee-43ce-a46e-36641cc6c6b6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:06:14 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:06:14.338 257524 DEBUG oslo.privsep.daemon [-] privsep: reply[9fe60eb7-dc2a-450d-97bc-edbd954566e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:06:14 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:06:14.400 257524 DEBUG oslo.privsep.daemon [-] privsep: reply[e419a6b7-de2a-4cf4-a912-ef7bb60c1e89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:06:14 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:06:14.402 165364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa66b06ac-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 05:06:14 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:06:14.402 165364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 05:06:14 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:06:14.403 165364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa66b06ac-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 05:06:14 np0005604790 nova_compute[252672]: 2026-02-02 10:06:14.406 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:06:14 np0005604790 kernel: tapa66b06ac-60: entered promiscuous mode
Feb  2 05:06:14 np0005604790 nova_compute[252672]: 2026-02-02 10:06:14.410 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:06:14 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:06:14.412 165364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa66b06ac-60, col_values=(('external_ids', {'iface-id': 'fd4da63a-4612-4fcd-8e65-a88f24118a15'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 05:06:14 np0005604790 NetworkManager[49024]: <info>  [1770026774.4123] manager: (tapa66b06ac-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/44)
Feb  2 05:06:14 np0005604790 nova_compute[252672]: 2026-02-02 10:06:14.413 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:06:14 np0005604790 ovn_controller[154631]: 2026-02-02T10:06:14Z|00062|binding|INFO|Releasing lport fd4da63a-4612-4fcd-8e65-a88f24118a15 from this chassis (sb_readonly=0)
Feb  2 05:06:14 np0005604790 nova_compute[252672]: 2026-02-02 10:06:14.414 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:06:14 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:06:14.415 165364 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a66b06ac-62ee-43ce-a46e-36641cc6c6b6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a66b06ac-62ee-43ce-a46e-36641cc6c6b6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb  2 05:06:14 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:06:14.417 257524 DEBUG oslo.privsep.daemon [-] privsep: reply[c13232d6-b417-4d29-8add-21a0937dbcef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:06:14 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:06:14.418 165364 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb  2 05:06:14 np0005604790 ovn_metadata_agent[165359]: global
Feb  2 05:06:14 np0005604790 ovn_metadata_agent[165359]:    log         /dev/log local0 debug
Feb  2 05:06:14 np0005604790 ovn_metadata_agent[165359]:    log-tag     haproxy-metadata-proxy-a66b06ac-62ee-43ce-a46e-36641cc6c6b6
Feb  2 05:06:14 np0005604790 ovn_metadata_agent[165359]:    user        root
Feb  2 05:06:14 np0005604790 ovn_metadata_agent[165359]:    group       root
Feb  2 05:06:14 np0005604790 ovn_metadata_agent[165359]:    maxconn     1024
Feb  2 05:06:14 np0005604790 ovn_metadata_agent[165359]:    pidfile     /var/lib/neutron/external/pids/a66b06ac-62ee-43ce-a46e-36641cc6c6b6.pid.haproxy
Feb  2 05:06:14 np0005604790 ovn_metadata_agent[165359]:    daemon
Feb  2 05:06:14 np0005604790 ovn_metadata_agent[165359]: 
Feb  2 05:06:14 np0005604790 ovn_metadata_agent[165359]: defaults
Feb  2 05:06:14 np0005604790 ovn_metadata_agent[165359]:    log global
Feb  2 05:06:14 np0005604790 ovn_metadata_agent[165359]:    mode http
Feb  2 05:06:14 np0005604790 ovn_metadata_agent[165359]:    option httplog
Feb  2 05:06:14 np0005604790 ovn_metadata_agent[165359]:    option dontlognull
Feb  2 05:06:14 np0005604790 ovn_metadata_agent[165359]:    option http-server-close
Feb  2 05:06:14 np0005604790 ovn_metadata_agent[165359]:    option forwardfor
Feb  2 05:06:14 np0005604790 ovn_metadata_agent[165359]:    retries                 3
Feb  2 05:06:14 np0005604790 ovn_metadata_agent[165359]:    timeout http-request    30s
Feb  2 05:06:14 np0005604790 ovn_metadata_agent[165359]:    timeout connect         30s
Feb  2 05:06:14 np0005604790 ovn_metadata_agent[165359]:    timeout client          32s
Feb  2 05:06:14 np0005604790 ovn_metadata_agent[165359]:    timeout server          32s
Feb  2 05:06:14 np0005604790 ovn_metadata_agent[165359]:    timeout http-keep-alive 30s
Feb  2 05:06:14 np0005604790 ovn_metadata_agent[165359]: 
Feb  2 05:06:14 np0005604790 ovn_metadata_agent[165359]: 
Feb  2 05:06:14 np0005604790 ovn_metadata_agent[165359]: listen listener
Feb  2 05:06:14 np0005604790 ovn_metadata_agent[165359]:    bind 169.254.169.254:80
Feb  2 05:06:14 np0005604790 ovn_metadata_agent[165359]:    server metadata /var/lib/neutron/metadata_proxy
Feb  2 05:06:14 np0005604790 ovn_metadata_agent[165359]:    http-request add-header X-OVN-Network-ID a66b06ac-62ee-43ce-a46e-36641cc6c6b6
Feb  2 05:06:14 np0005604790 ovn_metadata_agent[165359]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb  2 05:06:14 np0005604790 nova_compute[252672]: 2026-02-02 10:06:14.420 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:06:14 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:06:14.421 165364 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a66b06ac-62ee-43ce-a46e-36641cc6c6b6', 'env', 'PROCESS_TAG=haproxy-a66b06ac-62ee-43ce-a46e-36641cc6c6b6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a66b06ac-62ee-43ce-a46e-36641cc6c6b6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb  2 05:06:14 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:06:14 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:06:14 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:06:14.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:06:14 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v805: 353 pgs: 353 active+clean; 167 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb  2 05:06:14 np0005604790 nova_compute[252672]: 2026-02-02 10:06:14.752 252676 DEBUG nova.compute.manager [req-003785de-f26a-4460-a654-06290df604e1 req-61274582-c95e-41b6-a4ad-41eb73968c93 b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 987bf707-685e-40f6-9dc2-ff3b606ae75d] Received event network-vif-plugged-3053bb96-bda8-4bde-ab6b-d64a2b4bb32a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 05:06:14 np0005604790 nova_compute[252672]: 2026-02-02 10:06:14.753 252676 DEBUG oslo_concurrency.lockutils [req-003785de-f26a-4460-a654-06290df604e1 req-61274582-c95e-41b6-a4ad-41eb73968c93 b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Acquiring lock "987bf707-685e-40f6-9dc2-ff3b606ae75d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 05:06:14 np0005604790 nova_compute[252672]: 2026-02-02 10:06:14.753 252676 DEBUG oslo_concurrency.lockutils [req-003785de-f26a-4460-a654-06290df604e1 req-61274582-c95e-41b6-a4ad-41eb73968c93 b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Lock "987bf707-685e-40f6-9dc2-ff3b606ae75d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 05:06:14 np0005604790 nova_compute[252672]: 2026-02-02 10:06:14.754 252676 DEBUG oslo_concurrency.lockutils [req-003785de-f26a-4460-a654-06290df604e1 req-61274582-c95e-41b6-a4ad-41eb73968c93 b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Lock "987bf707-685e-40f6-9dc2-ff3b606ae75d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 05:06:14 np0005604790 nova_compute[252672]: 2026-02-02 10:06:14.755 252676 DEBUG nova.compute.manager [req-003785de-f26a-4460-a654-06290df604e1 req-61274582-c95e-41b6-a4ad-41eb73968c93 b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 987bf707-685e-40f6-9dc2-ff3b606ae75d] Processing event network-vif-plugged-3053bb96-bda8-4bde-ab6b-d64a2b4bb32a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb  2 05:06:14 np0005604790 podman[262386]: 2026-02-02 10:06:14.8548217 +0000 UTC m=+0.075296613 container create fce384c9cc71fadf6f6a7a75f094acdb86dacdb1566ef5a7625a0e06e9ec99de (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc, name=neutron-haproxy-ovnmeta-a66b06ac-62ee-43ce-a46e-36641cc6c6b6, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Feb  2 05:06:14 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:06:14] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Feb  2 05:06:14 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:06:14] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Feb  2 05:06:14 np0005604790 systemd[1]: Started libpod-conmon-fce384c9cc71fadf6f6a7a75f094acdb86dacdb1566ef5a7625a0e06e9ec99de.scope.
Feb  2 05:06:14 np0005604790 podman[262386]: 2026-02-02 10:06:14.805060913 +0000 UTC m=+0.025535916 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc
Feb  2 05:06:14 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:06:14 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23f8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:06:14 np0005604790 nova_compute[252672]: 2026-02-02 10:06:14.906 252676 DEBUG nova.compute.manager [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 987bf707-685e-40f6-9dc2-ff3b606ae75d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb  2 05:06:14 np0005604790 nova_compute[252672]: 2026-02-02 10:06:14.908 252676 DEBUG nova.virt.driver [None req-e8d4f08b-73c0-49f9-aab1-c10f2abef40e - - - - - -] Emitting event <LifecycleEvent: 1770026774.9056501, 987bf707-685e-40f6-9dc2-ff3b606ae75d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 05:06:14 np0005604790 nova_compute[252672]: 2026-02-02 10:06:14.908 252676 INFO nova.compute.manager [None req-e8d4f08b-73c0-49f9-aab1-c10f2abef40e - - - - - -] [instance: 987bf707-685e-40f6-9dc2-ff3b606ae75d] VM Started (Lifecycle Event)#033[00m
Feb  2 05:06:14 np0005604790 nova_compute[252672]: 2026-02-02 10:06:14.912 252676 DEBUG nova.virt.libvirt.driver [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 987bf707-685e-40f6-9dc2-ff3b606ae75d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb  2 05:06:14 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:06:14 np0005604790 nova_compute[252672]: 2026-02-02 10:06:14.917 252676 INFO nova.virt.libvirt.driver [-] [instance: 987bf707-685e-40f6-9dc2-ff3b606ae75d] Instance spawned successfully.#033[00m
Feb  2 05:06:14 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f52626465591042b1cfc847674c8ad730cb20d3e8cc9bf45258c3dcf858b388/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb  2 05:06:14 np0005604790 nova_compute[252672]: 2026-02-02 10:06:14.918 252676 DEBUG nova.virt.libvirt.driver [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 987bf707-685e-40f6-9dc2-ff3b606ae75d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb  2 05:06:14 np0005604790 podman[262386]: 2026-02-02 10:06:14.93453336 +0000 UTC m=+0.155008293 container init fce384c9cc71fadf6f6a7a75f094acdb86dacdb1566ef5a7625a0e06e9ec99de (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc, name=neutron-haproxy-ovnmeta-a66b06ac-62ee-43ce-a46e-36641cc6c6b6, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb  2 05:06:14 np0005604790 nova_compute[252672]: 2026-02-02 10:06:14.936 252676 DEBUG nova.compute.manager [None req-e8d4f08b-73c0-49f9-aab1-c10f2abef40e - - - - - -] [instance: 987bf707-685e-40f6-9dc2-ff3b606ae75d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 05:06:14 np0005604790 podman[262386]: 2026-02-02 10:06:14.939885182 +0000 UTC m=+0.160360085 container start fce384c9cc71fadf6f6a7a75f094acdb86dacdb1566ef5a7625a0e06e9ec99de (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc, name=neutron-haproxy-ovnmeta-a66b06ac-62ee-43ce-a46e-36641cc6c6b6, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb  2 05:06:14 np0005604790 nova_compute[252672]: 2026-02-02 10:06:14.944 252676 DEBUG nova.compute.manager [None req-e8d4f08b-73c0-49f9-aab1-c10f2abef40e - - - - - -] [instance: 987bf707-685e-40f6-9dc2-ff3b606ae75d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb  2 05:06:14 np0005604790 nova_compute[252672]: 2026-02-02 10:06:14.948 252676 DEBUG nova.virt.libvirt.driver [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 987bf707-685e-40f6-9dc2-ff3b606ae75d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb  2 05:06:14 np0005604790 nova_compute[252672]: 2026-02-02 10:06:14.948 252676 DEBUG nova.virt.libvirt.driver [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 987bf707-685e-40f6-9dc2-ff3b606ae75d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb  2 05:06:14 np0005604790 nova_compute[252672]: 2026-02-02 10:06:14.949 252676 DEBUG nova.virt.libvirt.driver [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 987bf707-685e-40f6-9dc2-ff3b606ae75d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb  2 05:06:14 np0005604790 nova_compute[252672]: 2026-02-02 10:06:14.949 252676 DEBUG nova.virt.libvirt.driver [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 987bf707-685e-40f6-9dc2-ff3b606ae75d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb  2 05:06:14 np0005604790 nova_compute[252672]: 2026-02-02 10:06:14.949 252676 DEBUG nova.virt.libvirt.driver [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 987bf707-685e-40f6-9dc2-ff3b606ae75d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb  2 05:06:14 np0005604790 nova_compute[252672]: 2026-02-02 10:06:14.950 252676 DEBUG nova.virt.libvirt.driver [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 987bf707-685e-40f6-9dc2-ff3b606ae75d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb  2 05:06:14 np0005604790 neutron-haproxy-ovnmeta-a66b06ac-62ee-43ce-a46e-36641cc6c6b6[262406]: [NOTICE]   (262410) : New worker (262412) forked
Feb  2 05:06:14 np0005604790 neutron-haproxy-ovnmeta-a66b06ac-62ee-43ce-a46e-36641cc6c6b6[262406]: [NOTICE]   (262410) : Loading success.
Feb  2 05:06:14 np0005604790 nova_compute[252672]: 2026-02-02 10:06:14.979 252676 INFO nova.compute.manager [None req-e8d4f08b-73c0-49f9-aab1-c10f2abef40e - - - - - -] [instance: 987bf707-685e-40f6-9dc2-ff3b606ae75d] During sync_power_state the instance has a pending task (spawning). Skip.
Feb  2 05:06:14 np0005604790 nova_compute[252672]: 2026-02-02 10:06:14.980 252676 DEBUG nova.virt.driver [None req-e8d4f08b-73c0-49f9-aab1-c10f2abef40e - - - - - -] Emitting event <LifecycleEvent: 1770026774.9077067, 987bf707-685e-40f6-9dc2-ff3b606ae75d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb  2 05:06:14 np0005604790 nova_compute[252672]: 2026-02-02 10:06:14.980 252676 INFO nova.compute.manager [None req-e8d4f08b-73c0-49f9-aab1-c10f2abef40e - - - - - -] [instance: 987bf707-685e-40f6-9dc2-ff3b606ae75d] VM Paused (Lifecycle Event)
Feb  2 05:06:15 np0005604790 nova_compute[252672]: 2026-02-02 10:06:15.027 252676 DEBUG nova.compute.manager [None req-e8d4f08b-73c0-49f9-aab1-c10f2abef40e - - - - - -] [instance: 987bf707-685e-40f6-9dc2-ff3b606ae75d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb  2 05:06:15 np0005604790 nova_compute[252672]: 2026-02-02 10:06:15.032 252676 DEBUG nova.virt.driver [None req-e8d4f08b-73c0-49f9-aab1-c10f2abef40e - - - - - -] Emitting event <LifecycleEvent: 1770026774.911445, 987bf707-685e-40f6-9dc2-ff3b606ae75d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb  2 05:06:15 np0005604790 nova_compute[252672]: 2026-02-02 10:06:15.032 252676 INFO nova.compute.manager [None req-e8d4f08b-73c0-49f9-aab1-c10f2abef40e - - - - - -] [instance: 987bf707-685e-40f6-9dc2-ff3b606ae75d] VM Resumed (Lifecycle Event)
Feb  2 05:06:15 np0005604790 nova_compute[252672]: 2026-02-02 10:06:15.043 252676 INFO nova.compute.manager [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 987bf707-685e-40f6-9dc2-ff3b606ae75d] Took 7.97 seconds to spawn the instance on the hypervisor.
Feb  2 05:06:15 np0005604790 nova_compute[252672]: 2026-02-02 10:06:15.044 252676 DEBUG nova.compute.manager [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 987bf707-685e-40f6-9dc2-ff3b606ae75d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb  2 05:06:15 np0005604790 nova_compute[252672]: 2026-02-02 10:06:15.055 252676 DEBUG nova.compute.manager [None req-e8d4f08b-73c0-49f9-aab1-c10f2abef40e - - - - - -] [instance: 987bf707-685e-40f6-9dc2-ff3b606ae75d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb  2 05:06:15 np0005604790 nova_compute[252672]: 2026-02-02 10:06:15.060 252676 DEBUG nova.compute.manager [None req-e8d4f08b-73c0-49f9-aab1-c10f2abef40e - - - - - -] [instance: 987bf707-685e-40f6-9dc2-ff3b606ae75d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb  2 05:06:15 np0005604790 nova_compute[252672]: 2026-02-02 10:06:15.084 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:06:15 np0005604790 nova_compute[252672]: 2026-02-02 10:06:15.088 252676 INFO nova.compute.manager [None req-e8d4f08b-73c0-49f9-aab1-c10f2abef40e - - - - - -] [instance: 987bf707-685e-40f6-9dc2-ff3b606ae75d] During sync_power_state the instance has a pending task (spawning). Skip.
Feb  2 05:06:15 np0005604790 nova_compute[252672]: 2026-02-02 10:06:15.124 252676 INFO nova.compute.manager [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 987bf707-685e-40f6-9dc2-ff3b606ae75d] Took 9.00 seconds to build instance.
Feb  2 05:06:15 np0005604790 nova_compute[252672]: 2026-02-02 10:06:15.144 252676 DEBUG oslo_concurrency.lockutils [None req-af2d6dd4-0dea-43ab-b4f7-92aeae5bc4d8 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Lock "987bf707-685e-40f6-9dc2-ff3b606ae75d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.108s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
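
Each lifecycle event (Started, Paused, Resumed) above triggers a power-state sync, and each sync is skipped because the DB still shows power_state 0 (NOSTATE) with task_state spawning while libvirt already reports 1 (RUNNING). A toy version of that decision, with the numeric constants taken from the log lines themselves; this is a simplification of Nova's handler, not its code:

    NOSTATE, RUNNING = 0, 1   # values as they appear in the sync lines above

    def sync_power_state(db_power_state, vm_power_state, task_state):
        """Toy reconstruction of the skip/update decision logged above."""
        if task_state is not None:
            # "During sync_power_state the instance has a pending task. Skip."
            return "skip"
        if db_power_state != vm_power_state:
            return "update-db"
        return "in-sync"

    print(sync_power_state(NOSTATE, RUNNING, "spawning"))  # -> skip
    print(sync_power_state(NOSTATE, RUNNING, None))        # -> update-db
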
Feb  2 05:06:15 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:06:15 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2428004620 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:06:15 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:06:15 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2424000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:06:15 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:06:15 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:06:15 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:06:15.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
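
The anonymous "HEAD / HTTP/1.0" 200 requests that recur every second or so from 192.168.122.100 and .102 are consistent with load-balancer health checks against radosgw's beast frontend (the ceph haproxy-rgw container shows up a few lines below). A minimal equivalent probe; the host/port are assumptions, since this excerpt never shows the RGW listen address:

    import http.client

    # Hypothetical endpoint: the RGW port is not visible in this log excerpt.
    conn = http.client.HTTPConnection("127.0.0.1", 8080, timeout=2)
    conn.request("HEAD", "/")
    resp = conn.getresponse()
    print(resp.status)   # a healthy radosgw answers 200 with an empty body
    conn.close()
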
Feb  2 05:06:16 np0005604790 podman[262548]: 2026-02-02 10:06:16.242357934 +0000 UTC m=+0.063360428 container exec 79ef7165b184aa21ab9e464efe33891b6304e1ba848414549f299d1b301d6783 (image=quay.io/ceph/ceph:v19, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mon-compute-0, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 05:06:16 np0005604790 podman[262548]: 2026-02-02 10:06:16.348059602 +0000 UTC m=+0.169062096 container exec_died 79ef7165b184aa21ab9e464efe33891b6304e1ba848414549f299d1b301d6783 (image=quay.io/ceph/ceph:v19, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mon-compute-0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 05:06:16 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:06:16 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:06:16 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:06:16.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:06:16 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v806: 353 pgs: 353 active+clean; 167 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb  2 05:06:16 np0005604790 nova_compute[252672]: 2026-02-02 10:06:16.839 252676 DEBUG nova.compute.manager [req-328d414f-b049-468f-a1fa-7e1c8d5b9119 req-f7e8ebac-80a4-4b5a-8569-27b7c64c8f0b b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 987bf707-685e-40f6-9dc2-ff3b606ae75d] Received event network-vif-plugged-3053bb96-bda8-4bde-ab6b-d64a2b4bb32a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb  2 05:06:16 np0005604790 nova_compute[252672]: 2026-02-02 10:06:16.839 252676 DEBUG oslo_concurrency.lockutils [req-328d414f-b049-468f-a1fa-7e1c8d5b9119 req-f7e8ebac-80a4-4b5a-8569-27b7c64c8f0b b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Acquiring lock "987bf707-685e-40f6-9dc2-ff3b606ae75d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 05:06:16 np0005604790 nova_compute[252672]: 2026-02-02 10:06:16.839 252676 DEBUG oslo_concurrency.lockutils [req-328d414f-b049-468f-a1fa-7e1c8d5b9119 req-f7e8ebac-80a4-4b5a-8569-27b7c64c8f0b b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Lock "987bf707-685e-40f6-9dc2-ff3b606ae75d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 05:06:16 np0005604790 nova_compute[252672]: 2026-02-02 10:06:16.839 252676 DEBUG oslo_concurrency.lockutils [req-328d414f-b049-468f-a1fa-7e1c8d5b9119 req-f7e8ebac-80a4-4b5a-8569-27b7c64c8f0b b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Lock "987bf707-685e-40f6-9dc2-ff3b606ae75d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 05:06:16 np0005604790 nova_compute[252672]: 2026-02-02 10:06:16.840 252676 DEBUG nova.compute.manager [req-328d414f-b049-468f-a1fa-7e1c8d5b9119 req-f7e8ebac-80a4-4b5a-8569-27b7c64c8f0b b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 987bf707-685e-40f6-9dc2-ff3b606ae75d] No waiting events found dispatching network-vif-plugged-3053bb96-bda8-4bde-ab6b-d64a2b4bb32a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb  2 05:06:16 np0005604790 nova_compute[252672]: 2026-02-02 10:06:16.840 252676 WARNING nova.compute.manager [req-328d414f-b049-468f-a1fa-7e1c8d5b9119 req-f7e8ebac-80a4-4b5a-8569-27b7c64c8f0b b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 987bf707-685e-40f6-9dc2-ff3b606ae75d] Received unexpected event network-vif-plugged-3053bb96-bda8-4bde-ab6b-d64a2b4bb32a for instance with vm_state active and task_state None.
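
The Acquiring/acquired/released triplet above is oslo.concurrency's standard named-lock pattern: a per-instance "<uuid>-events" lock serializes event dispatch. The same primitive is available directly from the real oslo_concurrency API; a minimal sketch using its context-manager form, with the lock name mirroring the convention in the log:

    from oslo_concurrency import lockutils

    instance_uuid = "987bf707-685e-40f6-9dc2-ff3b606ae75d"

    # lockutils.lock() is the context-manager form behind the
    # "Acquiring lock ... acquired ... released" DEBUG lines above.
    with lockutils.lock(f"{instance_uuid}-events"):
        pass  # pop the waiter for network-vif-plugged, if one is registered
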
Feb  2 05:06:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:06:16 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24040016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:06:16 np0005604790 podman[262691]: 2026-02-02 10:06:16.94307661 +0000 UTC m=+0.056897267 container exec 19feecaa7fcd517b2bfb973fc8fcf623ad4b0956f16c53292d24fe12f4780190 (image=quay.io/ceph/haproxy:2.3, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-rgw-default-compute-0-avekxu)
Feb  2 05:06:16 np0005604790 podman[262691]: 2026-02-02 10:06:16.953792754 +0000 UTC m=+0.067613411 container exec_died 19feecaa7fcd517b2bfb973fc8fcf623ad4b0956f16c53292d24fe12f4780190 (image=quay.io/ceph/haproxy:2.3, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-rgw-default-compute-0-avekxu)
Feb  2 05:06:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:06:17.121Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb  2 05:06:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:06:17.123Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
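
Alertmanager is failing to POST to the ceph-dashboard webhook receivers on compute-1/compute-2 port 8443 (i/o timeout, then context deadline exceeded). To test the network path independently of the dashboard, one could stand up a throwaway receiver on the target host; a minimal sketch, assuming plain HTTP as in the failing URLs and that port 8443 is free:

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Receiver(BaseHTTPRequestHandler):
        def do_POST(self):
            # Alertmanager posts a JSON body to the configured webhook path.
            body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
            print(self.path, body[:200])
            self.send_response(200)
            self.end_headers()

    # 8443 matches the unreachable URLs in the two log lines above.
    HTTPServer(("0.0.0.0", 8443), Receiver).serve_forever()
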
Feb  2 05:06:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] Optimize plan auto_2026-02-02_10:06:17
Feb  2 05:06:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 05:06:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] do_upmap
Feb  2 05:06:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] pools ['backups', '.nfs', 'volumes', 'vms', '.mgr', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.log', 'default.rgw.meta', 'default.rgw.control', 'images', 'cephfs.cephfs.meta']
Feb  2 05:06:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 05:06:17 np0005604790 podman[262755]: 2026-02-02 10:06:17.183463703 +0000 UTC m=+0.062901536 container exec 690ade5beb0aa03a99cf3b4b1da5291c57988034f4a9506f786da5a6fb824998 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 05:06:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 05:06:17 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 05:06:17 np0005604790 podman[262755]: 2026-02-02 10:06:17.188303261 +0000 UTC m=+0.067741074 container exec_died 690ade5beb0aa03a99cf3b4b1da5291c57988034f4a9506f786da5a6fb824998 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 05:06:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:06:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:06:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:06:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:06:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:06:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:06:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:06:17 np0005604790 nova_compute[252672]: 2026-02-02 10:06:17.360 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:06:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 05:06:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 05:06:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 05:06:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 05:06:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 05:06:17 np0005604790 podman[262828]: 2026-02-02 10:06:17.480523735 +0000 UTC m=+0.068887314 container exec 98cf6dae1ab8feec23d34379e4cd365c1f4e26263e73f93cb34bc5dd2a59d411 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default)
Feb  2 05:06:17 np0005604790 podman[262828]: 2026-02-02 10:06:17.490803007 +0000 UTC m=+0.079166556 container exec_died 98cf6dae1ab8feec23d34379e4cd365c1f4e26263e73f93cb34bc5dd2a59d411 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 05:06:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 05:06:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:06:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 05:06:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:06:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011055244554600508 of space, bias 1.0, pg target 0.3316573366380153 quantized to 32 (current 32)
Feb  2 05:06:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:06:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:06:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:06:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:06:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:06:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Feb  2 05:06:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:06:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Feb  2 05:06:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:06:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:06:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:06:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 05:06:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:06:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Feb  2 05:06:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:06:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:06:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:06:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 05:06:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:06:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
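
The pg_autoscaler numbers above are internally consistent: raw pg target = usage ratio × bias × (target PGs per OSD × OSD count), which for this cluster works out to a factor of 300, e.g. 0.0011055 × 1.0 × 300 ≈ 0.33166 for 'vms'. That matches the default mon_target_pg_per_osd of 100 if this 60 GiB cluster has three OSDs, which is an assumption; the raw value is then quantized to a power of two subject to pool minimums and change thresholds (not reconstructed here). A quick check of the arithmetic against the logged values:

    POOLS = {
        ".mgr":               (7.185749983720779e-06, 1.0),
        "vms":                (0.0011055244554600508, 1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
    }

    OSDS = 3                # assumption: 60 GiB total across three 20 GiB OSDs
    TARGET_PER_OSD = 100    # Ceph default mon_target_pg_per_osd

    for pool, (ratio, bias) in POOLS.items():
        target = ratio * bias * OSDS * TARGET_PER_OSD
        print(f"{pool}: pg target {target}")  # reproduces the logged targets
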
Feb  2 05:06:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 05:06:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 05:06:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 05:06:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 05:06:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 05:06:17 np0005604790 podman[262913]: 2026-02-02 10:06:17.764128412 +0000 UTC m=+0.054769671 container exec 5f2bbf7994e4a92479e9d1b19dfd01c3876abee624a9d2019b778a31c38bb173 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-keepalived-nfs-cephfs-compute-0-pqolko, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2023-02-22T09:23:20, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2, description=keepalived for Ceph, release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, architecture=x86_64)
Feb  2 05:06:17 np0005604790 podman[262913]: 2026-02-02 10:06:17.802533628 +0000 UTC m=+0.093174887 container exec_died 5f2bbf7994e4a92479e9d1b19dfd01c3876abee624a9d2019b778a31c38bb173 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-keepalived-nfs-cephfs-compute-0-pqolko, io.buildah.version=1.28.2, vendor=Red Hat, Inc., description=keepalived for Ceph, distribution-scope=public, name=keepalived, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, architecture=x86_64)
Feb  2 05:06:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:06:17 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23f8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:06:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:06:17 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2428004620 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:06:17 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:06:17 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:06:17 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:06:17.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:06:18 np0005604790 podman[262976]: 2026-02-02 10:06:18.047632245 +0000 UTC m=+0.067656792 container exec 63a3970896ab11bd3cbece1b971a05452b0bd8a5b643f0eac52d5b3639ab19c4 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 05:06:18 np0005604790 podman[262976]: 2026-02-02 10:06:18.078959144 +0000 UTC m=+0.098983641 container exec_died 63a3970896ab11bd3cbece1b971a05452b0bd8a5b643f0eac52d5b3639ab19c4 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 05:06:18 np0005604790 podman[263050]: 2026-02-02 10:06:18.340721762 +0000 UTC m=+0.076041304 container exec 207575e5b32cec4e058e0ca4f48bad94c6e447fc9aa765a0aa8117f4604125b2 (image=quay.io/ceph/grafana:10.4.0, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Feb  2 05:06:18 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:06:18 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:06:18 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:06:18.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:06:18 np0005604790 podman[263050]: 2026-02-02 10:06:18.49554881 +0000 UTC m=+0.230868351 container exec_died 207575e5b32cec4e058e0ca4f48bad94c6e447fc9aa765a0aa8117f4604125b2 (image=quay.io/ceph/grafana:10.4.0, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Feb  2 05:06:18 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v807: 353 pgs: 353 active+clean; 167 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Feb  2 05:06:18 np0005604790 ovn_controller[154631]: 2026-02-02T10:06:18Z|00063|binding|INFO|Releasing lport fd4da63a-4612-4fcd-8e65-a88f24118a15 from this chassis (sb_readonly=0)
Feb  2 05:06:18 np0005604790 NetworkManager[49024]: <info>  [1770026778.7052] manager: (patch-br-int-to-provnet-3738ab71-03c6-44c1-bc4f-10cf3e96782e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/45)
Feb  2 05:06:18 np0005604790 NetworkManager[49024]: <info>  [1770026778.7063] manager: (patch-provnet-3738ab71-03c6-44c1-bc4f-10cf3e96782e-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/46)
Feb  2 05:06:18 np0005604790 nova_compute[252672]: 2026-02-02 10:06:18.704 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:06:18 np0005604790 ovn_controller[154631]: 2026-02-02T10:06:18Z|00064|binding|INFO|Releasing lport fd4da63a-4612-4fcd-8e65-a88f24118a15 from this chassis (sb_readonly=0)
Feb  2 05:06:18 np0005604790 nova_compute[252672]: 2026-02-02 10:06:18.721 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:06:18 np0005604790 nova_compute[252672]: 2026-02-02 10:06:18.730 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
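
The recurring "[POLLIN] on fd 24 __log_wakeup" lines are ovsdbapp's OVS poller noting that the OVSDB IDL socket became readable (here, as OVN rebinds the port). The ovs Python bindings expose that primitive directly; a self-contained sketch with a socketpair standing in for the IDL connection:

    import select
    import socket
    from ovs import poller   # the python3-ovs bindings ovsdbapp is built on

    a, b = socket.socketpair()
    b.send(b"update")        # make one end readable, like an OVSDB notification

    p = poller.Poller()
    p.fd_wait(a.fileno(), select.POLLIN)
    p.block()                # returns once the fd is readable; at debug level
    print("fd", a.fileno(), "readable:", a.recv(16))  # vlog logs "[POLLIN] on fd N"
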
Feb  2 05:06:18 np0005604790 podman[263140]: 2026-02-02 10:06:18.898358402 +0000 UTC m=+0.080582794 container exec 214561532da1ee185d9ab0bf03f5b2d46320266c11cddad43265e8842ed9d667 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 05:06:18 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:06:18 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2424000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:06:18 np0005604790 podman[263140]: 2026-02-02 10:06:18.954251791 +0000 UTC m=+0.136476123 container exec_died 214561532da1ee185d9ab0bf03f5b2d46320266c11cddad43265e8842ed9d667 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 05:06:19 np0005604790 nova_compute[252672]: 2026-02-02 10:06:19.012 252676 DEBUG nova.compute.manager [req-4c0dd241-a27b-47a2-aa98-38364bea74ed req-0db3f3cd-e1e0-457e-a60a-21ae1b40e8d2 b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 987bf707-685e-40f6-9dc2-ff3b606ae75d] Received event network-changed-3053bb96-bda8-4bde-ab6b-d64a2b4bb32a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb  2 05:06:19 np0005604790 nova_compute[252672]: 2026-02-02 10:06:19.014 252676 DEBUG nova.compute.manager [req-4c0dd241-a27b-47a2-aa98-38364bea74ed req-0db3f3cd-e1e0-457e-a60a-21ae1b40e8d2 b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 987bf707-685e-40f6-9dc2-ff3b606ae75d] Refreshing instance network info cache due to event network-changed-3053bb96-bda8-4bde-ab6b-d64a2b4bb32a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb  2 05:06:19 np0005604790 nova_compute[252672]: 2026-02-02 10:06:19.014 252676 DEBUG oslo_concurrency.lockutils [req-4c0dd241-a27b-47a2-aa98-38364bea74ed req-0db3f3cd-e1e0-457e-a60a-21ae1b40e8d2 b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Acquiring lock "refresh_cache-987bf707-685e-40f6-9dc2-ff3b606ae75d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb  2 05:06:19 np0005604790 nova_compute[252672]: 2026-02-02 10:06:19.015 252676 DEBUG oslo_concurrency.lockutils [req-4c0dd241-a27b-47a2-aa98-38364bea74ed req-0db3f3cd-e1e0-457e-a60a-21ae1b40e8d2 b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Acquired lock "refresh_cache-987bf707-685e-40f6-9dc2-ff3b606ae75d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb  2 05:06:19 np0005604790 nova_compute[252672]: 2026-02-02 10:06:19.017 252676 DEBUG nova.network.neutron [req-4c0dd241-a27b-47a2-aa98-38364bea74ed req-0db3f3cd-e1e0-457e-a60a-21ae1b40e8d2 b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 987bf707-685e-40f6-9dc2-ff3b606ae75d] Refreshing network info cache for port 3053bb96-bda8-4bde-ab6b-d64a2b4bb32a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb  2 05:06:19 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 05:06:19 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:06:19 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 05:06:19 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:06:19 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 05:06:19 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 05:06:19 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 05:06:19 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb  2 05:06:19 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 05:06:19 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:06:19 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Feb  2 05:06:19 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:06:19 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 05:06:19 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb  2 05:06:19 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 05:06:19 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb  2 05:06:19 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 05:06:19 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
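
Each handle_command/audit pair above is the mgr (cephadm module) driving the monitor through the mon_command interface. The same JSON-formatted commands can be issued from the real librados Python bindings; a minimal sketch, assuming the stock admin config path:

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")  # default admin config
    cluster.connect()

    # Same command as the "osd blocklist ls" audit lines above.
    cmd = json.dumps({"prefix": "osd blocklist ls", "format": "json"})
    ret, outbuf, outs = cluster.mon_command(cmd, b"")
    print(ret, json.loads(outbuf or b"[]"))
    cluster.shutdown()
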
Feb  2 05:06:19 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:06:19 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2404002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:06:19 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:06:19 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23f8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:06:19 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:06:19 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:06:19 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:06:19.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:06:20 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:06:20 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:06:20 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb  2 05:06:20 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:06:20 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:06:20 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb  2 05:06:20 np0005604790 nova_compute[252672]: 2026-02-02 10:06:20.087 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:06:20 np0005604790 podman[263383]: 2026-02-02 10:06:20.31741475 +0000 UTC m=+0.067290642 container create 0261fe9fce7fa556ffd3d7972f07dbb6f396a3f318d614d5be0bd93da757dff0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_burnell, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 05:06:20 np0005604790 systemd[1]: Started libpod-conmon-0261fe9fce7fa556ffd3d7972f07dbb6f396a3f318d614d5be0bd93da757dff0.scope.
Feb  2 05:06:20 np0005604790 podman[263383]: 2026-02-02 10:06:20.290561119 +0000 UTC m=+0.040437061 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:06:20 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:06:20 np0005604790 podman[263383]: 2026-02-02 10:06:20.413072861 +0000 UTC m=+0.162948793 container init 0261fe9fce7fa556ffd3d7972f07dbb6f396a3f318d614d5be0bd93da757dff0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_burnell, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 05:06:20 np0005604790 podman[263383]: 2026-02-02 10:06:20.41867886 +0000 UTC m=+0.168554762 container start 0261fe9fce7fa556ffd3d7972f07dbb6f396a3f318d614d5be0bd93da757dff0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_burnell, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 05:06:20 np0005604790 podman[263383]: 2026-02-02 10:06:20.422650885 +0000 UTC m=+0.172526757 container attach 0261fe9fce7fa556ffd3d7972f07dbb6f396a3f318d614d5be0bd93da757dff0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_burnell, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 05:06:20 np0005604790 nervous_burnell[263400]: 167 167
Feb  2 05:06:20 np0005604790 systemd[1]: libpod-0261fe9fce7fa556ffd3d7972f07dbb6f396a3f318d614d5be0bd93da757dff0.scope: Deactivated successfully.
Feb  2 05:06:20 np0005604790 podman[263383]: 2026-02-02 10:06:20.427503863 +0000 UTC m=+0.177379745 container died 0261fe9fce7fa556ffd3d7972f07dbb6f396a3f318d614d5be0bd93da757dff0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_burnell, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 05:06:20 np0005604790 systemd[1]: var-lib-containers-storage-overlay-232066fa5e269bb4df3767a7814c73254bc3c29ba21576b2e755732ba7de787a-merged.mount: Deactivated successfully.
Feb  2 05:06:20 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:06:20 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:06:20 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:06:20.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:06:20 np0005604790 podman[263383]: 2026-02-02 10:06:20.470402799 +0000 UTC m=+0.220278661 container remove 0261fe9fce7fa556ffd3d7972f07dbb6f396a3f318d614d5be0bd93da757dff0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_burnell, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Feb  2 05:06:20 np0005604790 systemd[1]: libpod-conmon-0261fe9fce7fa556ffd3d7972f07dbb6f396a3f318d614d5be0bd93da757dff0.scope: Deactivated successfully.
Feb  2 05:06:20 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v808: 353 pgs: 353 active+clean; 167 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Feb  2 05:06:20 np0005604790 podman[263424]: 2026-02-02 10:06:20.629977552 +0000 UTC m=+0.062255038 container create 16e671ac6f45e5132cc244b9b19c8d7621a464371802739445761971b4e34863 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_gagarin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 05:06:20 np0005604790 systemd[1]: Started libpod-conmon-16e671ac6f45e5132cc244b9b19c8d7621a464371802739445761971b4e34863.scope.
Feb  2 05:06:20 np0005604790 podman[263424]: 2026-02-02 10:06:20.607730303 +0000 UTC m=+0.040007789 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:06:20 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:06:20 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1dc916f3b45ca7ba5072c1c408319c84691b845b1cf12da0683c25f83cffce13/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 05:06:20 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1dc916f3b45ca7ba5072c1c408319c84691b845b1cf12da0683c25f83cffce13/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 05:06:20 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1dc916f3b45ca7ba5072c1c408319c84691b845b1cf12da0683c25f83cffce13/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 05:06:20 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1dc916f3b45ca7ba5072c1c408319c84691b845b1cf12da0683c25f83cffce13/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 05:06:20 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1dc916f3b45ca7ba5072c1c408319c84691b845b1cf12da0683c25f83cffce13/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 05:06:20 np0005604790 podman[263424]: 2026-02-02 10:06:20.772044712 +0000 UTC m=+0.204322218 container init 16e671ac6f45e5132cc244b9b19c8d7621a464371802739445761971b4e34863 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_gagarin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 05:06:20 np0005604790 podman[263424]: 2026-02-02 10:06:20.779374016 +0000 UTC m=+0.211651482 container start 16e671ac6f45e5132cc244b9b19c8d7621a464371802739445761971b4e34863 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_gagarin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb  2 05:06:20 np0005604790 podman[263424]: 2026-02-02 10:06:20.783539337 +0000 UTC m=+0.215816893 container attach 16e671ac6f45e5132cc244b9b19c8d7621a464371802739445761971b4e34863 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_gagarin, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Feb  2 05:06:20 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:06:20 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23f8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:06:21 np0005604790 serene_gagarin[263441]: --> passed data devices: 0 physical, 1 LVM
Feb  2 05:06:21 np0005604790 serene_gagarin[263441]: --> All data devices are unavailable
Feb  2 05:06:21 np0005604790 systemd[1]: libpod-16e671ac6f45e5132cc244b9b19c8d7621a464371802739445761971b4e34863.scope: Deactivated successfully.
Feb  2 05:06:21 np0005604790 podman[263456]: 2026-02-02 10:06:21.227667912 +0000 UTC m=+0.025716882 container died 16e671ac6f45e5132cc244b9b19c8d7621a464371802739445761971b4e34863 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_gagarin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Feb  2 05:06:21 np0005604790 systemd[1]: var-lib-containers-storage-overlay-1dc916f3b45ca7ba5072c1c408319c84691b845b1cf12da0683c25f83cffce13-merged.mount: Deactivated successfully.
Feb  2 05:06:21 np0005604790 podman[263456]: 2026-02-02 10:06:21.301470885 +0000 UTC m=+0.099519815 container remove 16e671ac6f45e5132cc244b9b19c8d7621a464371802739445761971b4e34863 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_gagarin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb  2 05:06:21 np0005604790 systemd[1]: libpod-conmon-16e671ac6f45e5132cc244b9b19c8d7621a464371802739445761971b4e34863.scope: Deactivated successfully.
Feb  2 05:06:21 np0005604790 nova_compute[252672]: 2026-02-02 10:06:21.772 252676 DEBUG nova.network.neutron [req-4c0dd241-a27b-47a2-aa98-38364bea74ed req-0db3f3cd-e1e0-457e-a60a-21ae1b40e8d2 b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 987bf707-685e-40f6-9dc2-ff3b606ae75d] Updated VIF entry in instance network info cache for port 3053bb96-bda8-4bde-ab6b-d64a2b4bb32a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb  2 05:06:21 np0005604790 nova_compute[252672]: 2026-02-02 10:06:21.775 252676 DEBUG nova.network.neutron [req-4c0dd241-a27b-47a2-aa98-38364bea74ed req-0db3f3cd-e1e0-457e-a60a-21ae1b40e8d2 b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 987bf707-685e-40f6-9dc2-ff3b606ae75d] Updating instance_info_cache with network_info: [{"id": "3053bb96-bda8-4bde-ab6b-d64a2b4bb32a", "address": "fa:16:3e:d0:42:cd", "network": {"id": "a66b06ac-62ee-43ce-a46e-36641cc6c6b6", "bridge": "br-int", "label": "tempest-network-smoke--1210238442", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "efbfe697ca674d72b47da5adf3e42c0c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3053bb96-bd", "ovs_interfaceid": "3053bb96-bda8-4bde-ab6b-d64a2b4bb32a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb  2 05:06:21 np0005604790 nova_compute[252672]: 2026-02-02 10:06:21.802 252676 DEBUG oslo_concurrency.lockutils [req-4c0dd241-a27b-47a2-aa98-38364bea74ed req-0db3f3cd-e1e0-457e-a60a-21ae1b40e8d2 b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Releasing lock "refresh_cache-987bf707-685e-40f6-9dc2-ff3b606ae75d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb  2 05:06:21 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:06:21 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2428004620 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:06:21 np0005604790 podman[263558]: 2026-02-02 10:06:21.910998688 +0000 UTC m=+0.049022769 container create 7042f8c66cb35e2bee6f898a94ad16418614fb24622d7042051851477f8d9fbe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_swartz, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb  2 05:06:21 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:06:21 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2404002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:06:21 np0005604790 systemd[1]: Started libpod-conmon-7042f8c66cb35e2bee6f898a94ad16418614fb24622d7042051851477f8d9fbe.scope.
Feb  2 05:06:21 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:06:21 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:06:21 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:06:21 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:06:21.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:06:21 np0005604790 podman[263558]: 2026-02-02 10:06:21.887203168 +0000 UTC m=+0.025227279 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:06:21 np0005604790 podman[263558]: 2026-02-02 10:06:21.999437949 +0000 UTC m=+0.137462090 container init 7042f8c66cb35e2bee6f898a94ad16418614fb24622d7042051851477f8d9fbe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_swartz, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb  2 05:06:22 np0005604790 podman[263558]: 2026-02-02 10:06:22.009576627 +0000 UTC m=+0.147600688 container start 7042f8c66cb35e2bee6f898a94ad16418614fb24622d7042051851477f8d9fbe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_swartz, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Feb  2 05:06:22 np0005604790 practical_swartz[263576]: 167 167
Feb  2 05:06:22 np0005604790 podman[263558]: 2026-02-02 10:06:22.017709392 +0000 UTC m=+0.155733463 container attach 7042f8c66cb35e2bee6f898a94ad16418614fb24622d7042051851477f8d9fbe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb  2 05:06:22 np0005604790 systemd[1]: libpod-7042f8c66cb35e2bee6f898a94ad16418614fb24622d7042051851477f8d9fbe.scope: Deactivated successfully.
Feb  2 05:06:22 np0005604790 podman[263558]: 2026-02-02 10:06:22.018544554 +0000 UTC m=+0.156568635 container died 7042f8c66cb35e2bee6f898a94ad16418614fb24622d7042051851477f8d9fbe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_swartz, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Feb  2 05:06:22 np0005604790 systemd[1]: var-lib-containers-storage-overlay-6c6c1f8ae6af11db40042e5674fcf7e51ce5d1f9425585cbc4037726839dc0c7-merged.mount: Deactivated successfully.
Feb  2 05:06:22 np0005604790 podman[263558]: 2026-02-02 10:06:22.079023045 +0000 UTC m=+0.217047116 container remove 7042f8c66cb35e2bee6f898a94ad16418614fb24622d7042051851477f8d9fbe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_swartz, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb  2 05:06:22 np0005604790 systemd[1]: libpod-conmon-7042f8c66cb35e2bee6f898a94ad16418614fb24622d7042051851477f8d9fbe.scope: Deactivated successfully.
Feb  2 05:06:22 np0005604790 podman[263600]: 2026-02-02 10:06:22.236238186 +0000 UTC m=+0.056453425 container create 6502746ecfed871b016f672cb739dbcb7e06d5da78649f17f9e28f07d823a5ff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_grothendieck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Feb  2 05:06:22 np0005604790 systemd[1]: Started libpod-conmon-6502746ecfed871b016f672cb739dbcb7e06d5da78649f17f9e28f07d823a5ff.scope.
Feb  2 05:06:22 np0005604790 podman[263600]: 2026-02-02 10:06:22.214752497 +0000 UTC m=+0.034967786 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:06:22 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:06:22 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:06:22 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5eefca2c30a27aa79707f4cee21e06574eee189ffe82ec9a07c5eba90ce704ff/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 05:06:22 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5eefca2c30a27aa79707f4cee21e06574eee189ffe82ec9a07c5eba90ce704ff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 05:06:22 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5eefca2c30a27aa79707f4cee21e06574eee189ffe82ec9a07c5eba90ce704ff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 05:06:22 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5eefca2c30a27aa79707f4cee21e06574eee189ffe82ec9a07c5eba90ce704ff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 05:06:22 np0005604790 podman[263600]: 2026-02-02 10:06:22.359634392 +0000 UTC m=+0.179849731 container init 6502746ecfed871b016f672cb739dbcb7e06d5da78649f17f9e28f07d823a5ff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_grothendieck, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True)
Feb  2 05:06:22 np0005604790 nova_compute[252672]: 2026-02-02 10:06:22.363 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:06:22 np0005604790 podman[263600]: 2026-02-02 10:06:22.370328585 +0000 UTC m=+0.190543864 container start 6502746ecfed871b016f672cb739dbcb7e06d5da78649f17f9e28f07d823a5ff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb  2 05:06:22 np0005604790 podman[263600]: 2026-02-02 10:06:22.381051079 +0000 UTC m=+0.201266408 container attach 6502746ecfed871b016f672cb739dbcb7e06d5da78649f17f9e28f07d823a5ff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_grothendieck, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 05:06:22 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:06:22 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:06:22 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:06:22.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:06:22 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v809: 353 pgs: 353 active+clean; 167 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Feb  2 05:06:22 np0005604790 relaxed_grothendieck[263618]: {
Feb  2 05:06:22 np0005604790 relaxed_grothendieck[263618]:    "1": [
Feb  2 05:06:22 np0005604790 relaxed_grothendieck[263618]:        {
Feb  2 05:06:22 np0005604790 relaxed_grothendieck[263618]:            "devices": [
Feb  2 05:06:22 np0005604790 relaxed_grothendieck[263618]:                "/dev/loop3"
Feb  2 05:06:22 np0005604790 relaxed_grothendieck[263618]:            ],
Feb  2 05:06:22 np0005604790 relaxed_grothendieck[263618]:            "lv_name": "ceph_lv0",
Feb  2 05:06:22 np0005604790 relaxed_grothendieck[263618]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 05:06:22 np0005604790 relaxed_grothendieck[263618]:            "lv_size": "21470642176",
Feb  2 05:06:22 np0005604790 relaxed_grothendieck[263618]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d241d473-9fcb-5f74-b163-f1ca4454e7f1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fabfc705-a3af-416c-81a4-3fd4d777fb5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 05:06:22 np0005604790 relaxed_grothendieck[263618]:            "lv_uuid": "lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a",
Feb  2 05:06:22 np0005604790 relaxed_grothendieck[263618]:            "name": "ceph_lv0",
Feb  2 05:06:22 np0005604790 relaxed_grothendieck[263618]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 05:06:22 np0005604790 relaxed_grothendieck[263618]:            "tags": {
Feb  2 05:06:22 np0005604790 relaxed_grothendieck[263618]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 05:06:22 np0005604790 relaxed_grothendieck[263618]:                "ceph.block_uuid": "lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a",
Feb  2 05:06:22 np0005604790 relaxed_grothendieck[263618]:                "ceph.cephx_lockbox_secret": "",
Feb  2 05:06:22 np0005604790 relaxed_grothendieck[263618]:                "ceph.cluster_fsid": "d241d473-9fcb-5f74-b163-f1ca4454e7f1",
Feb  2 05:06:22 np0005604790 relaxed_grothendieck[263618]:                "ceph.cluster_name": "ceph",
Feb  2 05:06:22 np0005604790 relaxed_grothendieck[263618]:                "ceph.crush_device_class": "",
Feb  2 05:06:22 np0005604790 relaxed_grothendieck[263618]:                "ceph.encrypted": "0",
Feb  2 05:06:22 np0005604790 relaxed_grothendieck[263618]:                "ceph.osd_fsid": "fabfc705-a3af-416c-81a4-3fd4d777fb5f",
Feb  2 05:06:22 np0005604790 relaxed_grothendieck[263618]:                "ceph.osd_id": "1",
Feb  2 05:06:22 np0005604790 relaxed_grothendieck[263618]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 05:06:22 np0005604790 relaxed_grothendieck[263618]:                "ceph.type": "block",
Feb  2 05:06:22 np0005604790 relaxed_grothendieck[263618]:                "ceph.vdo": "0",
Feb  2 05:06:22 np0005604790 relaxed_grothendieck[263618]:                "ceph.with_tpm": "0"
Feb  2 05:06:22 np0005604790 relaxed_grothendieck[263618]:            },
Feb  2 05:06:22 np0005604790 relaxed_grothendieck[263618]:            "type": "block",
Feb  2 05:06:22 np0005604790 relaxed_grothendieck[263618]:            "vg_name": "ceph_vg0"
Feb  2 05:06:22 np0005604790 relaxed_grothendieck[263618]:        }
Feb  2 05:06:22 np0005604790 relaxed_grothendieck[263618]:    ]
Feb  2 05:06:22 np0005604790 relaxed_grothendieck[263618]: }
Feb  2 05:06:22 np0005604790 systemd[1]: libpod-6502746ecfed871b016f672cb739dbcb7e06d5da78649f17f9e28f07d823a5ff.scope: Deactivated successfully.
Feb  2 05:06:22 np0005604790 podman[263600]: 2026-02-02 10:06:22.686595896 +0000 UTC m=+0.506811135 container died 6502746ecfed871b016f672cb739dbcb7e06d5da78649f17f9e28f07d823a5ff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_grothendieck, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 05:06:22 np0005604790 systemd[1]: var-lib-containers-storage-overlay-5eefca2c30a27aa79707f4cee21e06574eee189ffe82ec9a07c5eba90ce704ff-merged.mount: Deactivated successfully.
Feb  2 05:06:22 np0005604790 podman[263600]: 2026-02-02 10:06:22.769095749 +0000 UTC m=+0.589310988 container remove 6502746ecfed871b016f672cb739dbcb7e06d5da78649f17f9e28f07d823a5ff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_grothendieck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 05:06:22 np0005604790 systemd[1]: libpod-conmon-6502746ecfed871b016f672cb739dbcb7e06d5da78649f17f9e28f07d823a5ff.scope: Deactivated successfully.
Feb  2 05:06:22 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:06:22 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2424000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:06:23 np0005604790 podman[263732]: 2026-02-02 10:06:23.32945033 +0000 UTC m=+0.051478914 container create a07fbfa923a139dba74e00c0fe10dd70d1bc8e885ef4021c04944058a8a7741a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_lovelace, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Feb  2 05:06:23 np0005604790 systemd[1]: Started libpod-conmon-a07fbfa923a139dba74e00c0fe10dd70d1bc8e885ef4021c04944058a8a7741a.scope.
Feb  2 05:06:23 np0005604790 podman[263732]: 2026-02-02 10:06:23.304724375 +0000 UTC m=+0.026753039 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:06:23 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:06:23 np0005604790 podman[263732]: 2026-02-02 10:06:23.58410799 +0000 UTC m=+0.306136644 container init a07fbfa923a139dba74e00c0fe10dd70d1bc8e885ef4021c04944058a8a7741a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_lovelace, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 05:06:23 np0005604790 podman[263732]: 2026-02-02 10:06:23.591262659 +0000 UTC m=+0.313291233 container start a07fbfa923a139dba74e00c0fe10dd70d1bc8e885ef4021c04944058a8a7741a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb  2 05:06:23 np0005604790 amazing_lovelace[263749]: 167 167
Feb  2 05:06:23 np0005604790 systemd[1]: libpod-a07fbfa923a139dba74e00c0fe10dd70d1bc8e885ef4021c04944058a8a7741a.scope: Deactivated successfully.
Feb  2 05:06:23 np0005604790 podman[263732]: 2026-02-02 10:06:23.61698749 +0000 UTC m=+0.339016144 container attach a07fbfa923a139dba74e00c0fe10dd70d1bc8e885ef4021c04944058a8a7741a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_lovelace, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 05:06:23 np0005604790 podman[263732]: 2026-02-02 10:06:23.61813218 +0000 UTC m=+0.340160754 container died a07fbfa923a139dba74e00c0fe10dd70d1bc8e885ef4021c04944058a8a7741a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_lovelace, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 05:06:23 np0005604790 systemd[1]: var-lib-containers-storage-overlay-d973906c3a94ce687d80bb084e681e2d0072426ae496820aad66b024ba8ae435-merged.mount: Deactivated successfully.
Feb  2 05:06:23 np0005604790 podman[263732]: 2026-02-02 10:06:23.679763071 +0000 UTC m=+0.401791665 container remove a07fbfa923a139dba74e00c0fe10dd70d1bc8e885ef4021c04944058a8a7741a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_lovelace, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True)
Feb  2 05:06:23 np0005604790 systemd[1]: libpod-conmon-a07fbfa923a139dba74e00c0fe10dd70d1bc8e885ef4021c04944058a8a7741a.scope: Deactivated successfully.
Feb  2 05:06:23 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:06:23 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23f8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:06:23 np0005604790 podman[263775]: 2026-02-02 10:06:23.864349047 +0000 UTC m=+0.067901398 container create e9c281a7fd737bea73f4880dbee92bc5829424e270a4380455b598122e641283 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Feb  2 05:06:23 np0005604790 systemd[1]: Started libpod-conmon-e9c281a7fd737bea73f4880dbee92bc5829424e270a4380455b598122e641283.scope.
Feb  2 05:06:23 np0005604790 podman[263775]: 2026-02-02 10:06:23.829661029 +0000 UTC m=+0.033213410 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:06:23 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:06:23 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2428004620 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:06:23 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:06:23 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c6ba0e28d8162ed43e40d6e0d6cee6bdd3673350059dbb8e4a9993411a917b2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 05:06:23 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c6ba0e28d8162ed43e40d6e0d6cee6bdd3673350059dbb8e4a9993411a917b2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 05:06:23 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c6ba0e28d8162ed43e40d6e0d6cee6bdd3673350059dbb8e4a9993411a917b2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 05:06:23 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c6ba0e28d8162ed43e40d6e0d6cee6bdd3673350059dbb8e4a9993411a917b2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 05:06:23 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:06:23 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:06:23 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:06:23.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:06:23 np0005604790 podman[263775]: 2026-02-02 10:06:23.984428435 +0000 UTC m=+0.187980826 container init e9c281a7fd737bea73f4880dbee92bc5829424e270a4380455b598122e641283 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_bassi, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Feb  2 05:06:23 np0005604790 podman[263775]: 2026-02-02 10:06:23.990328652 +0000 UTC m=+0.193881003 container start e9c281a7fd737bea73f4880dbee92bc5829424e270a4380455b598122e641283 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_bassi, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 05:06:24 np0005604790 podman[263775]: 2026-02-02 10:06:24.000942182 +0000 UTC m=+0.204494583 container attach e9c281a7fd737bea73f4880dbee92bc5829424e270a4380455b598122e641283 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Feb  2 05:06:24 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:06:24 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:06:24 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:06:24.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:06:24 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v810: 353 pgs: 353 active+clean; 167 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Feb  2 05:06:24 np0005604790 lvm[263873]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 05:06:24 np0005604790 lvm[263873]: VG ceph_vg0 finished
Feb  2 05:06:24 np0005604790 unruffled_bassi[263792]: {}
Feb  2 05:06:24 np0005604790 systemd[1]: libpod-e9c281a7fd737bea73f4880dbee92bc5829424e270a4380455b598122e641283.scope: Deactivated successfully.
Feb  2 05:06:24 np0005604790 systemd[1]: libpod-e9c281a7fd737bea73f4880dbee92bc5829424e270a4380455b598122e641283.scope: Consumed 1.057s CPU time.
Feb  2 05:06:24 np0005604790 podman[263775]: 2026-02-02 10:06:24.770549612 +0000 UTC m=+0.974101973 container died e9c281a7fd737bea73f4880dbee92bc5829424e270a4380455b598122e641283 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_bassi, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Feb  2 05:06:24 np0005604790 podman[263865]: 2026-02-02 10:06:24.789346439 +0000 UTC m=+0.151987323 container health_status e39122d9482da8df802204ab6c35fb7c982874580968140e6f06bdfc8eefae36 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_controller)
Feb  2 05:06:24 np0005604790 systemd[1]: var-lib-containers-storage-overlay-0c6ba0e28d8162ed43e40d6e0d6cee6bdd3673350059dbb8e4a9993411a917b2-merged.mount: Deactivated successfully.
Feb  2 05:06:24 np0005604790 podman[263775]: 2026-02-02 10:06:24.818301896 +0000 UTC m=+1.021854247 container remove e9c281a7fd737bea73f4880dbee92bc5829424e270a4380455b598122e641283 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 05:06:24 np0005604790 systemd[1]: libpod-conmon-e9c281a7fd737bea73f4880dbee92bc5829424e270a4380455b598122e641283.scope: Deactivated successfully.
Feb  2 05:06:24 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:06:24] "GET /metrics HTTP/1.1" 200 48463 "" "Prometheus/2.51.0"
Feb  2 05:06:24 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:06:24] "GET /metrics HTTP/1.1" 200 48463 "" "Prometheus/2.51.0"
Feb  2 05:06:24 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 05:06:24 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:06:24 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 05:06:24 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:06:24 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2428004620 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:06:24 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:06:25 np0005604790 nova_compute[252672]: 2026-02-02 10:06:25.089 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:06:25 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:06:25 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2428004620 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:06:25 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:06:25 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:06:25 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:06:25 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2428004620 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:06:25 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:06:25 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:06:25 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:06:25.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:06:26 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:06:26 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:06:26 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:06:26.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:06:26 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v811: 353 pgs: 353 active+clean; 167 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Feb  2 05:06:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:06:26 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2428004620 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:06:27 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:06:27.123Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb  2 05:06:27 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:06:27.123Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb  2 05:06:27 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:06:27.124Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb  2 05:06:27 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:06:27 np0005604790 nova_compute[252672]: 2026-02-02 10:06:27.398 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:06:27 np0005604790 ovn_controller[154631]: 2026-02-02T10:06:27Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d0:42:cd 10.100.0.5
Feb  2 05:06:27 np0005604790 ovn_controller[154631]: 2026-02-02T10:06:27Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d0:42:cd 10.100.0.5
Feb  2 05:06:27 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:06:27 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2400003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:06:27 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:06:27 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2404002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:06:27 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:06:27 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:06:27 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:06:27.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:06:28 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:06:28 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:06:28 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:06:28.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:06:28 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v812: 353 pgs: 353 active+clean; 200 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 138 op/s
Feb  2 05:06:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:06:28 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2404002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:06:29 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:06:29 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2428004620 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:06:29 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:06:29 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2400003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:06:29 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:06:29 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:06:29 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:06:29.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:06:30 np0005604790 nova_compute[252672]: 2026-02-02 10:06:30.091 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:06:30 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:06:30 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:06:30 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:06:30.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:06:30 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v813: 353 pgs: 353 active+clean; 200 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 377 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Feb  2 05:06:30 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:06:30 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2424000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:06:31 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:06:31 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2404003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:06:31 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:06:31 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2428004620 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:06:31 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:06:31 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:06:31 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:06:31.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:06:32 np0005604790 nova_compute[252672]: 2026-02-02 10:06:32.119 252676 INFO nova.compute.manager [None req-94e9e7ae-27f9-4d59-89cc-6e515b63a064 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 987bf707-685e-40f6-9dc2-ff3b606ae75d] Get console output
Feb  2 05:06:32 np0005604790 nova_compute[252672]: 2026-02-02 10:06:32.127 258300 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Feb  2 05:06:32 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 05:06:32 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 05:06:32 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:06:32 np0005604790 podman[263942]: 2026-02-02 10:06:32.366917537 +0000 UTC m=+0.070212149 container health_status 29bea7eb4976451e3dfdb1e9a3aba30b10e64cbce131bf8d09548b4a224f8ee8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Feb  2 05:06:32 np0005604790 nova_compute[252672]: 2026-02-02 10:06:32.401 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:06:32 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:06:32 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:06:32 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:06:32.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:06:32 np0005604790 nova_compute[252672]: 2026-02-02 10:06:32.502 252676 DEBUG oslo_concurrency.lockutils [None req-d3aed4ed-a8f2-4deb-a58f-96e74c166d88 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Acquiring lock "987bf707-685e-40f6-9dc2-ff3b606ae75d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 05:06:32 np0005604790 nova_compute[252672]: 2026-02-02 10:06:32.503 252676 DEBUG oslo_concurrency.lockutils [None req-d3aed4ed-a8f2-4deb-a58f-96e74c166d88 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Lock "987bf707-685e-40f6-9dc2-ff3b606ae75d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 05:06:32 np0005604790 nova_compute[252672]: 2026-02-02 10:06:32.503 252676 DEBUG oslo_concurrency.lockutils [None req-d3aed4ed-a8f2-4deb-a58f-96e74c166d88 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Acquiring lock "987bf707-685e-40f6-9dc2-ff3b606ae75d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 05:06:32 np0005604790 nova_compute[252672]: 2026-02-02 10:06:32.503 252676 DEBUG oslo_concurrency.lockutils [None req-d3aed4ed-a8f2-4deb-a58f-96e74c166d88 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Lock "987bf707-685e-40f6-9dc2-ff3b606ae75d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 05:06:32 np0005604790 nova_compute[252672]: 2026-02-02 10:06:32.504 252676 DEBUG oslo_concurrency.lockutils [None req-d3aed4ed-a8f2-4deb-a58f-96e74c166d88 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Lock "987bf707-685e-40f6-9dc2-ff3b606ae75d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 05:06:32 np0005604790 nova_compute[252672]: 2026-02-02 10:06:32.505 252676 INFO nova.compute.manager [None req-d3aed4ed-a8f2-4deb-a58f-96e74c166d88 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 987bf707-685e-40f6-9dc2-ff3b606ae75d] Terminating instance#033[00m
Feb  2 05:06:32 np0005604790 nova_compute[252672]: 2026-02-02 10:06:32.507 252676 DEBUG nova.compute.manager [None req-d3aed4ed-a8f2-4deb-a58f-96e74c166d88 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 987bf707-685e-40f6-9dc2-ff3b606ae75d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb  2 05:06:32 np0005604790 kernel: tap3053bb96-bd (unregistering): left promiscuous mode
Feb  2 05:06:32 np0005604790 NetworkManager[49024]: <info>  [1770026792.5707] device (tap3053bb96-bd): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb  2 05:06:32 np0005604790 nova_compute[252672]: 2026-02-02 10:06:32.577 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:06:32 np0005604790 ovn_controller[154631]: 2026-02-02T10:06:32Z|00065|binding|INFO|Releasing lport 3053bb96-bda8-4bde-ab6b-d64a2b4bb32a from this chassis (sb_readonly=0)
Feb  2 05:06:32 np0005604790 ovn_controller[154631]: 2026-02-02T10:06:32Z|00066|binding|INFO|Setting lport 3053bb96-bda8-4bde-ab6b-d64a2b4bb32a down in Southbound
Feb  2 05:06:32 np0005604790 ovn_controller[154631]: 2026-02-02T10:06:32Z|00067|binding|INFO|Removing iface tap3053bb96-bd ovn-installed in OVS
Feb  2 05:06:32 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:06:32.585 165364 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d0:42:cd 10.100.0.5'], port_security=['fa:16:3e:d0:42:cd 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '987bf707-685e-40f6-9dc2-ff3b606ae75d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a66b06ac-62ee-43ce-a46e-36641cc6c6b6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'efbfe697ca674d72b47da5adf3e42c0c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '35995cfa-61b9-4083-b048-5f2b7642c470', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.239'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=51a69684-d1b7-4c65-b997-4dcb2e8a8e05, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa8c6e46640>], logical_port=3053bb96-bda8-4bde-ab6b-d64a2b4bb32a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fa8c6e46640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 05:06:32 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:06:32.587 165364 INFO neutron.agent.ovn.metadata.agent [-] Port 3053bb96-bda8-4bde-ab6b-d64a2b4bb32a in datapath a66b06ac-62ee-43ce-a46e-36641cc6c6b6 unbound from our chassis#033[00m
Feb  2 05:06:32 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:06:32.588 165364 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a66b06ac-62ee-43ce-a46e-36641cc6c6b6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb  2 05:06:32 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v814: 353 pgs: 353 active+clean; 200 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 377 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Feb  2 05:06:32 np0005604790 nova_compute[252672]: 2026-02-02 10:06:32.632 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:06:32 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:06:32.631 257524 DEBUG oslo.privsep.daemon [-] privsep: reply[3e8cdbf1-b800-4acf-93cf-e16e1b516a2c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:06:32 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:06:32.633 165364 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a66b06ac-62ee-43ce-a46e-36641cc6c6b6 namespace which is not needed anymore#033[00m
Feb  2 05:06:32 np0005604790 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000005.scope: Deactivated successfully.
Feb  2 05:06:32 np0005604790 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000005.scope: Consumed 13.465s CPU time.
Feb  2 05:06:32 np0005604790 systemd-machined[219024]: Machine qemu-3-instance-00000005 terminated.
Feb  2 05:06:32 np0005604790 nova_compute[252672]: 2026-02-02 10:06:32.745 252676 INFO nova.virt.libvirt.driver [-] [instance: 987bf707-685e-40f6-9dc2-ff3b606ae75d] Instance destroyed successfully.#033[00m
Feb  2 05:06:32 np0005604790 nova_compute[252672]: 2026-02-02 10:06:32.746 252676 DEBUG nova.objects.instance [None req-d3aed4ed-a8f2-4deb-a58f-96e74c166d88 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Lazy-loading 'resources' on Instance uuid 987bf707-685e-40f6-9dc2-ff3b606ae75d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 05:06:32 np0005604790 neutron-haproxy-ovnmeta-a66b06ac-62ee-43ce-a46e-36641cc6c6b6[262406]: [NOTICE]   (262410) : haproxy version is 2.8.14-c23fe91
Feb  2 05:06:32 np0005604790 neutron-haproxy-ovnmeta-a66b06ac-62ee-43ce-a46e-36641cc6c6b6[262406]: [NOTICE]   (262410) : path to executable is /usr/sbin/haproxy
Feb  2 05:06:32 np0005604790 neutron-haproxy-ovnmeta-a66b06ac-62ee-43ce-a46e-36641cc6c6b6[262406]: [WARNING]  (262410) : Exiting Master process...
Feb  2 05:06:32 np0005604790 neutron-haproxy-ovnmeta-a66b06ac-62ee-43ce-a46e-36641cc6c6b6[262406]: [WARNING]  (262410) : Exiting Master process...
Feb  2 05:06:32 np0005604790 neutron-haproxy-ovnmeta-a66b06ac-62ee-43ce-a46e-36641cc6c6b6[262406]: [ALERT]    (262410) : Current worker (262412) exited with code 143 (Terminated)
Feb  2 05:06:32 np0005604790 neutron-haproxy-ovnmeta-a66b06ac-62ee-43ce-a46e-36641cc6c6b6[262406]: [WARNING]  (262410) : All workers exited. Exiting... (0)
Feb  2 05:06:32 np0005604790 systemd[1]: libpod-fce384c9cc71fadf6f6a7a75f094acdb86dacdb1566ef5a7625a0e06e9ec99de.scope: Deactivated successfully.
Feb  2 05:06:32 np0005604790 nova_compute[252672]: 2026-02-02 10:06:32.762 252676 DEBUG nova.virt.libvirt.vif [None req-d3aed4ed-a8f2-4deb-a58f-96e74c166d88 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T10:06:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-436631524',display_name='tempest-TestNetworkBasicOps-server-436631524',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-436631524',id=5,image_ref='d5e062d7-95ef-409c-9ad0-60f7cf6f44ce',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD6QwWRtLBNr+UQfzZxNs6ZP/B/DQvIFD/YTMSNUgPWkHplcARrJygwNu7Ke89LNPkLCTWUicv/Q6AJ2Dn3lPN3cul0jZxrwDYu6LTNn2NgLviv6U0QMXJRYiNuHVK7BLg==',key_name='tempest-TestNetworkBasicOps-1945155213',keypairs=<?>,launch_index=0,launched_at=2026-02-02T10:06:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='efbfe697ca674d72b47da5adf3e42c0c',ramdisk_id='',reservation_id='r-t2c8tszz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='d5e062d7-95ef-409c-9ad0-60f7cf6f44ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-793549693',owner_user_name='tempest-TestNetworkBasicOps-793549693-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T10:06:15Z,user_data=None,user_id='1b1695a2a70d4aa0aa350ba17d8f6d5e',uuid=987bf707-685e-40f6-9dc2-ff3b606ae75d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3053bb96-bda8-4bde-ab6b-d64a2b4bb32a", "address": "fa:16:3e:d0:42:cd", "network": {"id": "a66b06ac-62ee-43ce-a46e-36641cc6c6b6", "bridge": "br-int", "label": "tempest-network-smoke--1210238442", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "efbfe697ca674d72b47da5adf3e42c0c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3053bb96-bd", "ovs_interfaceid": "3053bb96-bda8-4bde-ab6b-d64a2b4bb32a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb  2 05:06:32 np0005604790 nova_compute[252672]: 2026-02-02 10:06:32.763 252676 DEBUG nova.network.os_vif_util [None req-d3aed4ed-a8f2-4deb-a58f-96e74c166d88 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Converting VIF {"id": "3053bb96-bda8-4bde-ab6b-d64a2b4bb32a", "address": "fa:16:3e:d0:42:cd", "network": {"id": "a66b06ac-62ee-43ce-a46e-36641cc6c6b6", "bridge": "br-int", "label": "tempest-network-smoke--1210238442", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "efbfe697ca674d72b47da5adf3e42c0c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3053bb96-bd", "ovs_interfaceid": "3053bb96-bda8-4bde-ab6b-d64a2b4bb32a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 05:06:32 np0005604790 nova_compute[252672]: 2026-02-02 10:06:32.763 252676 DEBUG nova.network.os_vif_util [None req-d3aed4ed-a8f2-4deb-a58f-96e74c166d88 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d0:42:cd,bridge_name='br-int',has_traffic_filtering=True,id=3053bb96-bda8-4bde-ab6b-d64a2b4bb32a,network=Network(a66b06ac-62ee-43ce-a46e-36641cc6c6b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3053bb96-bd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 05:06:32 np0005604790 nova_compute[252672]: 2026-02-02 10:06:32.764 252676 DEBUG os_vif [None req-d3aed4ed-a8f2-4deb-a58f-96e74c166d88 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d0:42:cd,bridge_name='br-int',has_traffic_filtering=True,id=3053bb96-bda8-4bde-ab6b-d64a2b4bb32a,network=Network(a66b06ac-62ee-43ce-a46e-36641cc6c6b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3053bb96-bd') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb  2 05:06:32 np0005604790 nova_compute[252672]: 2026-02-02 10:06:32.765 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:06:32 np0005604790 nova_compute[252672]: 2026-02-02 10:06:32.766 252676 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3053bb96-bd, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 05:06:32 np0005604790 nova_compute[252672]: 2026-02-02 10:06:32.768 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:06:32 np0005604790 podman[263983]: 2026-02-02 10:06:32.768843625 +0000 UTC m=+0.051803982 container died fce384c9cc71fadf6f6a7a75f094acdb86dacdb1566ef5a7625a0e06e9ec99de (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc, name=neutron-haproxy-ovnmeta-a66b06ac-62ee-43ce-a46e-36641cc6c6b6, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb  2 05:06:32 np0005604790 nova_compute[252672]: 2026-02-02 10:06:32.771 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 05:06:32 np0005604790 nova_compute[252672]: 2026-02-02 10:06:32.777 252676 INFO os_vif [None req-d3aed4ed-a8f2-4deb-a58f-96e74c166d88 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d0:42:cd,bridge_name='br-int',has_traffic_filtering=True,id=3053bb96-bda8-4bde-ab6b-d64a2b4bb32a,network=Network(a66b06ac-62ee-43ce-a46e-36641cc6c6b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3053bb96-bd')#033[00m
Feb  2 05:06:32 np0005604790 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fce384c9cc71fadf6f6a7a75f094acdb86dacdb1566ef5a7625a0e06e9ec99de-userdata-shm.mount: Deactivated successfully.
Feb  2 05:06:32 np0005604790 systemd[1]: var-lib-containers-storage-overlay-5f52626465591042b1cfc847674c8ad730cb20d3e8cc9bf45258c3dcf858b388-merged.mount: Deactivated successfully.
Feb  2 05:06:32 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:06:32 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2400003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:06:33 np0005604790 podman[263983]: 2026-02-02 10:06:33.017561978 +0000 UTC m=+0.300522365 container cleanup fce384c9cc71fadf6f6a7a75f094acdb86dacdb1566ef5a7625a0e06e9ec99de (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc, name=neutron-haproxy-ovnmeta-a66b06ac-62ee-43ce-a46e-36641cc6c6b6, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb  2 05:06:33 np0005604790 systemd[1]: libpod-conmon-fce384c9cc71fadf6f6a7a75f094acdb86dacdb1566ef5a7625a0e06e9ec99de.scope: Deactivated successfully.
Feb  2 05:06:33 np0005604790 podman[264042]: 2026-02-02 10:06:33.092018409 +0000 UTC m=+0.054355200 container remove fce384c9cc71fadf6f6a7a75f094acdb86dacdb1566ef5a7625a0e06e9ec99de (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc, name=neutron-haproxy-ovnmeta-a66b06ac-62ee-43ce-a46e-36641cc6c6b6, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb  2 05:06:33 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:06:33.096 257524 DEBUG oslo.privsep.daemon [-] privsep: reply[504bd12f-e7c5-42e2-97ce-404a8656b268]: (4, ('Mon Feb  2 10:06:32 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-a66b06ac-62ee-43ce-a46e-36641cc6c6b6 (fce384c9cc71fadf6f6a7a75f094acdb86dacdb1566ef5a7625a0e06e9ec99de)\nfce384c9cc71fadf6f6a7a75f094acdb86dacdb1566ef5a7625a0e06e9ec99de\nMon Feb  2 10:06:33 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-a66b06ac-62ee-43ce-a46e-36641cc6c6b6 (fce384c9cc71fadf6f6a7a75f094acdb86dacdb1566ef5a7625a0e06e9ec99de)\nfce384c9cc71fadf6f6a7a75f094acdb86dacdb1566ef5a7625a0e06e9ec99de\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:06:33 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:06:33.098 257524 DEBUG oslo.privsep.daemon [-] privsep: reply[749c3b68-f03b-4022-b377-3ea55cf85f58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:06:33 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:06:33.099 165364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa66b06ac-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 05:06:33 np0005604790 kernel: tapa66b06ac-60: left promiscuous mode
Feb  2 05:06:33 np0005604790 nova_compute[252672]: 2026-02-02 10:06:33.102 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:06:33 np0005604790 nova_compute[252672]: 2026-02-02 10:06:33.106 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:06:33 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:06:33.109 257524 DEBUG oslo.privsep.daemon [-] privsep: reply[5da44fe9-cabf-4dd3-8641-24deb1ed0338]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:06:33 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:06:33.123 257524 DEBUG oslo.privsep.daemon [-] privsep: reply[e824c918-810d-4631-9b9b-1465f56eafad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:06:33 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:06:33.124 257524 DEBUG oslo.privsep.daemon [-] privsep: reply[123b8290-fe61-4568-8388-289765b12242]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:06:33 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:06:33.136 257524 DEBUG oslo.privsep.daemon [-] privsep: reply[ab0bedab-559b-4e07-863f-68ad11d2b411]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 393346, 'reachable_time': 22306, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 264057, 'error': None, 'target': 'ovnmeta-a66b06ac-62ee-43ce-a46e-36641cc6c6b6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:06:33 np0005604790 systemd[1]: run-netns-ovnmeta\x2da66b06ac\x2d62ee\x2d43ce\x2da46e\x2d36641cc6c6b6.mount: Deactivated successfully.
Feb  2 05:06:33 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:06:33.139 166028 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a66b06ac-62ee-43ce-a46e-36641cc6c6b6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb  2 05:06:33 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:06:33.140 166028 DEBUG oslo.privsep.daemon [-] privsep: reply[36d4d073-1f59-4986-b6a0-e7f2cfa3460e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:06:33 np0005604790 nova_compute[252672]: 2026-02-02 10:06:33.426 252676 INFO nova.virt.libvirt.driver [None req-d3aed4ed-a8f2-4deb-a58f-96e74c166d88 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 987bf707-685e-40f6-9dc2-ff3b606ae75d] Deleting instance files /var/lib/nova/instances/987bf707-685e-40f6-9dc2-ff3b606ae75d_del#033[00m
Feb  2 05:06:33 np0005604790 nova_compute[252672]: 2026-02-02 10:06:33.427 252676 INFO nova.virt.libvirt.driver [None req-d3aed4ed-a8f2-4deb-a58f-96e74c166d88 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 987bf707-685e-40f6-9dc2-ff3b606ae75d] Deletion of /var/lib/nova/instances/987bf707-685e-40f6-9dc2-ff3b606ae75d_del complete#033[00m
Feb  2 05:06:33 np0005604790 nova_compute[252672]: 2026-02-02 10:06:33.512 252676 INFO nova.compute.manager [None req-d3aed4ed-a8f2-4deb-a58f-96e74c166d88 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 987bf707-685e-40f6-9dc2-ff3b606ae75d] Took 1.00 seconds to destroy the instance on the hypervisor.#033[00m
Feb  2 05:06:33 np0005604790 nova_compute[252672]: 2026-02-02 10:06:33.512 252676 DEBUG oslo.service.loopingcall [None req-d3aed4ed-a8f2-4deb-a58f-96e74c166d88 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb  2 05:06:33 np0005604790 nova_compute[252672]: 2026-02-02 10:06:33.513 252676 DEBUG nova.compute.manager [-] [instance: 987bf707-685e-40f6-9dc2-ff3b606ae75d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb  2 05:06:33 np0005604790 nova_compute[252672]: 2026-02-02 10:06:33.513 252676 DEBUG nova.network.neutron [-] [instance: 987bf707-685e-40f6-9dc2-ff3b606ae75d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb  2 05:06:33 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:06:33 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2424000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:06:33 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:06:33 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2424000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:06:33 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:06:33 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:06:33 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:06:33.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:06:34 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:06:34 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:06:34 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:06:34.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:06:34 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v815: 353 pgs: 353 active+clean; 121 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 397 KiB/s rd, 2.1 MiB/s wr, 91 op/s
Feb  2 05:06:34 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:06:34] "GET /metrics HTTP/1.1" 200 48470 "" "Prometheus/2.51.0"
Feb  2 05:06:34 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:06:34] "GET /metrics HTTP/1.1" 200 48470 "" "Prometheus/2.51.0"
Feb  2 05:06:34 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:06:34 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2428004620 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:06:35 np0005604790 nova_compute[252672]: 2026-02-02 10:06:35.122 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:06:35 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:06:35 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2400003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:06:35 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:06:35 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2424000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:06:35 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:06:35 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:06:35 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:06:35.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:06:36 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:06:36 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:06:36 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:06:36.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:06:36 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v816: 353 pgs: 353 active+clean; 121 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 397 KiB/s rd, 2.1 MiB/s wr, 91 op/s
Feb  2 05:06:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:06:36 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2424000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:06:37 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:06:37.124Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb  2 05:06:37 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:06:37.124Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:06:37 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:06:37 np0005604790 nova_compute[252672]: 2026-02-02 10:06:37.509 252676 DEBUG nova.compute.manager [req-3a09cb23-4bbd-414b-bec9-3ff922f45eb3 req-784a2997-b41b-4e2f-b2f9-e60c00294bfe b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 987bf707-685e-40f6-9dc2-ff3b606ae75d] Received event network-vif-unplugged-3053bb96-bda8-4bde-ab6b-d64a2b4bb32a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 05:06:37 np0005604790 nova_compute[252672]: 2026-02-02 10:06:37.510 252676 DEBUG oslo_concurrency.lockutils [req-3a09cb23-4bbd-414b-bec9-3ff922f45eb3 req-784a2997-b41b-4e2f-b2f9-e60c00294bfe b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Acquiring lock "987bf707-685e-40f6-9dc2-ff3b606ae75d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 05:06:37 np0005604790 nova_compute[252672]: 2026-02-02 10:06:37.510 252676 DEBUG oslo_concurrency.lockutils [req-3a09cb23-4bbd-414b-bec9-3ff922f45eb3 req-784a2997-b41b-4e2f-b2f9-e60c00294bfe b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Lock "987bf707-685e-40f6-9dc2-ff3b606ae75d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 05:06:37 np0005604790 nova_compute[252672]: 2026-02-02 10:06:37.510 252676 DEBUG oslo_concurrency.lockutils [req-3a09cb23-4bbd-414b-bec9-3ff922f45eb3 req-784a2997-b41b-4e2f-b2f9-e60c00294bfe b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Lock "987bf707-685e-40f6-9dc2-ff3b606ae75d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 05:06:37 np0005604790 nova_compute[252672]: 2026-02-02 10:06:37.510 252676 DEBUG nova.compute.manager [req-3a09cb23-4bbd-414b-bec9-3ff922f45eb3 req-784a2997-b41b-4e2f-b2f9-e60c00294bfe b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 987bf707-685e-40f6-9dc2-ff3b606ae75d] No waiting events found dispatching network-vif-unplugged-3053bb96-bda8-4bde-ab6b-d64a2b4bb32a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 05:06:37 np0005604790 nova_compute[252672]: 2026-02-02 10:06:37.510 252676 DEBUG nova.compute.manager [req-3a09cb23-4bbd-414b-bec9-3ff922f45eb3 req-784a2997-b41b-4e2f-b2f9-e60c00294bfe b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 987bf707-685e-40f6-9dc2-ff3b606ae75d] Received event network-vif-unplugged-3053bb96-bda8-4bde-ab6b-d64a2b4bb32a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb  2 05:06:37 np0005604790 nova_compute[252672]: 2026-02-02 10:06:37.769 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:06:37 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:06:37 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24280047c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:06:37 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:06:37 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2400003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:06:37 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:06:37 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:06:37 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:06:37.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:06:38 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:06:38.060 165364 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:4f:4d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '4a:a7:f3:61:65:15'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 05:06:38 np0005604790 nova_compute[252672]: 2026-02-02 10:06:38.061 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:06:38 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:06:38.062 165364 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb  2 05:06:38 np0005604790 nova_compute[252672]: 2026-02-02 10:06:38.281 252676 DEBUG nova.network.neutron [-] [instance: 987bf707-685e-40f6-9dc2-ff3b606ae75d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 05:06:38 np0005604790 nova_compute[252672]: 2026-02-02 10:06:38.302 252676 INFO nova.compute.manager [-] [instance: 987bf707-685e-40f6-9dc2-ff3b606ae75d] Took 4.79 seconds to deallocate network for instance.#033[00m
Feb  2 05:06:38 np0005604790 nova_compute[252672]: 2026-02-02 10:06:38.360 252676 DEBUG oslo_concurrency.lockutils [None req-d3aed4ed-a8f2-4deb-a58f-96e74c166d88 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 05:06:38 np0005604790 nova_compute[252672]: 2026-02-02 10:06:38.361 252676 DEBUG oslo_concurrency.lockutils [None req-d3aed4ed-a8f2-4deb-a58f-96e74c166d88 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 05:06:38 np0005604790 nova_compute[252672]: 2026-02-02 10:06:38.428 252676 DEBUG nova.compute.manager [req-c7dd6ff5-f95c-445a-87cb-3b00b52a139c req-0ced0846-0d35-4429-87cf-e2dc72479ec8 b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 987bf707-685e-40f6-9dc2-ff3b606ae75d] Received event network-vif-deleted-3053bb96-bda8-4bde-ab6b-d64a2b4bb32a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 05:06:38 np0005604790 nova_compute[252672]: 2026-02-02 10:06:38.438 252676 DEBUG oslo_concurrency.processutils [None req-d3aed4ed-a8f2-4deb-a58f-96e74c166d88 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 05:06:38 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:06:38 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:06:38 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:06:38.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:06:38 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v817: 353 pgs: 353 active+clean; 121 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 398 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Feb  2 05:06:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:06:38 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2404003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:06:38 np0005604790 nova_compute[252672]: 2026-02-02 10:06:38.989 252676 DEBUG oslo_concurrency.processutils [None req-d3aed4ed-a8f2-4deb-a58f-96e74c166d88 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 05:06:38 np0005604790 nova_compute[252672]: 2026-02-02 10:06:38.997 252676 DEBUG nova.compute.provider_tree [None req-d3aed4ed-a8f2-4deb-a58f-96e74c166d88 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Inventory has not changed in ProviderTree for provider: 9e3db6fc-2145-4f13-bc7c-f7ae57d4e004 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 05:06:39 np0005604790 nova_compute[252672]: 2026-02-02 10:06:39.022 252676 DEBUG nova.scheduler.client.report [None req-d3aed4ed-a8f2-4deb-a58f-96e74c166d88 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Inventory has not changed for provider 9e3db6fc-2145-4f13-bc7c-f7ae57d4e004 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 05:06:39 np0005604790 nova_compute[252672]: 2026-02-02 10:06:39.058 252676 DEBUG oslo_concurrency.lockutils [None req-d3aed4ed-a8f2-4deb-a58f-96e74c166d88 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.697s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 05:06:39 np0005604790 nova_compute[252672]: 2026-02-02 10:06:39.105 252676 INFO nova.scheduler.client.report [None req-d3aed4ed-a8f2-4deb-a58f-96e74c166d88 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Deleted allocations for instance 987bf707-685e-40f6-9dc2-ff3b606ae75d#033[00m
Feb  2 05:06:39 np0005604790 nova_compute[252672]: 2026-02-02 10:06:39.182 252676 DEBUG oslo_concurrency.lockutils [None req-d3aed4ed-a8f2-4deb-a58f-96e74c166d88 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Lock "987bf707-685e-40f6-9dc2-ff3b606ae75d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.679s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 05:06:39 np0005604790 nova_compute[252672]: 2026-02-02 10:06:39.600 252676 DEBUG nova.compute.manager [req-6a113c71-3630-441a-9a72-81c189aec398 req-71da6306-4d68-4cd6-8663-6b5d9fc434df b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 987bf707-685e-40f6-9dc2-ff3b606ae75d] Received event network-vif-plugged-3053bb96-bda8-4bde-ab6b-d64a2b4bb32a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 05:06:39 np0005604790 nova_compute[252672]: 2026-02-02 10:06:39.601 252676 DEBUG oslo_concurrency.lockutils [req-6a113c71-3630-441a-9a72-81c189aec398 req-71da6306-4d68-4cd6-8663-6b5d9fc434df b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Acquiring lock "987bf707-685e-40f6-9dc2-ff3b606ae75d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 05:06:39 np0005604790 nova_compute[252672]: 2026-02-02 10:06:39.601 252676 DEBUG oslo_concurrency.lockutils [req-6a113c71-3630-441a-9a72-81c189aec398 req-71da6306-4d68-4cd6-8663-6b5d9fc434df b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Lock "987bf707-685e-40f6-9dc2-ff3b606ae75d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 05:06:39 np0005604790 nova_compute[252672]: 2026-02-02 10:06:39.602 252676 DEBUG oslo_concurrency.lockutils [req-6a113c71-3630-441a-9a72-81c189aec398 req-71da6306-4d68-4cd6-8663-6b5d9fc434df b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Lock "987bf707-685e-40f6-9dc2-ff3b606ae75d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 05:06:39 np0005604790 nova_compute[252672]: 2026-02-02 10:06:39.602 252676 DEBUG nova.compute.manager [req-6a113c71-3630-441a-9a72-81c189aec398 req-71da6306-4d68-4cd6-8663-6b5d9fc434df b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 987bf707-685e-40f6-9dc2-ff3b606ae75d] No waiting events found dispatching network-vif-plugged-3053bb96-bda8-4bde-ab6b-d64a2b4bb32a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 05:06:39 np0005604790 nova_compute[252672]: 2026-02-02 10:06:39.602 252676 WARNING nova.compute.manager [req-6a113c71-3630-441a-9a72-81c189aec398 req-71da6306-4d68-4cd6-8663-6b5d9fc434df b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 987bf707-685e-40f6-9dc2-ff3b606ae75d] Received unexpected event network-vif-plugged-3053bb96-bda8-4bde-ab6b-d64a2b4bb32a for instance with vm_state deleted and task_state None.#033[00m
Feb  2 05:06:39 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:06:39 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f240c001ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:06:39 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:06:39 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2428004870 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:06:39 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:06:40 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:06:40 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:06:39.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:06:40 np0005604790 nova_compute[252672]: 2026-02-02 10:06:40.127 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:06:40 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:06:40 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:06:40 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:06:40.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:06:40 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v818: 353 pgs: 353 active+clean; 121 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Feb  2 05:06:40 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:06:40 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2400003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:06:41 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:06:41 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2404003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:06:41 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:06:41 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f240c001ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:06:42 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:06:42 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:06:42 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:06:42.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:06:42 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:06:42 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:06:42 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:06:42 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:06:42.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:06:42 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v819: 353 pgs: 353 active+clean; 121 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Feb  2 05:06:42 np0005604790 nova_compute[252672]: 2026-02-02 10:06:42.805 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:06:42 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:06:42 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2428004890 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:06:43 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:06:43 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2400003c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:06:43 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:06:43 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2404003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:06:44 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:06:44 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:06:44 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:06:44.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:06:44 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:06:44.065 165364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=031ca08d-19ea-44b4-b1bd-33ab088eb6a6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 05:06:44 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:06:44 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:06:44 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:06:44.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:06:44 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v820: 353 pgs: 353 active+clean; 41 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 2.3 KiB/s wr, 56 op/s
Feb  2 05:06:44 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:06:44] "GET /metrics HTTP/1.1" 200 48470 "" "Prometheus/2.51.0"
Feb  2 05:06:44 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:06:44] "GET /metrics HTTP/1.1" 200 48470 "" "Prometheus/2.51.0"
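The mgr-container line and the cherrypy access line above record the same scrape twice, once from the container's stdout and once through ceph-mgr's own logger; Prometheus 2.51.0 pulls roughly 48 KB of metrics every ten seconds. A hedged fetch of the same endpoint; port 9283 is the prometheus module's usual default and is an assumption here, since the log shows only the path and byte count:

    import urllib.request

    # ceph-mgr prometheus module endpoint; the port is assumed, not logged.
    URL = "http://192.168.122.100:9283/metrics"

    with urllib.request.urlopen(URL, timeout=5) as resp:
        body = resp.read().decode()

    # Count samples vs. comment lines as a quick sanity check.
    samples = [l for l in body.splitlines() if l and not l.startswith("#")]
    print(len(samples), "samples,", len(body), "bytes")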
Feb  2 05:06:44 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:06:44 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f240c001ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:06:45 np0005604790 nova_compute[252672]: 2026-02-02 10:06:45.174 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:06:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:06:45.378 165364 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 05:06:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:06:45.378 165364 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 05:06:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:06:45.379 165364 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
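The acquire/acquired/released triple is oslo.concurrency's lockutils tracing neutron's ProcessMonitor._check_child_processes at DEBUG level (waited 0.001s, held 0.000s). The same serialization pattern as a minimal sketch; the decorator is real oslo.concurrency API, the function body is illustrative only:

    from oslo_concurrency import lockutils

    # Callers of this function are serialized on the named lock, and the
    # library emits the acquire/wait/hold DEBUG lines seen in the journal.
    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        pass   # the real method verifies and respawns child processes

    check_child_processes()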
Feb  2 05:06:45 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:06:45 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24280048b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:06:45 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:06:45 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2400003c50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:06:46 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:06:46 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:06:46 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:06:46.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:06:46 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:06:46 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:06:46 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:06:46.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:06:46 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v821: 353 pgs: 353 active+clean; 41 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.3 KiB/s wr, 29 op/s
Feb  2 05:06:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:06:46 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2404003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:06:47 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:06:47.125Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
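Alertmanager logs in logfmt; the error above means the ceph-dashboard webhook receiver gave up after two retries against compute-1 and compute-2 on port 8443, where the dashboard API is not answering. A tokenizer sufficient for these lines (not a full logfmt parser):

    import re

    # key=value pairs, values optionally double-quoted.
    TOKEN = re.compile(r'(\w+)=("(?:[^"\\]|\\.)*"|\S+)')

    line = ('ts=2026-02-02T10:06:47.125Z caller=dispatch.go:352 level=error '
            'component=dispatcher msg="Notify for alerts failed" num_alerts=1')
    fields = {k: v.strip('"') for k, v in TOKEN.findall(line)}
    print(fields["level"], "-", fields["msg"])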
Feb  2 05:06:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 05:06:47 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 05:06:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:06:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:06:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:06:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:06:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:06:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:06:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
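The monitor reports the same cache split every five seconds; in binary units, cache_size 1020054731 B is about 972.8 MiB, inc_alloc and full_alloc are exactly 332 MiB, and kv_alloc is exactly 304 MiB. The conversion, for reference:

    # Monitor cache figures from the line above, converted to MiB.
    MiB = 1 << 20
    for name, nbytes in {
        "cache_size": 1020054731,
        "inc_alloc": 348127232,
        "full_alloc": 348127232,
        "kv_alloc": 318767104,
    }.items():
        print(f"{name}: {nbytes / MiB:.1f} MiB")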
Feb  2 05:06:47 np0005604790 nova_compute[252672]: 2026-02-02 10:06:47.742 252676 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1770026792.7410448, 987bf707-685e-40f6-9dc2-ff3b606ae75d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 05:06:47 np0005604790 nova_compute[252672]: 2026-02-02 10:06:47.743 252676 INFO nova.compute.manager [-] [instance: 987bf707-685e-40f6-9dc2-ff3b606ae75d] VM Stopped (Lifecycle Event)#033[00m
Feb  2 05:06:47 np0005604790 nova_compute[252672]: 2026-02-02 10:06:47.766 252676 DEBUG nova.compute.manager [None req-6bb92137-252b-49e5-8749-f03dc48adbb0 - - - - - -] [instance: 987bf707-685e-40f6-9dc2-ff3b606ae75d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 05:06:47 np0005604790 nova_compute[252672]: 2026-02-02 10:06:47.827 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:06:47 np0005604790 nova_compute[252672]: 2026-02-02 10:06:47.829 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:06:47 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:06:47 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f240c001ff0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:06:47 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:06:47 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2404003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:06:48 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:06:48 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:06:48 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:06:48.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:06:48 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:06:48 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:06:48 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:06:48.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:06:48 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v822: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.3 KiB/s wr, 29 op/s
Feb  2 05:06:48 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:06:48 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24280048d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:06:49 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:06:49 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2400003c90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:06:49 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:06:49 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f240c0036a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:06:50 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:06:50 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:06:50 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:06:50.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:06:50 np0005604790 nova_compute[252672]: 2026-02-02 10:06:50.232 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:06:50 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:06:50 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:06:50 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:06:50.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:06:50 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v823: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Feb  2 05:06:50 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:06:50 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2404003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:06:51 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:06:51 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24280048f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:06:51 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:06:51 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2400003cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:06:52 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:06:52 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:06:52 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:06:52.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:06:52 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:06:52 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:06:52 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:06:52 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:06:52.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:06:52 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v824: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
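The pgmap digest arrives every two seconds, and all 353 PGs stay active+clean throughout this window; only the data/used and throughput figures move. A sketch extracting the moving parts, with the layout taken from the lines above:

    import re

    PGMAP = re.compile(
        r"pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: .*?; "
        r"(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, "
        r"(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail"
    )

    line = ("pgmap v824: 353 pgs: 353 active+clean; 41 MiB data, "
            "255 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s")
    m = PGMAP.search(line)
    print(m.group("ver"), m.group("data"), m.group("used"))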
Feb  2 05:06:52 np0005604790 nova_compute[252672]: 2026-02-02 10:06:52.831 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:06:52 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:06:52 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f240c0036a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:06:53 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:06:53 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2404003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:06:53 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:06:53 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2428004910 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:06:54 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:06:54 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:06:54 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:06:54.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:06:54 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:06:54 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:06:54 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:06:54.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:06:54 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v825: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Feb  2 05:06:54 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:06:54] "GET /metrics HTTP/1.1" 200 48443 "" "Prometheus/2.51.0"
Feb  2 05:06:54 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:06:54] "GET /metrics HTTP/1.1" 200 48443 "" "Prometheus/2.51.0"
Feb  2 05:06:54 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:06:54 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2400003cd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:06:55 np0005604790 nova_compute[252672]: 2026-02-02 10:06:55.254 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:06:55 np0005604790 podman[264130]: 2026-02-02 10:06:55.399469117 +0000 UTC m=+0.104858336 container health_status e39122d9482da8df802204ab6c35fb7c982874580968140e6f06bdfc8eefae36 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible)
Feb  2 05:06:55 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:06:55 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f240c0036a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:06:55 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:06:55 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2404003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:06:56 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:06:56 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:06:56 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:06:56.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:06:56 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:06:56 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:06:56 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:06:56.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:06:56 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v826: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 05:06:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:06:56 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2428004930 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:06:57 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:06:57.126Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb  2 05:06:57 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:06:57.126Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb  2 05:06:57 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:06:57.127Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:06:57 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:06:57 np0005604790 nova_compute[252672]: 2026-02-02 10:06:57.869 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:06:57 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:06:57 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23f8001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:06:57 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:06:57 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f240c0036a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:06:58 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:06:58 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:06:58 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:06:58.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:06:58 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:06:58 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:06:58 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:06:58.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:06:58 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v827: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:06:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:06:58 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2404003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:06:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:06:59 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2428004930 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:06:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:06:59 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23f8001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:07:00 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:07:00 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:07:00 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:07:00.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:07:00 np0005604790 nova_compute[252672]: 2026-02-02 10:07:00.284 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:07:00 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:07:00 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:07:00 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:07:00.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:07:00 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v828: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 05:07:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:07:00 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f240c0036a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:07:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:07:01 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2404003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:07:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:07:01 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2428004950 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:07:02 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:07:02 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:07:02 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:07:02.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:07:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 05:07:02 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
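The mgr polls the monitor for the OSD blocklist roughly every fifteen seconds (the audit line shows the dispatch from mgr.compute-0.djvyfo). The equivalent query from the CLI, sketched under the assumption that this host has a reachable cluster and an authorized keyring:

    import json
    import subprocess

    # Same monitor command the mgr dispatches above.
    out = subprocess.run(
        ["ceph", "osd", "blocklist", "ls", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    entries = json.loads(out) if out.strip() else []
    print(len(entries), "blocklist entries")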
Feb  2 05:07:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:07:02 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:07:02 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:07:02 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:07:02.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:07:02 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v829: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 05:07:02 np0005604790 nova_compute[252672]: 2026-02-02 10:07:02.872 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:07:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:07:02 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23f8001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:07:03 np0005604790 podman[264192]: 2026-02-02 10:07:03.342420214 +0000 UTC m=+0.059329491 container health_status 29bea7eb4976451e3dfdb1e9a3aba30b10e64cbce131bf8d09548b4a224f8ee8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb  2 05:07:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:07:03 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f240c0036a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:07:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:07:03 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2404003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:07:04 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:07:04 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:07:04 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:07:04.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:07:04 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:07:04 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:07:04 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:07:04.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:07:04 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v830: 353 pgs: 353 active+clean; 88 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb  2 05:07:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:07:04] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Feb  2 05:07:04 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:07:04] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Feb  2 05:07:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:07:04 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2428004970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:07:05 np0005604790 nova_compute[252672]: 2026-02-02 10:07:05.329 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:07:05 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:07:05 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23f8002940 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:07:05 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:07:05 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f240c0036a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:07:06 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:07:06 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:07:06 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:07:06.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:07:06 np0005604790 nova_compute[252672]: 2026-02-02 10:07:06.282 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 05:07:06 np0005604790 nova_compute[252672]: 2026-02-02 10:07:06.328 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 05:07:06 np0005604790 nova_compute[252672]: 2026-02-02 10:07:06.329 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 05:07:06 np0005604790 nova_compute[252672]: 2026-02-02 10:07:06.330 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 05:07:06 np0005604790 nova_compute[252672]: 2026-02-02 10:07:06.331 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb  2 05:07:06 np0005604790 nova_compute[252672]: 2026-02-02 10:07:06.331 252676 DEBUG oslo_concurrency.processutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 05:07:06 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:07:06 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:07:06 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:07:06.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:07:06 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v831: 353 pgs: 353 active+clean; 88 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb  2 05:07:06 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 05:07:06 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1616455594' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb  2 05:07:06 np0005604790 nova_compute[252672]: 2026-02-02 10:07:06.901 252676 DEBUG oslo_concurrency.processutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
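Nova's resource tracker shells out to ceph df (0.570 s here) to size the RBD-backed disk inventory; the --id and --conf arguments match the audit line the monitor logs for client.openstack. A hedged re-run and parse, using the top-level stats keys ceph df is known to emit:

    import json
    import subprocess

    # The command nova runs above, verbatim from the log line.
    cmd = ["ceph", "df", "--format=json", "--id", "openstack",
           "--conf", "/etc/ceph/ceph.conf"]
    df = json.loads(subprocess.run(cmd, check=True, capture_output=True,
                                   text=True).stdout)

    stats = df["stats"]
    print(f'{stats["total_avail_bytes"] / (1 << 30):.1f} GiB available')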
Feb  2 05:07:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:07:06 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2404003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:07:07 np0005604790 nova_compute[252672]: 2026-02-02 10:07:07.087 252676 WARNING nova.virt.libvirt.driver [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 05:07:07 np0005604790 nova_compute[252672]: 2026-02-02 10:07:07.088 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4557MB free_disk=59.967525482177734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb  2 05:07:07 np0005604790 nova_compute[252672]: 2026-02-02 10:07:07.089 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 05:07:07 np0005604790 nova_compute[252672]: 2026-02-02 10:07:07.089 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 05:07:07 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:07:07.128Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:07:07 np0005604790 nova_compute[252672]: 2026-02-02 10:07:07.167 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb  2 05:07:07 np0005604790 nova_compute[252672]: 2026-02-02 10:07:07.168 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb  2 05:07:07 np0005604790 nova_compute[252672]: 2026-02-02 10:07:07.181 252676 DEBUG oslo_concurrency.processutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 05:07:07 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:07:07 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 05:07:07 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1821679544' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb  2 05:07:07 np0005604790 nova_compute[252672]: 2026-02-02 10:07:07.610 252676 DEBUG oslo_concurrency.processutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 05:07:07 np0005604790 nova_compute[252672]: 2026-02-02 10:07:07.616 252676 DEBUG nova.compute.provider_tree [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Inventory has not changed in ProviderTree for provider: 9e3db6fc-2145-4f13-bc7c-f7ae57d4e004 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 05:07:07 np0005604790 nova_compute[252672]: 2026-02-02 10:07:07.634 252676 DEBUG nova.scheduler.client.report [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Inventory has not changed for provider 9e3db6fc-2145-4f13-bc7c-f7ae57d4e004 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 05:07:07 np0005604790 nova_compute[252672]: 2026-02-02 10:07:07.657 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb  2 05:07:07 np0005604790 nova_compute[252672]: 2026-02-02 10:07:07.658 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.569s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
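The inventory nova reported to placement two lines up fixes the schedulable capacity: placement treats capacity as (total - reserved) * allocation_ratio, which works out to 32 VCPU, 7167 MB of RAM, and 52.2 GB of disk for this node. The arithmetic:

    # Capacity implied by the inventory dict above:
    #   capacity = (total - reserved) * allocation_ratio
    inventory = {
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, cap)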
Feb  2 05:07:07 np0005604790 nova_compute[252672]: 2026-02-02 10:07:07.875 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:07:07 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:07:07 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2428004990 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:07:07 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:07:07 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2428004990 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:07:08 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:07:08 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:07:08 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:07:08.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:07:08 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:07:08 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:07:08 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:07:08.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:07:08 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v832: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Feb  2 05:07:08 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:07:08 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f240c0036a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:07:09 np0005604790 nova_compute[252672]: 2026-02-02 10:07:09.658 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 05:07:09 np0005604790 nova_compute[252672]: 2026-02-02 10:07:09.658 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 05:07:09 np0005604790 nova_compute[252672]: 2026-02-02 10:07:09.658 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb  2 05:07:09 np0005604790 nova_compute[252672]: 2026-02-02 10:07:09.659 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb  2 05:07:09 np0005604790 nova_compute[252672]: 2026-02-02 10:07:09.671 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb  2 05:07:09 np0005604790 nova_compute[252672]: 2026-02-02 10:07:09.672 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 05:07:09 np0005604790 nova_compute[252672]: 2026-02-02 10:07:09.672 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 05:07:09 np0005604790 nova_compute[252672]: 2026-02-02 10:07:09.672 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 05:07:09 np0005604790 nova_compute[252672]: 2026-02-02 10:07:09.672 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 05:07:09 np0005604790 nova_compute[252672]: 2026-02-02 10:07:09.672 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb  2 05:07:09 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:07:09 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2404003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:07:09 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:07:09 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2424000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:07:10 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:07:10 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:07:10 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:07:10.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:07:10 np0005604790 nova_compute[252672]: 2026-02-02 10:07:10.282 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 05:07:10 np0005604790 nova_compute[252672]: 2026-02-02 10:07:10.332 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:07:10 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:07:10 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:07:10 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:07:10.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:07:10 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v833: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Feb  2 05:07:10 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:07:10 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23f8002940 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:07:11 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:07:11 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f240c0036a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:07:11 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:07:11 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2404003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:07:12 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:07:12 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:07:12 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:07:12.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:07:12 np0005604790 nova_compute[252672]: 2026-02-02 10:07:12.282 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 05:07:12 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:07:12 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:07:12 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:07:12 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:07:12.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:07:12 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v834: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Feb  2 05:07:12 np0005604790 nova_compute[252672]: 2026-02-02 10:07:12.878 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:07:12 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:07:12 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2424000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:07:13 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:07:13 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23f8002940 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:07:13 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:07:13 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2424000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:07:14 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:07:14 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:07:14 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:07:14.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:07:14 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:07:14 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:07:14 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:07:14.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:07:14 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v835: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Feb  2 05:07:14 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:07:14] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Feb  2 05:07:14 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:07:14] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Feb  2 05:07:14 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:07:14 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f240c0036a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:07:15 np0005604790 nova_compute[252672]: 2026-02-02 10:07:15.369 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:07:15 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:07:15 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2404003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:07:15 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:07:15 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23f8002940 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:07:16 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:07:16 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:07:16 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:07:16.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:07:16 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:07:16 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:07:16 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:07:16.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:07:16 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v836: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Feb  2 05:07:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:07:16 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23f8002940 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:07:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:07:17.129Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb  2 05:07:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:07:17.129Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
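Both ceph-dashboard webhook receivers are failing here: compute-1 with a TCP i/o timeout and compute-2 with a context deadline, so Alertmanager gives up after its two attempts. A minimal reachability check for the two endpoints named in the log (hosts and port taken from the error text):

```python
import socket

# Probe the two webhook receivers that Alertmanager cannot reach above.
RECEIVERS = [
    ('compute-1.ctlplane.example.com', 8443),
    ('compute-2.ctlplane.example.com', 8443),
]

for host, port in RECEIVERS:
    try:
        # A plain TCP connect mirrors the "dial tcp ... i/o timeout" failure.
        with socket.create_connection((host, port), timeout=5):
            print(f'{host}:{port} reachable')
    except OSError as exc:
        print(f'{host}:{port} unreachable: {exc}')
```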
Feb  2 05:07:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] Optimize plan auto_2026-02-02_10:07:17
Feb  2 05:07:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 05:07:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] do_upmap
Feb  2 05:07:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] pools ['backups', 'default.rgw.meta', 'default.rgw.log', 'vms', 'images', '.mgr', 'volumes', '.nfs', '.rgw.root', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.control']
Feb  2 05:07:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] prepared 0/10 upmap changes
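This balancer pass ran in upmap mode with a misplaced ceiling of 0.05 and prepared 0 of at most 10 upmap changes, i.e. the 353-PG cluster is already balanced; the /10 cap matches the default of the mgr upmap_max_optimizations option (an assumption, since the option itself is not shown here). A rough illustration of the misplaced budget:

```python
# Rough upmap budget implied by "Mode upmap, max misplaced 0.050000".
# 353 PGs comes from the pgmap lines; this illustrates the ceiling,
# not the mgr's exact accounting.
total_pgs = 353
max_misplaced = 0.05
print(int(total_pgs * max_misplaced))  # at most ~17 PGs in motion at once
```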
Feb  2 05:07:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 05:07:17 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 05:07:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:07:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:07:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:07:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:07:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:07:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:07:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:07:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 05:07:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 05:07:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 05:07:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 05:07:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 05:07:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 05:07:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 05:07:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:07:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 05:07:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:07:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00034841348814872695 of space, bias 1.0, pg target 0.10452404644461809 quantized to 32 (current 32)
Feb  2 05:07:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:07:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:07:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:07:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:07:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:07:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Feb  2 05:07:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:07:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Feb  2 05:07:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:07:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:07:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:07:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 05:07:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:07:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Feb  2 05:07:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:07:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:07:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:07:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 05:07:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:07:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
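In each pg_autoscaler pair above, the raw pg target is the pool's capacity ratio times its bias times a cluster-wide PG budget; the logged values are consistent with a budget of 300 PGs (for example mon_target_pg_per_osd=100 across 3 OSDs, an assumption since the OSD count is not in this excerpt). The target is then quantized to a power of two and clamped by minimums and the autoscaler's change damping before it is compared with the current pg_num, which is why a tiny raw target can still show "quantized to 32 (current 32)". A worked sketch reproducing the raw targets:

```python
# Reproduce the raw pg targets logged by pg_autoscaler above.
# PG_BUDGET = 300 is an assumption (e.g. mon_target_pg_per_osd=100 * 3 OSDs).
# quantize_pow2 shows only the power-of-two step, without the autoscaler's
# extra minimums and its damping against small changes.
PG_BUDGET = 300

def raw_pg_target(capacity_ratio, bias):
    return capacity_ratio * bias * PG_BUDGET

def quantize_pow2(target):
    n = 1
    while n < target:
        n *= 2
    return n  # smallest power of two >= target, floored at 1

print(raw_pg_target(7.185749983720779e-06, 1.0))   # 0.0021557249951162337 ('.mgr')
print(raw_pg_target(0.00034841348814872695, 1.0))  # 0.10452404644461809 ('vms')
print(quantize_pow2(raw_pg_target(7.185749983720779e-06, 1.0)))  # 1, as logged
```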
Feb  2 05:07:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 05:07:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 05:07:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 05:07:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 05:07:17 np0005604790 nova_compute[252672]: 2026-02-02 10:07:17.881 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:07:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:07:17 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f240c0036a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:07:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:07:17 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2404003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:07:18 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:07:18 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:07:18 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:07:18.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:07:18 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:07:18 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.005000133s ======
Feb  2 05:07:18 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:07:18.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.005000133s
Feb  2 05:07:18 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v837: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Feb  2 05:07:18 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:07:18 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2404003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:07:19 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:07:19 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2404003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:07:19 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:07:19 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f240c0036a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:07:20 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:07:20 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:07:20 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:07:20.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:07:20 np0005604790 nova_compute[252672]: 2026-02-02 10:07:20.372 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:07:20 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:07:20 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:07:20 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:07:20.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:07:20 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v838: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 68 op/s
Feb  2 05:07:20 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:07:20 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23f8004310 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:07:21 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:07:21 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23f8004310 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:07:21 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:07:21 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23f8004310 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:07:22 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:07:22 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:07:22 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:07:22.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:07:22 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:07:22 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:07:22 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:07:22 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:07:22.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:07:22 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v839: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 68 op/s
Feb  2 05:07:22 np0005604790 nova_compute[252672]: 2026-02-02 10:07:22.883 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:07:22 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:07:22 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f240c0036a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:07:23 np0005604790 ovn_controller[154631]: 2026-02-02T10:07:23Z|00068|memory_trim|INFO|Detected inactivity (last active 30009 ms ago): trimming memory
Feb  2 05:07:23 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:07:23 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2404003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:07:23 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:07:23 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24240094f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:07:24 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:07:24 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:07:24 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:07:24.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:07:24 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:07:24 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:07:24 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:07:24.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:07:24 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v840: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 132 op/s
Feb  2 05:07:24 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:07:24] "GET /metrics HTTP/1.1" 200 48474 "" "Prometheus/2.51.0"
Feb  2 05:07:24 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:07:24] "GET /metrics HTTP/1.1" 200 48474 "" "Prometheus/2.51.0"
Feb  2 05:07:24 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:07:24 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23f8004310 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:07:25 np0005604790 nova_compute[252672]: 2026-02-02 10:07:25.373 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:07:25 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 05:07:25 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 05:07:25 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 05:07:25 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb  2 05:07:25 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 05:07:25 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:07:25 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Feb  2 05:07:25 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:07:25 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 05:07:25 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb  2 05:07:25 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 05:07:25 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb  2 05:07:25 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 05:07:25 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
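The ceph-mon burst above is cephadm's periodic refresh: the mgr replays config, auth and OSD-tree queries, and the leader logs each one twice, once as handle_command and once on the audit channel (the two config-key set lines appear without their cmd payloads). A sketch tallying audited commands by prefix:

```python
import re
from collections import Counter

# Tally audited mon commands by prefix, based on the
# log_channel(audit) lines above.
AUDIT_RE = re.compile(r'log_channel\(audit\).*cmd=\[\{"prefix": "(?P<prefix>[^"]+)"')

def audit_prefix_counts(lines):
    counts = Counter()
    for line in lines:
        m = AUDIT_RE.search(line)
        if m:
            counts[m.group('prefix')] += 1
    return counts

# e.g. Counter({'config generate-minimal-conf': 2, 'auth get': 2, ...})
```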
Feb  2 05:07:25 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:07:25 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f240c0036a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:07:25 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:07:25 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2404003c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:07:26 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:07:26 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:07:26 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:07:26.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:07:26 np0005604790 podman[264413]: 2026-02-02 10:07:26.084583877 +0000 UTC m=+0.124631750 container health_status e39122d9482da8df802204ab6c35fb7c982874580968140e6f06bdfc8eefae36 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
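The podman health_status line above reports the ovn_controller healthcheck as healthy with a failing streak of 0. The same state can be read back with podman inspect; a sketch, noting that the template key has moved between podman releases, so treat it as an assumption:

```python
import json
import subprocess

# Read back the ovn_controller healthcheck state podman logged above.
# '.State.Health' is the Docker-compatible key on newer podman; older
# releases used '.State.Healthcheck' -- adjust if inspect returns null.
out = subprocess.run(
    ['podman', 'inspect', '--format', '{{json .State.Health}}', 'ovn_controller'],
    capture_output=True, text=True, check=True,
).stdout
health = json.loads(out)
print(health.get('Status'), health.get('FailingStreak'))
```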
Feb  2 05:07:26 np0005604790 podman[264505]: 2026-02-02 10:07:26.488292273 +0000 UTC m=+0.091459972 container create 71f645ac4d38894acaad1b948aeffc54b05bbbf8e6153c807015b9fd4c6dd594 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_mahavira, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 05:07:26 np0005604790 podman[264505]: 2026-02-02 10:07:26.420628442 +0000 UTC m=+0.023796201 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:07:26 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:07:26 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:07:26 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:07:26.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:07:26 np0005604790 systemd[1]: Started libpod-conmon-71f645ac4d38894acaad1b948aeffc54b05bbbf8e6153c807015b9fd4c6dd594.scope.
Feb  2 05:07:26 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:07:26 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v841: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb  2 05:07:26 np0005604790 podman[264505]: 2026-02-02 10:07:26.667747922 +0000 UTC m=+0.270915661 container init 71f645ac4d38894acaad1b948aeffc54b05bbbf8e6153c807015b9fd4c6dd594 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_mahavira, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 05:07:26 np0005604790 podman[264505]: 2026-02-02 10:07:26.675456886 +0000 UTC m=+0.278624565 container start 71f645ac4d38894acaad1b948aeffc54b05bbbf8e6153c807015b9fd4c6dd594 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_mahavira, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 05:07:26 np0005604790 podman[264505]: 2026-02-02 10:07:26.679416561 +0000 UTC m=+0.282584260 container attach 71f645ac4d38894acaad1b948aeffc54b05bbbf8e6153c807015b9fd4c6dd594 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_mahavira, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2)
Feb  2 05:07:26 np0005604790 silly_mahavira[264521]: 167 167
Feb  2 05:07:26 np0005604790 systemd[1]: libpod-71f645ac4d38894acaad1b948aeffc54b05bbbf8e6153c807015b9fd4c6dd594.scope: Deactivated successfully.
Feb  2 05:07:26 np0005604790 podman[264505]: 2026-02-02 10:07:26.685218205 +0000 UTC m=+0.288385864 container died 71f645ac4d38894acaad1b948aeffc54b05bbbf8e6153c807015b9fd4c6dd594 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_mahavira, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb  2 05:07:26 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb  2 05:07:26 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:07:26 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:07:26 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb  2 05:07:26 np0005604790 systemd[1]: var-lib-containers-storage-overlay-de8a1eaf5da28a250e2786d24c9ac687092fba000f486e85e41d09fc99545d09-merged.mount: Deactivated successfully.
Feb  2 05:07:26 np0005604790 podman[264505]: 2026-02-02 10:07:26.732589588 +0000 UTC m=+0.335757267 container remove 71f645ac4d38894acaad1b948aeffc54b05bbbf8e6153c807015b9fd4c6dd594 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Feb  2 05:07:26 np0005604790 systemd[1]: libpod-conmon-71f645ac4d38894acaad1b948aeffc54b05bbbf8e6153c807015b9fd4c6dd594.scope: Deactivated successfully.
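The create, init, start, attach, died, remove sequence for silly_mahavira (and for the containers that follow) is the footprint of cephadm running short-lived probe containers from the ceph image, each alive for well under a second. A sketch reconstructing those lifetimes from the podman journal lines (message layout assumed from the samples above):

```python
import re
from datetime import datetime

# Reconstruct short-lived container lifetimes from the podman journal
# lines above; the message layout (timestamp, monotonic offset, event,
# 64-hex container id) is assumed from these samples.
EVENT_RE = re.compile(
    r'podman\[\d+\]: (?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) '
    r'\+0000 UTC m=\+[\d.]+ container (?P<event>\w+) (?P<cid>[0-9a-f]{64})'
)

def container_lifetimes(lines):
    """Map container id -> seconds between its start and died events."""
    starts, lifetimes = {}, {}
    for line in lines:
        m = EVENT_RE.search(line)
        if not m:
            continue
        # Truncate to microseconds so strptime accepts the 9-digit fraction.
        ts = datetime.strptime(m.group('ts')[:26], '%Y-%m-%d %H:%M:%S.%f')
        cid = m.group('cid')
        if m.group('event') == 'start':
            starts[cid] = ts
        elif m.group('event') == 'died' and cid in starts:
            lifetimes[cid] = (ts - starts.pop(cid)).total_seconds()
    return lifetimes
```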
Feb  2 05:07:26 np0005604790 podman[264547]: 2026-02-02 10:07:26.887152049 +0000 UTC m=+0.051433352 container create 143d71d0a5b3e799944ed0071aab8c4ce719b37b55af21030776b0cc308ed6c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_lalande, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 05:07:26 np0005604790 systemd[1]: Started libpod-conmon-143d71d0a5b3e799944ed0071aab8c4ce719b37b55af21030776b0cc308ed6c3.scope.
Feb  2 05:07:26 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:07:26 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/317e493b6e27884b9450fabb3c8e7fbcb6da681bfa472b1989734e89633a673f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 05:07:26 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/317e493b6e27884b9450fabb3c8e7fbcb6da681bfa472b1989734e89633a673f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 05:07:26 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/317e493b6e27884b9450fabb3c8e7fbcb6da681bfa472b1989734e89633a673f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 05:07:26 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/317e493b6e27884b9450fabb3c8e7fbcb6da681bfa472b1989734e89633a673f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 05:07:26 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/317e493b6e27884b9450fabb3c8e7fbcb6da681bfa472b1989734e89633a673f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 05:07:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:07:26 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2404003c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:07:26 np0005604790 podman[264547]: 2026-02-02 10:07:26.865089105 +0000 UTC m=+0.029370498 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:07:26 np0005604790 podman[264547]: 2026-02-02 10:07:26.96236019 +0000 UTC m=+0.126641523 container init 143d71d0a5b3e799944ed0071aab8c4ce719b37b55af21030776b0cc308ed6c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Feb  2 05:07:26 np0005604790 podman[264547]: 2026-02-02 10:07:26.970096585 +0000 UTC m=+0.134377888 container start 143d71d0a5b3e799944ed0071aab8c4ce719b37b55af21030776b0cc308ed6c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_lalande, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 05:07:26 np0005604790 podman[264547]: 2026-02-02 10:07:26.973178036 +0000 UTC m=+0.137459339 container attach 143d71d0a5b3e799944ed0071aab8c4ce719b37b55af21030776b0cc308ed6c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_lalande, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Feb  2 05:07:27 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:07:27.130Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:07:27 np0005604790 serene_lalande[264563]: --> passed data devices: 0 physical, 1 LVM
Feb  2 05:07:27 np0005604790 serene_lalande[264563]: --> All data devices are unavailable
Feb  2 05:07:27 np0005604790 systemd[1]: libpod-143d71d0a5b3e799944ed0071aab8c4ce719b37b55af21030776b0cc308ed6c3.scope: Deactivated successfully.
Feb  2 05:07:27 np0005604790 podman[264547]: 2026-02-02 10:07:27.30166681 +0000 UTC m=+0.465948113 container died 143d71d0a5b3e799944ed0071aab8c4ce719b37b55af21030776b0cc308ed6c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 05:07:27 np0005604790 systemd[1]: var-lib-containers-storage-overlay-317e493b6e27884b9450fabb3c8e7fbcb6da681bfa472b1989734e89633a673f-merged.mount: Deactivated successfully.
Feb  2 05:07:27 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:07:27 np0005604790 podman[264547]: 2026-02-02 10:07:27.355062164 +0000 UTC m=+0.519343507 container remove 143d71d0a5b3e799944ed0071aab8c4ce719b37b55af21030776b0cc308ed6c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_lalande, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 05:07:27 np0005604790 systemd[1]: libpod-conmon-143d71d0a5b3e799944ed0071aab8c4ce719b37b55af21030776b0cc308ed6c3.scope: Deactivated successfully.
Feb  2 05:07:27 np0005604790 nova_compute[252672]: 2026-02-02 10:07:27.885 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:07:27 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:07:27 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23f8004310 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:07:27 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:07:27 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f240c0036a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:07:28 np0005604790 podman[264685]: 2026-02-02 10:07:28.017102955 +0000 UTC m=+0.055323165 container create 47b09009e692f0a21e73e4218c719c7011f143675005f1f830b767683fcd9e31 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb  2 05:07:28 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:07:28 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:07:28 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:07:28.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:07:28 np0005604790 systemd[1]: Started libpod-conmon-47b09009e692f0a21e73e4218c719c7011f143675005f1f830b767683fcd9e31.scope.
Feb  2 05:07:28 np0005604790 podman[264685]: 2026-02-02 10:07:27.995330299 +0000 UTC m=+0.033550549 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:07:28 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:07:28 np0005604790 podman[264685]: 2026-02-02 10:07:28.115301114 +0000 UTC m=+0.153521334 container init 47b09009e692f0a21e73e4218c719c7011f143675005f1f830b767683fcd9e31 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_dewdney, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Feb  2 05:07:28 np0005604790 podman[264685]: 2026-02-02 10:07:28.122593607 +0000 UTC m=+0.160813847 container start 47b09009e692f0a21e73e4218c719c7011f143675005f1f830b767683fcd9e31 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_dewdney, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb  2 05:07:28 np0005604790 podman[264685]: 2026-02-02 10:07:28.126811769 +0000 UTC m=+0.165031989 container attach 47b09009e692f0a21e73e4218c719c7011f143675005f1f830b767683fcd9e31 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_dewdney, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Feb  2 05:07:28 np0005604790 friendly_dewdney[264702]: 167 167
Feb  2 05:07:28 np0005604790 systemd[1]: libpod-47b09009e692f0a21e73e4218c719c7011f143675005f1f830b767683fcd9e31.scope: Deactivated successfully.
Feb  2 05:07:28 np0005604790 podman[264685]: 2026-02-02 10:07:28.129825129 +0000 UTC m=+0.168045379 container died 47b09009e692f0a21e73e4218c719c7011f143675005f1f830b767683fcd9e31 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_dewdney, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Feb  2 05:07:28 np0005604790 systemd[1]: var-lib-containers-storage-overlay-36fda9fa75d0affcebf44164d86e97106a0f952260be3de0a182ed7f51ab8afe-merged.mount: Deactivated successfully.
Feb  2 05:07:28 np0005604790 podman[264685]: 2026-02-02 10:07:28.172101218 +0000 UTC m=+0.210321458 container remove 47b09009e692f0a21e73e4218c719c7011f143675005f1f830b767683fcd9e31 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_dewdney, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 05:07:28 np0005604790 systemd[1]: libpod-conmon-47b09009e692f0a21e73e4218c719c7011f143675005f1f830b767683fcd9e31.scope: Deactivated successfully.
Feb  2 05:07:28 np0005604790 podman[264728]: 2026-02-02 10:07:28.346883534 +0000 UTC m=+0.045692471 container create 9c0898e0e19c65875de465bdafae64d62d05ffb07d154c8addabd87d6000b45c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_brown, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 05:07:28 np0005604790 systemd[1]: Started libpod-conmon-9c0898e0e19c65875de465bdafae64d62d05ffb07d154c8addabd87d6000b45c.scope.
Feb  2 05:07:28 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:07:28 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d258a0ae7ccb66251a77f9863c6907e6ddff41f9d0660668a1f6fda65e6ac93/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 05:07:28 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d258a0ae7ccb66251a77f9863c6907e6ddff41f9d0660668a1f6fda65e6ac93/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 05:07:28 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d258a0ae7ccb66251a77f9863c6907e6ddff41f9d0660668a1f6fda65e6ac93/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 05:07:28 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d258a0ae7ccb66251a77f9863c6907e6ddff41f9d0660668a1f6fda65e6ac93/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 05:07:28 np0005604790 podman[264728]: 2026-02-02 10:07:28.33088145 +0000 UTC m=+0.029690407 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:07:28 np0005604790 podman[264728]: 2026-02-02 10:07:28.443420369 +0000 UTC m=+0.142229306 container init 9c0898e0e19c65875de465bdafae64d62d05ffb07d154c8addabd87d6000b45c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_brown, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 05:07:28 np0005604790 podman[264728]: 2026-02-02 10:07:28.451529043 +0000 UTC m=+0.150337980 container start 9c0898e0e19c65875de465bdafae64d62d05ffb07d154c8addabd87d6000b45c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_brown, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 05:07:28 np0005604790 podman[264728]: 2026-02-02 10:07:28.45555282 +0000 UTC m=+0.154361787 container attach 9c0898e0e19c65875de465bdafae64d62d05ffb07d154c8addabd87d6000b45c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_brown, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 05:07:28 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:07:28 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:07:28 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:07:28.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
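Note: the three radosgw lines above are one request as seen by the beast frontend: request start, request completion (op status, HTTP status, latency), and the access-log line with client address, user, timestamp, request line, status, and byte count. A minimal sketch of pulling structured fields out of such access-log lines (the field layout is inferred from the samples in this log, not from a documented radosgw format):

    import re

    # Field layout inferred from the beast samples in this log (an assumption):
    # addr, user, [timestamp], "request", status, bytes, ..., latency.
    BEAST_RE = re.compile(
        r'beast: \S+: (?P<addr>\S+) - (?P<user>\S+) '
        r'\[(?P<time>[^\]]+)\] "(?P<request>[^"]+)" '
        r'(?P<status>\d+) (?P<bytes>\d+) .* latency=(?P<latency>[\d.]+)s'
    )

    line = ('beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous '
            '[02/Feb/2026:10:07:28.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.001000026s')
    m = BEAST_RE.search(line)
    if m:
        print(m.group('addr'), m.group('status'), m.group('latency'))
        # -> 192.168.122.102 200 0.001000026

The anonymous "HEAD /" probes repeating every ~2 seconds from 192.168.122.100 and .102 look like load-balancer health checks rather than client traffic.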
Feb  2 05:07:28 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v842: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Feb  2 05:07:28 np0005604790 interesting_brown[264744]: {
Feb  2 05:07:28 np0005604790 interesting_brown[264744]:    "1": [
Feb  2 05:07:28 np0005604790 interesting_brown[264744]:        {
Feb  2 05:07:28 np0005604790 interesting_brown[264744]:            "devices": [
Feb  2 05:07:28 np0005604790 interesting_brown[264744]:                "/dev/loop3"
Feb  2 05:07:28 np0005604790 interesting_brown[264744]:            ],
Feb  2 05:07:28 np0005604790 interesting_brown[264744]:            "lv_name": "ceph_lv0",
Feb  2 05:07:28 np0005604790 interesting_brown[264744]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 05:07:28 np0005604790 interesting_brown[264744]:            "lv_size": "21470642176",
Feb  2 05:07:28 np0005604790 interesting_brown[264744]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d241d473-9fcb-5f74-b163-f1ca4454e7f1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fabfc705-a3af-416c-81a4-3fd4d777fb5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 05:07:28 np0005604790 interesting_brown[264744]:            "lv_uuid": "lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a",
Feb  2 05:07:28 np0005604790 interesting_brown[264744]:            "name": "ceph_lv0",
Feb  2 05:07:28 np0005604790 interesting_brown[264744]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 05:07:28 np0005604790 interesting_brown[264744]:            "tags": {
Feb  2 05:07:28 np0005604790 interesting_brown[264744]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 05:07:28 np0005604790 interesting_brown[264744]:                "ceph.block_uuid": "lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a",
Feb  2 05:07:28 np0005604790 interesting_brown[264744]:                "ceph.cephx_lockbox_secret": "",
Feb  2 05:07:28 np0005604790 interesting_brown[264744]:                "ceph.cluster_fsid": "d241d473-9fcb-5f74-b163-f1ca4454e7f1",
Feb  2 05:07:28 np0005604790 interesting_brown[264744]:                "ceph.cluster_name": "ceph",
Feb  2 05:07:28 np0005604790 interesting_brown[264744]:                "ceph.crush_device_class": "",
Feb  2 05:07:28 np0005604790 interesting_brown[264744]:                "ceph.encrypted": "0",
Feb  2 05:07:28 np0005604790 interesting_brown[264744]:                "ceph.osd_fsid": "fabfc705-a3af-416c-81a4-3fd4d777fb5f",
Feb  2 05:07:28 np0005604790 interesting_brown[264744]:                "ceph.osd_id": "1",
Feb  2 05:07:28 np0005604790 interesting_brown[264744]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 05:07:28 np0005604790 interesting_brown[264744]:                "ceph.type": "block",
Feb  2 05:07:28 np0005604790 interesting_brown[264744]:                "ceph.vdo": "0",
Feb  2 05:07:28 np0005604790 interesting_brown[264744]:                "ceph.with_tpm": "0"
Feb  2 05:07:28 np0005604790 interesting_brown[264744]:            },
Feb  2 05:07:28 np0005604790 interesting_brown[264744]:            "type": "block",
Feb  2 05:07:28 np0005604790 interesting_brown[264744]:            "vg_name": "ceph_vg0"
Feb  2 05:07:28 np0005604790 interesting_brown[264744]:        }
Feb  2 05:07:28 np0005604790 interesting_brown[264744]:    ]
Feb  2 05:07:28 np0005604790 interesting_brown[264744]: }
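Note: the JSON printed by the interesting_brown container has the shape of `ceph-volume lvm list --format json` output, as run by cephadm's periodic device scan: a map of OSD id to the logical volumes backing it, with LVM tags carrying the identity ceph-volume needs to reassemble the OSD (cluster fsid, osd fsid, block device, encryption flag). A minimal sketch of reading the fields a triage script would usually want (assumes the JSON has been captured to a string; the trimmed sample mirrors the block above):

    import json

    # raw = captured `ceph-volume lvm list --format json` output, trimmed here
    raw = '''{"1": [{"lv_path": "/dev/ceph_vg0/ceph_lv0",
                     "devices": ["/dev/loop3"],
                     "type": "block",
                     "tags": {"ceph.osd_fsid": "fabfc705-a3af-416c-81a4-3fd4d777fb5f",
                              "ceph.cluster_fsid": "d241d473-9fcb-5f74-b163-f1ca4454e7f1",
                              "ceph.encrypted": "0"}}]}'''

    for osd_id, lvs in json.loads(raw).items():
        for lv in lvs:
            tags = lv.get("tags", {})
            print(f"osd.{osd_id}: {lv['type']} on {lv['lv_path']} "
                  f"(devices={','.join(lv['devices'])}, "
                  f"osd_fsid={tags.get('ceph.osd_fsid')}, "
                  f"encrypted={tags.get('ceph.encrypted') == '1'})")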
Feb  2 05:07:28 np0005604790 systemd[1]: libpod-9c0898e0e19c65875de465bdafae64d62d05ffb07d154c8addabd87d6000b45c.scope: Deactivated successfully.
Feb  2 05:07:28 np0005604790 podman[264728]: 2026-02-02 10:07:28.776568146 +0000 UTC m=+0.475377083 container died 9c0898e0e19c65875de465bdafae64d62d05ffb07d154c8addabd87d6000b45c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Feb  2 05:07:28 np0005604790 systemd[1]: var-lib-containers-storage-overlay-9d258a0ae7ccb66251a77f9863c6907e6ddff41f9d0660668a1f6fda65e6ac93-merged.mount: Deactivated successfully.
Feb  2 05:07:28 np0005604790 podman[264728]: 2026-02-02 10:07:28.826017775 +0000 UTC m=+0.524826752 container remove 9c0898e0e19c65875de465bdafae64d62d05ffb07d154c8addabd87d6000b45c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 05:07:28 np0005604790 systemd[1]: libpod-conmon-9c0898e0e19c65875de465bdafae64d62d05ffb07d154c8addabd87d6000b45c.scope: Deactivated successfully.
Feb  2 05:07:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:07:28 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f240c0036a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
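Note: the recurring TIRPC "proxy header rest len failed ... (will set dead)" events are consistent with ganesha expecting a PROXY protocol preamble on accepted connections (as haproxy's send-proxy-v2 would provide) while something, most likely haproxy's bare-TCP health checks, connects without sending one; the stray '%' is an unexpanded format specifier in ganesha's own log message. For reference, a minimal sketch of the PROXY v2 framing such a listener expects (addresses and ports illustrative):

    import socket
    import struct

    SIG = b"\r\n\r\n\x00\r\nQUIT\n"   # fixed 12-byte PROXY v2 signature

    def proxy_v2_header(src_ip, dst_ip, src_port, dst_port):
        # IPv4 address block: src addr, dst addr, src port, dst port
        addrs = (socket.inet_aton(src_ip) + socket.inet_aton(dst_ip)
                 + struct.pack("!HH", src_port, dst_port))
        # 0x21 = version 2 / command PROXY; 0x11 = TCP over IPv4
        return SIG + bytes([0x21, 0x11]) + struct.pack("!H", len(addrs)) + addrs

    hdr = proxy_v2_header("192.168.122.100", "192.168.122.102", 40000, 2049)
    assert len(hdr) == 16 + 12   # 16-byte preamble + IPv4 address block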
Feb  2 05:07:29 np0005604790 podman[264858]: 2026-02-02 10:07:29.368811842 +0000 UTC m=+0.037544595 container create b939d7d678ac0837d7f2c3999d0f6d00345f30c12f5043f79df96f4deefaa60a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_bartik, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb  2 05:07:29 np0005604790 systemd[1]: Started libpod-conmon-b939d7d678ac0837d7f2c3999d0f6d00345f30c12f5043f79df96f4deefaa60a.scope.
Feb  2 05:07:29 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:07:29 np0005604790 podman[264858]: 2026-02-02 10:07:29.355570341 +0000 UTC m=+0.024303124 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:07:29 np0005604790 podman[264858]: 2026-02-02 10:07:29.465667815 +0000 UTC m=+0.134400598 container init b939d7d678ac0837d7f2c3999d0f6d00345f30c12f5043f79df96f4deefaa60a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid)
Feb  2 05:07:29 np0005604790 podman[264858]: 2026-02-02 10:07:29.475374092 +0000 UTC m=+0.144106885 container start b939d7d678ac0837d7f2c3999d0f6d00345f30c12f5043f79df96f4deefaa60a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_bartik, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb  2 05:07:29 np0005604790 podman[264858]: 2026-02-02 10:07:29.479140482 +0000 UTC m=+0.147873255 container attach b939d7d678ac0837d7f2c3999d0f6d00345f30c12f5043f79df96f4deefaa60a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_bartik, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 05:07:29 np0005604790 modest_bartik[264875]: 167 167
Feb  2 05:07:29 np0005604790 systemd[1]: libpod-b939d7d678ac0837d7f2c3999d0f6d00345f30c12f5043f79df96f4deefaa60a.scope: Deactivated successfully.
Feb  2 05:07:29 np0005604790 podman[264858]: 2026-02-02 10:07:29.483278031 +0000 UTC m=+0.152010824 container died b939d7d678ac0837d7f2c3999d0f6d00345f30c12f5043f79df96f4deefaa60a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_bartik, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 05:07:29 np0005604790 systemd[1]: var-lib-containers-storage-overlay-8e7bdb84331e39dd68756c1b79f896045858091781cc0418c81e605a03d49f43-merged.mount: Deactivated successfully.
Feb  2 05:07:29 np0005604790 podman[264858]: 2026-02-02 10:07:29.531735694 +0000 UTC m=+0.200468447 container remove b939d7d678ac0837d7f2c3999d0f6d00345f30c12f5043f79df96f4deefaa60a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_bartik, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 05:07:29 np0005604790 systemd[1]: libpod-conmon-b939d7d678ac0837d7f2c3999d0f6d00345f30c12f5043f79df96f4deefaa60a.scope: Deactivated successfully.
Feb  2 05:07:29 np0005604790 podman[264899]: 2026-02-02 10:07:29.678446537 +0000 UTC m=+0.049336227 container create 5edad7b1fbbd9bdc2e19fb5d94b4e703579b954f791d4c46ef75e51f9353a5af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_wilbur, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 05:07:29 np0005604790 systemd[1]: Started libpod-conmon-5edad7b1fbbd9bdc2e19fb5d94b4e703579b954f791d4c46ef75e51f9353a5af.scope.
Feb  2 05:07:29 np0005604790 podman[264899]: 2026-02-02 10:07:29.654252586 +0000 UTC m=+0.025142356 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:07:29 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:07:29 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fd72bf9fc13638d2fa704f3321090a24647477fd57da5c9b6c096403f7ded87/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 05:07:29 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fd72bf9fc13638d2fa704f3321090a24647477fd57da5c9b6c096403f7ded87/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 05:07:29 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fd72bf9fc13638d2fa704f3321090a24647477fd57da5c9b6c096403f7ded87/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 05:07:29 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fd72bf9fc13638d2fa704f3321090a24647477fd57da5c9b6c096403f7ded87/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 05:07:29 np0005604790 podman[264899]: 2026-02-02 10:07:29.774872249 +0000 UTC m=+0.145762019 container init 5edad7b1fbbd9bdc2e19fb5d94b4e703579b954f791d4c46ef75e51f9353a5af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_wilbur, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325)
Feb  2 05:07:29 np0005604790 podman[264899]: 2026-02-02 10:07:29.785269844 +0000 UTC m=+0.156159534 container start 5edad7b1fbbd9bdc2e19fb5d94b4e703579b954f791d4c46ef75e51f9353a5af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_wilbur, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Feb  2 05:07:29 np0005604790 podman[264899]: 2026-02-02 10:07:29.789764233 +0000 UTC m=+0.160653923 container attach 5edad7b1fbbd9bdc2e19fb5d94b4e703579b954f791d4c46ef75e51f9353a5af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 05:07:29 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:07:29 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2404003c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:07:29 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:07:29 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23f8004310 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:07:30 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:07:30 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:07:30 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:07:30.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:07:30 np0005604790 nova_compute[252672]: 2026-02-02 10:07:30.376 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:07:30 np0005604790 lvm[264991]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 05:07:30 np0005604790 lvm[264991]: VG ceph_vg0 finished
Feb  2 05:07:30 np0005604790 recursing_wilbur[264916]: {}
Feb  2 05:07:30 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:07:30 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:07:30 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:07:30.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:07:30 np0005604790 systemd[1]: libpod-5edad7b1fbbd9bdc2e19fb5d94b4e703579b954f791d4c46ef75e51f9353a5af.scope: Deactivated successfully.
Feb  2 05:07:30 np0005604790 systemd[1]: libpod-5edad7b1fbbd9bdc2e19fb5d94b4e703579b954f791d4c46ef75e51f9353a5af.scope: Consumed 1.152s CPU time.
Feb  2 05:07:30 np0005604790 podman[264994]: 2026-02-02 10:07:30.62915803 +0000 UTC m=+0.025938688 container died 5edad7b1fbbd9bdc2e19fb5d94b4e703579b954f791d4c46ef75e51f9353a5af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_wilbur, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 05:07:30 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v843: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Feb  2 05:07:30 np0005604790 systemd[1]: var-lib-containers-storage-overlay-4fd72bf9fc13638d2fa704f3321090a24647477fd57da5c9b6c096403f7ded87-merged.mount: Deactivated successfully.
Feb  2 05:07:30 np0005604790 podman[264994]: 2026-02-02 10:07:30.902099204 +0000 UTC m=+0.298879832 container remove 5edad7b1fbbd9bdc2e19fb5d94b4e703579b954f791d4c46ef75e51f9353a5af (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_wilbur, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 05:07:30 np0005604790 systemd[1]: libpod-conmon-5edad7b1fbbd9bdc2e19fb5d94b4e703579b954f791d4c46ef75e51f9353a5af.scope: Deactivated successfully.
Feb  2 05:07:30 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:07:30 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24000010b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:07:30 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 05:07:31 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:07:31 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 05:07:31 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:07:31 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:07:31 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f240c0036a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:07:31 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:07:31 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2404003c50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:07:32 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:07:32 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:07:32 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:07:32.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:07:32 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:07:32 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:07:32 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 05:07:32 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 05:07:32 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
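Note: the monitor's _set_new_cache_sizes lines show the cache autotuner's budget in bytes: roughly 973 MiB total, 332 MiB figures for the map caches, and 304 MiB for the RocksDB key-value cache; the kv_alloc figure matches the 304.00 MB block-cache capacity reported in the RocksDB stats dump further down. A quick conversion sketch (the parsing is ad hoc for this exact line shape):

    MiB = 1 << 20

    line = ("_set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 "
            "full_alloc: 348127232 kv_alloc: 318767104")
    fields = {k: int(v) for k, v in
              (part.split(":") for part in line.replace(": ", ":").split()[1:])}
    for name, value in fields.items():
        print(f"{name}: {value / MiB:.1f} MiB")
    # cache_size: 972.8 MiB, inc_alloc/full_alloc: 332.0 MiB, kv_alloc: 304.0 MiB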
Feb  2 05:07:32 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:07:32 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:07:32 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:07:32.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:07:32 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v844: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Feb  2 05:07:32 np0005604790 nova_compute[252672]: 2026-02-02 10:07:32.891 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:07:32 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:07:32 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23f8004310 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:07:33 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:07:33 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24000010b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:07:33 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:07:33 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f240c0036a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:07:34 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:07:34 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:07:34 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:07:34.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:07:34 np0005604790 podman[265038]: 2026-02-02 10:07:34.361509655 +0000 UTC m=+0.067656392 container health_status 29bea7eb4976451e3dfdb1e9a3aba30b10e64cbce131bf8d09548b4a224f8ee8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127)
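Note: the ovn_metadata_agent health check passes (health_status=healthy), and its config_data label embeds the full container definition as a Python literal (single-quoted keys, bare True), so a captured copy can be turned back into a dict with ast.literal_eval for triage. Extracting the substring from the log line is left ad hoc here, and the keys below are just a trimmed sample:

    import ast

    config_data = ("{'cgroupns': 'host', 'net': 'host', 'pid': 'host', "
                   "'privileged': True, 'restart': 'always', 'user': 'root'}")
    cfg = ast.literal_eval(config_data)   # literals only; no code execution
    print(cfg["privileged"], cfg["restart"])   # -> True always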
Feb  2 05:07:34 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:07:34 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:07:34 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:07:34.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:07:34 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v845: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Feb  2 05:07:34 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:07:34] "GET /metrics HTTP/1.1" 200 48477 "" "Prometheus/2.51.0"
Feb  2 05:07:34 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:07:34] "GET /metrics HTTP/1.1" 200 48477 "" "Prometheus/2.51.0"
Feb  2 05:07:34 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:07:34 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2404003c70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:07:35 np0005604790 nova_compute[252672]: 2026-02-02 10:07:35.416 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:07:35 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:07:35 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f23f8004310 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:07:35 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:07:35 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f24000010b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:07:36 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:07:36 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:07:36 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:07:36.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:07:36 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:07:36 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:07:36 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:07:36.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:07:36 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v846: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 16 KiB/s wr, 1 op/s
Feb  2 05:07:36 np0005604790 kernel: ganesha.nfsd[264065]: segfault at 50 ip 00007f24afd3532e sp 00007f24197f9210 error 4 in libntirpc.so.5.8[7f24afd1a000+2c000] likely on CPU 1 (core 0, socket 1)
Feb  2 05:07:36 np0005604790 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
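Note: the segfault line reports the faulting RIP inside libntirpc.so.5.8 and dumps the surrounding machine code; the byte the kernel brackets (<45>) is the start of the instruction that faulted. Decoding those bytes with the third-party capstone bindings (an assumption; not part of this host's tooling) shows a load from [r13 + 0x50], which together with "segfault at 50" indicates r13 was NULL, i.e. a null structure pointer dereference:

    from capstone import Cs, CS_ARCH_X86, CS_MODE_64   # pip install capstone

    # Bytes from the kernel "Code:" dump, starting at the bracketed <45>
    code = bytes.fromhex("458b6550498b7568418bbe28020000b940000000e829")

    md = Cs(CS_ARCH_X86, CS_MODE_64)
    for insn in md.disasm(code, 0x7f24afd3532e):       # RIP from the log line
        print(f"{insn.address:#x}: {insn.mnemonic} {insn.op_str}")
    # 0x7f24afd3532e: mov r12d, dword ptr [r13 + 0x50]  <- faults when r13 == 0
    # 0x7f24afd35332: mov rsi, qword ptr [r13 + 0x68]
    # 0x7f24afd35336: mov edi, dword ptr [r14 + 0x228]
    # 0x7f24afd3533d: mov ecx, 0x40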
Feb  2 05:07:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[260462]: 02/02/2026 10:07:36 : epoch 698076ae : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f240c0036a0 fd 38 proxy ignored for local
Feb  2 05:07:36 np0005604790 systemd[1]: Started Process Core Dump (PID 265059/UID 0).
Feb  2 05:07:37 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:07:37.131Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:07:37 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:07:37 np0005604790 nova_compute[252672]: 2026-02-02 10:07:37.894 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:07:37 np0005604790 systemd-coredump[265060]: Process 260466 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 63:#012#0  0x00007f24afd3532e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Feb  2 05:07:38 np0005604790 systemd[1]: systemd-coredump@11-265059-0.service: Deactivated successfully.
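Note: systemd-coredump caught the crash of ganesha.nfsd PID 260466 (thread 63) but could only resolve one frame without debuginfo. On the host, the captured core and its metadata can be pulled back with the standard coredumpctl tool; a sketch (whether the core is still retrievable depends on systemd-coredump storage and retention settings):

    import subprocess

    PID = "260466"   # ganesha.nfsd, per the systemd-coredump record above
    # Metadata: signal, timestamp, and a symbolized backtrace if debuginfo exists
    subprocess.run(["coredumpctl", "info", PID], check=False)
    # Extract the core file for offline gdb analysis
    subprocess.run(["coredumpctl", "dump", "-o", "ganesha.core", PID],
                   check=False)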
Feb  2 05:07:38 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:07:38 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:07:38 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:07:38.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:07:38 np0005604790 podman[265067]: 2026-02-02 10:07:38.066211008 +0000 UTC m=+0.036256201 container died 98cf6dae1ab8feec23d34379e4cd365c1f4e26263e73f93cb34bc5dd2a59d411 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Feb  2 05:07:38 np0005604790 systemd[1]: var-lib-containers-storage-overlay-c9561751e4623e3e7b37c64f494d5a2bba79e6d84e8b5460a4dee8f04c63918b-merged.mount: Deactivated successfully.
Feb  2 05:07:38 np0005604790 podman[265067]: 2026-02-02 10:07:38.449590414 +0000 UTC m=+0.419635577 container remove 98cf6dae1ab8feec23d34379e4cd365c1f4e26263e73f93cb34bc5dd2a59d411 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 05:07:38 np0005604790 systemd[1]: ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1@nfs.cephfs.2.0.compute-0.fdwwab.service: Main process exited, code=exited, status=139/n/a
Feb  2 05:07:38 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:07:38 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:07:38 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:07:38.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:07:38 np0005604790 systemd[1]: ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1@nfs.cephfs.2.0.compute-0.fdwwab.service: Failed with result 'exit-code'.
Feb  2 05:07:38 np0005604790 systemd[1]: ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1@nfs.cephfs.2.0.compute-0.fdwwab.service: Consumed 1.488s CPU time.
Feb  2 05:07:38 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v847: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 16 KiB/s wr, 1 op/s
Feb  2 05:07:39 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:07:39.744 165364 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:4f:4d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '4a:a7:f3:61:65:15'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 05:07:39 np0005604790 nova_compute[252672]: 2026-02-02 10:07:39.746 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:07:39 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:07:39.745 165364 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb  2 05:07:40 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:07:40 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:07:40 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:07:40.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:07:40 np0005604790 nova_compute[252672]: 2026-02-02 10:07:40.418 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:07:40 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:07:40 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:07:40 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:07:40.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:07:40 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v848: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 4.7 KiB/s wr, 0 op/s
Feb  2 05:07:42 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:07:42 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:07:42 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:07:42.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:07:42 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:07:42 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:07:42 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:07:42 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:07:42.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:07:42 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v849: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 4.7 KiB/s wr, 0 op/s
Feb  2 05:07:42 np0005604790 nova_compute[252672]: 2026-02-02 10:07:42.897 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:07:42 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-nfs-cephfs-compute-0-ooxkuo[97796]: [WARNING] 032/100742 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
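Note: haproxy marks backend server nfs.cephfs.2 DOWN via a Layer4 check: with the ganesha unit dead, nothing listens on the backend port, so the TCP connect is refused immediately (hence "check duration: 0ms"). The same style of check, sketched (host and port illustrative):

    import socket

    def layer4_check(host, port, timeout=2.0):
        # haproxy-style Layer4 check: healthy == TCP handshake completes
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError as exc:   # ECONNREFUSED, timeout, unreachable, ...
            print(f"{host}:{port} DOWN ({exc})")
            return False

    layer4_check("192.168.122.100", 2049)   # NFS port; values illustrative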
Feb  2 05:07:44 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:07:44 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:07:44 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:07:44.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:07:44 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:07:44 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:07:44 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:07:44.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:07:44 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v850: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 5.7 KiB/s wr, 1 op/s
Feb  2 05:07:44 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:07:44.748 165364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=031ca08d-19ea-44b4-b1bd-33ab088eb6a6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 05:07:44 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:07:44] "GET /metrics HTTP/1.1" 200 48477 "" "Prometheus/2.51.0"
Feb  2 05:07:44 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:07:44] "GET /metrics HTTP/1.1" 200 48477 "" "Prometheus/2.51.0"
Feb  2 05:07:44 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-nfs-cephfs-compute-0-ooxkuo[97796]: [WARNING] 032/100744 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Feb  2 05:07:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:07:45.379 165364 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 05:07:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:07:45.380 165364 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 05:07:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:07:45.380 165364 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 05:07:45 np0005604790 nova_compute[252672]: 2026-02-02 10:07:45.421 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:07:45 np0005604790 ceph-mon[74489]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb  2 05:07:45 np0005604790 ceph-mon[74489]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.0 total, 600.0 interval#012Cumulative writes: 5758 writes, 25K keys, 5758 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.03 MB/s#012Cumulative WAL: 5758 writes, 5758 syncs, 1.00 writes per sync, written: 0.05 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1560 writes, 6634 keys, 1560 commit groups, 1.0 writes per commit group, ingest: 11.28 MB, 0.02 MB/s#012Interval WAL: 1560 writes, 1560 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     64.7      0.62              0.10        14    0.044       0      0       0.0       0.0#012  L6      1/0   11.79 MB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   4.0    114.3     97.3      1.67              0.45        13    0.128     67K   6910       0.0       0.0#012 Sum      1/0   11.79 MB   0.0      0.2     0.0      0.1       0.2      0.1       0.0   5.0     83.3     88.5      2.29              0.55        27    0.085     67K   6910       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.0    101.0     99.8      0.71              0.21        10    0.071     29K   2565       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   0.0    114.3     97.3      1.67              0.45        13    0.128     67K   6910       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     65.2      0.61              0.10        13    0.047       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      9.7      0.01              0.00         1    0.006       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1800.0 total, 600.0 interval#012Flush(GB): cumulative 0.039, interval 0.009#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.20 GB write, 0.11 MB/s write, 0.19 GB read, 0.11 MB/s read, 2.3 seconds#012Interval compaction: 0.07 GB write, 0.12 MB/s write, 0.07 GB read, 0.12 MB/s read, 0.7 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5630b94e5350#2 capacity: 304.00 MB usage: 15.36 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.000233 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(816,14.82 MB,4.87386%) FilterBlock(28,202.42 KB,0.0650255%) IndexBlock(28,353.83 KB,0.113663%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
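Note: the RocksDB stats dump above arrives as one giant line because rsyslog escapes embedded newlines as #012 (octal for LF). A short loop restores the original multi-line layout for reading (the file path is illustrative):

    # Undo rsyslog's control-character escaping so the stats table is readable
    with open("/var/log/messages") as fh:
        for line in fh:
            if "rocksdb:" in line and "#012" in line:
                print(line.replace("#012", "\n"), end="")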
Feb  2 05:07:46 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:07:46 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:07:46 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:07:46.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:07:46 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:07:46 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:07:46 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:07:46.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:07:46 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v851: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1023 B/s wr, 0 op/s
Feb  2 05:07:47 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:07:47.133Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:07:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 05:07:47 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 05:07:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:07:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:07:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:07:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:07:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:07:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:07:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:07:47 np0005604790 nova_compute[252672]: 2026-02-02 10:07:47.933 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:07:48 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:07:48 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:07:48 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:07:48.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:07:48 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:07:48 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:07:48 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:07:48.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:07:48 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v852: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 6.3 KiB/s wr, 1 op/s
Feb  2 05:07:48 np0005604790 systemd[1]: ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1@nfs.cephfs.2.0.compute-0.fdwwab.service: Scheduled restart job, restart counter is at 12.
Feb  2 05:07:48 np0005604790 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.fdwwab for d241d473-9fcb-5f74-b163-f1ca4454e7f1.
Feb  2 05:07:48 np0005604790 systemd[1]: ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1@nfs.cephfs.2.0.compute-0.fdwwab.service: Consumed 1.488s CPU time.
Feb  2 05:07:48 np0005604790 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.fdwwab for d241d473-9fcb-5f74-b163-f1ca4454e7f1...
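
[editor's note] systemd is restarting the ganesha unit for the twelfth time here; the counter it logs is also queryable as the NRestarts unit property (present on RHEL 9's systemd). A sketch:

    import subprocess

    unit = ("ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1"
            "@nfs.cephfs.2.0.compute-0.fdwwab.service")
    out = subprocess.run(
        ["systemctl", "show", "-p", "NRestarts", "--value", unit],
        capture_output=True, text=True, check=True,
    )
    print(f"{unit} restarted {out.stdout.strip()} times")
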
Feb  2 05:07:49 np0005604790 podman[265197]: 2026-02-02 10:07:49.052208857 +0000 UTC m=+0.087437035 container create df91742568983fda5d905f1e359292746051f2a6477f239a7ffe11c6d09f1ed4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Feb  2 05:07:49 np0005604790 podman[265197]: 2026-02-02 10:07:48.99227306 +0000 UTC m=+0.027501328 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:07:49 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ace16943a8d021739a247b6e9fe641916fd292dde1f29b27396041c5e7bc57a5/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Feb  2 05:07:49 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ace16943a8d021739a247b6e9fe641916fd292dde1f29b27396041c5e7bc57a5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 05:07:49 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ace16943a8d021739a247b6e9fe641916fd292dde1f29b27396041c5e7bc57a5/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 05:07:49 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ace16943a8d021739a247b6e9fe641916fd292dde1f29b27396041c5e7bc57a5/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.fdwwab-rgw/keyring supports timestamps until 2038 (0x7fffffff)
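
[editor's note] The four kernel warnings above flag an XFS filesystem whose on-disk format caps inode timestamps at 0x7fffffff seconds (the bigtime feature lifts this). Decoding that limit shows it is the classic y2038 boundary:

    from datetime import datetime, timezone

    # 0x7fffffff is the largest signed 32-bit time_t: 2038-01-19T03:14:07+00:00
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc).isoformat())
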
Feb  2 05:07:49 np0005604790 podman[265197]: 2026-02-02 10:07:49.158056877 +0000 UTC m=+0.193285135 container init df91742568983fda5d905f1e359292746051f2a6477f239a7ffe11c6d09f1ed4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 05:07:49 np0005604790 podman[265197]: 2026-02-02 10:07:49.163620495 +0000 UTC m=+0.198848713 container start df91742568983fda5d905f1e359292746051f2a6477f239a7ffe11c6d09f1ed4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Feb  2 05:07:49 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:07:49 : epoch 69807775 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Feb  2 05:07:49 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:07:49 : epoch 69807775 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Feb  2 05:07:49 np0005604790 bash[265197]: df91742568983fda5d905f1e359292746051f2a6477f239a7ffe11c6d09f1ed4
Feb  2 05:07:49 np0005604790 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.fdwwab for d241d473-9fcb-5f74-b163-f1ca4454e7f1.
Feb  2 05:07:49 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:07:49 : epoch 69807775 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Feb  2 05:07:49 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:07:49 : epoch 69807775 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Feb  2 05:07:49 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:07:49 : epoch 69807775 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Feb  2 05:07:49 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:07:49 : epoch 69807775 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Feb  2 05:07:49 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:07:49 : epoch 69807775 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Feb  2 05:07:49 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:07:49 : epoch 69807775 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:07:49 np0005604790 nova_compute[252672]: 2026-02-02 10:07:49.759 252676 DEBUG oslo_concurrency.lockutils [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Acquiring lock "53d9b1a9-575b-44c3-b11d-5995012c603a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 05:07:49 np0005604790 nova_compute[252672]: 2026-02-02 10:07:49.760 252676 DEBUG oslo_concurrency.lockutils [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Lock "53d9b1a9-575b-44c3-b11d-5995012c603a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 05:07:49 np0005604790 nova_compute[252672]: 2026-02-02 10:07:49.812 252676 DEBUG nova.compute.manager [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 53d9b1a9-575b-44c3-b11d-5995012c603a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb  2 05:07:49 np0005604790 nova_compute[252672]: 2026-02-02 10:07:49.947 252676 DEBUG oslo_concurrency.lockutils [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 05:07:49 np0005604790 nova_compute[252672]: 2026-02-02 10:07:49.948 252676 DEBUG oslo_concurrency.lockutils [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 05:07:49 np0005604790 nova_compute[252672]: 2026-02-02 10:07:49.955 252676 DEBUG nova.virt.hardware [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb  2 05:07:49 np0005604790 nova_compute[252672]: 2026-02-02 10:07:49.956 252676 INFO nova.compute.claims [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 53d9b1a9-575b-44c3-b11d-5995012c603a] Claim successful on node compute-0.ctlplane.example.com#033[00m
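
[editor's note] The claim sequence above is serialized under the "compute_resources" lock via oslo.concurrency, which is what emits the acquire/release lines with wait and hold times. A hedged sketch of the decorator form (nova wraps this in its own utils, but the lock name and logging behavior come from lockutils):

    from oslo_concurrency import lockutils

    # Same semaphore name the resource tracker logs; in-process lock by default.
    @lockutils.synchronized("compute_resources")
    def instance_claim():
        ...  # mutate tracked CPU/RAM/disk state while holding the lock
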
Feb  2 05:07:50 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:07:50 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:07:50 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:07:50.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:07:50 np0005604790 nova_compute[252672]: 2026-02-02 10:07:50.075 252676 DEBUG oslo_concurrency.processutils [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 05:07:50 np0005604790 nova_compute[252672]: 2026-02-02 10:07:50.463 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:07:50 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 05:07:50 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4162884263' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb  2 05:07:50 np0005604790 nova_compute[252672]: 2026-02-02 10:07:50.529 252676 DEBUG oslo_concurrency.processutils [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 05:07:50 np0005604790 nova_compute[252672]: 2026-02-02 10:07:50.536 252676 DEBUG nova.compute.provider_tree [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Inventory has not changed in ProviderTree for provider: 9e3db6fc-2145-4f13-bc7c-f7ae57d4e004 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 05:07:50 np0005604790 nova_compute[252672]: 2026-02-02 10:07:50.580 252676 DEBUG nova.scheduler.client.report [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Inventory has not changed for provider 9e3db6fc-2145-4f13-bc7c-f7ae57d4e004 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
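
[editor's note] Nova's RBD backend shells out to the exact command logged above to size its DISK_GB inventory. Reproducing the call and pulling the cluster totals, assuming the same client id and conf path:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True,
    ).stdout
    stats = json.loads(out)["stats"]
    print(stats["total_bytes"], stats["total_avail_bytes"])
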
Feb  2 05:07:50 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:07:50 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:07:50 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:07:50.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:07:50 np0005604790 nova_compute[252672]: 2026-02-02 10:07:50.617 252676 DEBUG oslo_concurrency.lockutils [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.670s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 05:07:50 np0005604790 nova_compute[252672]: 2026-02-02 10:07:50.618 252676 DEBUG nova.compute.manager [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 53d9b1a9-575b-44c3-b11d-5995012c603a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb  2 05:07:50 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v853: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 6.3 KiB/s wr, 0 op/s
Feb  2 05:07:50 np0005604790 nova_compute[252672]: 2026-02-02 10:07:50.672 252676 DEBUG nova.compute.manager [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 53d9b1a9-575b-44c3-b11d-5995012c603a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb  2 05:07:50 np0005604790 nova_compute[252672]: 2026-02-02 10:07:50.672 252676 DEBUG nova.network.neutron [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 53d9b1a9-575b-44c3-b11d-5995012c603a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb  2 05:07:50 np0005604790 nova_compute[252672]: 2026-02-02 10:07:50.690 252676 INFO nova.virt.libvirt.driver [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 53d9b1a9-575b-44c3-b11d-5995012c603a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb  2 05:07:50 np0005604790 nova_compute[252672]: 2026-02-02 10:07:50.729 252676 DEBUG nova.compute.manager [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 53d9b1a9-575b-44c3-b11d-5995012c603a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb  2 05:07:50 np0005604790 nova_compute[252672]: 2026-02-02 10:07:50.872 252676 DEBUG nova.compute.manager [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 53d9b1a9-575b-44c3-b11d-5995012c603a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb  2 05:07:50 np0005604790 nova_compute[252672]: 2026-02-02 10:07:50.873 252676 DEBUG nova.virt.libvirt.driver [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 53d9b1a9-575b-44c3-b11d-5995012c603a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb  2 05:07:50 np0005604790 nova_compute[252672]: 2026-02-02 10:07:50.874 252676 INFO nova.virt.libvirt.driver [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 53d9b1a9-575b-44c3-b11d-5995012c603a] Creating image(s)#033[00m
Feb  2 05:07:50 np0005604790 nova_compute[252672]: 2026-02-02 10:07:50.898 252676 DEBUG nova.storage.rbd_utils [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] rbd image 53d9b1a9-575b-44c3-b11d-5995012c603a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 05:07:50 np0005604790 nova_compute[252672]: 2026-02-02 10:07:50.930 252676 DEBUG nova.storage.rbd_utils [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] rbd image 53d9b1a9-575b-44c3-b11d-5995012c603a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 05:07:50 np0005604790 nova_compute[252672]: 2026-02-02 10:07:50.963 252676 DEBUG nova.storage.rbd_utils [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] rbd image 53d9b1a9-575b-44c3-b11d-5995012c603a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 05:07:50 np0005604790 nova_compute[252672]: 2026-02-02 10:07:50.969 252676 DEBUG oslo_concurrency.processutils [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b48fe8b86a7168723be684d0fce89ef3f0abcc61 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 05:07:51 np0005604790 nova_compute[252672]: 2026-02-02 10:07:51.054 252676 DEBUG oslo_concurrency.processutils [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b48fe8b86a7168723be684d0fce89ef3f0abcc61 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 05:07:51 np0005604790 nova_compute[252672]: 2026-02-02 10:07:51.056 252676 DEBUG oslo_concurrency.lockutils [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Acquiring lock "b48fe8b86a7168723be684d0fce89ef3f0abcc61" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 05:07:51 np0005604790 nova_compute[252672]: 2026-02-02 10:07:51.057 252676 DEBUG oslo_concurrency.lockutils [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Lock "b48fe8b86a7168723be684d0fce89ef3f0abcc61" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 05:07:51 np0005604790 nova_compute[252672]: 2026-02-02 10:07:51.058 252676 DEBUG oslo_concurrency.lockutils [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Lock "b48fe8b86a7168723be684d0fce89ef3f0abcc61" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
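
[editor's note] The qemu-img probe above runs under oslo.concurrency's prlimit wrapper, capping address space at 1 GiB and CPU time at 30 s so a malformed image cannot wedge the compute host. The library form of the same call, as a sketch:

    from oslo_concurrency import processutils

    limits = processutils.ProcessLimits(
        address_space=1024 * 1024 * 1024,  # --as=1073741824 in the logged cmd
        cpu_time=30,                       # --cpu=30
    )
    out, _err = processutils.execute(
        "qemu-img", "info",
        "/var/lib/nova/instances/_base/b48fe8b86a7168723be684d0fce89ef3f0abcc61",
        "--force-share", "--output=json",
        prlimit=limits,
        env_variables={"LC_ALL": "C", "LANG": "C"},  # the logged `env` prefix
    )
    print(out)
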
Feb  2 05:07:51 np0005604790 nova_compute[252672]: 2026-02-02 10:07:51.098 252676 DEBUG nova.storage.rbd_utils [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] rbd image 53d9b1a9-575b-44c3-b11d-5995012c603a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 05:07:51 np0005604790 nova_compute[252672]: 2026-02-02 10:07:51.103 252676 DEBUG oslo_concurrency.processutils [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b48fe8b86a7168723be684d0fce89ef3f0abcc61 53d9b1a9-575b-44c3-b11d-5995012c603a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 05:07:51 np0005604790 nova_compute[252672]: 2026-02-02 10:07:51.146 252676 DEBUG nova.policy [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1b1695a2a70d4aa0aa350ba17d8f6d5e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'efbfe697ca674d72b47da5adf3e42c0c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb  2 05:07:51 np0005604790 nova_compute[252672]: 2026-02-02 10:07:51.386 252676 DEBUG oslo_concurrency.processutils [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b48fe8b86a7168723be684d0fce89ef3f0abcc61 53d9b1a9-575b-44c3-b11d-5995012c603a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.282s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 05:07:51 np0005604790 nova_compute[252672]: 2026-02-02 10:07:51.455 252676 DEBUG nova.storage.rbd_utils [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] resizing rbd image 53d9b1a9-575b-44c3-b11d-5995012c603a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
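
[editor's note] With no existing "..._disk" image in the vms pool, Nova imports the cached base file (rbd import, logged above) and then resizes it to the flavor's 1 GiB root disk. A sketch of the resize step using the python rbd binding instead of the CLI, assuming the openstack keyring permits it:

    import rados
    import rbd

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack")
    cluster.connect()
    with cluster.open_ioctx("vms") as ioctx:
        with rbd.Image(ioctx, "53d9b1a9-575b-44c3-b11d-5995012c603a_disk") as img:
            img.resize(1073741824)  # 1 GiB, matching the logged resize
    cluster.shutdown()
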
Feb  2 05:07:51 np0005604790 nova_compute[252672]: 2026-02-02 10:07:51.581 252676 DEBUG nova.objects.instance [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Lazy-loading 'migration_context' on Instance uuid 53d9b1a9-575b-44c3-b11d-5995012c603a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 05:07:51 np0005604790 nova_compute[252672]: 2026-02-02 10:07:51.605 252676 DEBUG nova.virt.libvirt.driver [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 53d9b1a9-575b-44c3-b11d-5995012c603a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb  2 05:07:51 np0005604790 nova_compute[252672]: 2026-02-02 10:07:51.606 252676 DEBUG nova.virt.libvirt.driver [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 53d9b1a9-575b-44c3-b11d-5995012c603a] Ensure instance console log exists: /var/lib/nova/instances/53d9b1a9-575b-44c3-b11d-5995012c603a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb  2 05:07:51 np0005604790 nova_compute[252672]: 2026-02-02 10:07:51.606 252676 DEBUG oslo_concurrency.lockutils [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 05:07:51 np0005604790 nova_compute[252672]: 2026-02-02 10:07:51.607 252676 DEBUG oslo_concurrency.lockutils [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 05:07:51 np0005604790 nova_compute[252672]: 2026-02-02 10:07:51.607 252676 DEBUG oslo_concurrency.lockutils [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 05:07:52 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:07:52 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:07:52 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:07:52.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:07:52 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:07:52 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:07:52 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:07:52 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:07:52.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:07:52 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v854: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 6.3 KiB/s wr, 0 op/s
Feb  2 05:07:52 np0005604790 nova_compute[252672]: 2026-02-02 10:07:52.936 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:07:54 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:07:54 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:07:54 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:07:54.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:07:54 np0005604790 nova_compute[252672]: 2026-02-02 10:07:54.292 252676 DEBUG nova.network.neutron [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 53d9b1a9-575b-44c3-b11d-5995012c603a] Successfully created port: 78ac7631-d520-4460-9820-85b034d05a47 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb  2 05:07:54 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:07:54 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:07:54 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:07:54.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:07:54 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v855: 353 pgs: 353 active+clean; 167 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Feb  2 05:07:54 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:07:54] "GET /metrics HTTP/1.1" 200 48476 "" "Prometheus/2.51.0"
Feb  2 05:07:54 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:07:54] "GET /metrics HTTP/1.1" 200 48476 "" "Prometheus/2.51.0"
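
[editor's note] The mgr prometheus module just served /metrics to Prometheus (48476 bytes, HTTP 200). A sketch of scraping it directly; the module's default port 9283 is an assumption, since the log records only the scrape itself:

    import requests

    # assumption: mgr prometheus module answering on this host's default port
    resp = requests.get("http://compute-0.ctlplane.example.com:9283/metrics",
                        timeout=5)
    resp.raise_for_status()
    for line in resp.text.splitlines():
        if line.startswith("ceph_health_status"):
            print(line)
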
Feb  2 05:07:55 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 05:07:55 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/458817904' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Feb  2 05:07:55 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 05:07:55 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/458817904' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Feb  2 05:07:55 np0005604790 nova_compute[252672]: 2026-02-02 10:07:55.468 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:07:55 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:07:55 : epoch 69807775 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:07:55 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:07:55 : epoch 69807775 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
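
[editor's note] Ganesha entered a 90-second grace window on startup and periodically checks whether it can lift it early, as in the "reclaim complete(0) clid count(0)" line above. A toy reading of those counters, assuming grace may lift once every tracked client has finished reclaim (trivially true here with zero clients); this is not ganesha's actual predicate:

    def can_lift_grace(reclaim_complete: int, clid_count: int) -> bool:
        # assumption: lift early when every tracked client has reclaimed,
        # which holds vacuously when no clients are recorded at all.
        return reclaim_complete >= clid_count

    print(can_lift_grace(0, 0))  # True, matching the logged early lift check
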
Feb  2 05:07:56 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:07:56 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:07:56 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:07:56.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:07:56 np0005604790 podman[265450]: 2026-02-02 10:07:56.432612565 +0000 UTC m=+0.141868485 container health_status e39122d9482da8df802204ab6c35fb7c982874580968140e6f06bdfc8eefae36 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Feb  2 05:07:56 np0005604790 nova_compute[252672]: 2026-02-02 10:07:56.549 252676 DEBUG nova.network.neutron [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 53d9b1a9-575b-44c3-b11d-5995012c603a] Successfully updated port: 78ac7631-d520-4460-9820-85b034d05a47 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb  2 05:07:56 np0005604790 nova_compute[252672]: 2026-02-02 10:07:56.596 252676 DEBUG oslo_concurrency.lockutils [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Acquiring lock "refresh_cache-53d9b1a9-575b-44c3-b11d-5995012c603a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 05:07:56 np0005604790 nova_compute[252672]: 2026-02-02 10:07:56.597 252676 DEBUG oslo_concurrency.lockutils [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Acquired lock "refresh_cache-53d9b1a9-575b-44c3-b11d-5995012c603a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 05:07:56 np0005604790 nova_compute[252672]: 2026-02-02 10:07:56.597 252676 DEBUG nova.network.neutron [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 53d9b1a9-575b-44c3-b11d-5995012c603a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb  2 05:07:56 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:07:56 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:07:56 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:07:56.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:07:56 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v856: 353 pgs: 353 active+clean; 167 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Feb  2 05:07:56 np0005604790 nova_compute[252672]: 2026-02-02 10:07:56.695 252676 DEBUG nova.compute.manager [req-8ab8c10f-4eb7-4bde-84a8-d3bf59e993ec req-20fa8c2b-daa0-4fce-9a52-e184566b0197 b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 53d9b1a9-575b-44c3-b11d-5995012c603a] Received event network-changed-78ac7631-d520-4460-9820-85b034d05a47 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 05:07:56 np0005604790 nova_compute[252672]: 2026-02-02 10:07:56.696 252676 DEBUG nova.compute.manager [req-8ab8c10f-4eb7-4bde-84a8-d3bf59e993ec req-20fa8c2b-daa0-4fce-9a52-e184566b0197 b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 53d9b1a9-575b-44c3-b11d-5995012c603a] Refreshing instance network info cache due to event network-changed-78ac7631-d520-4460-9820-85b034d05a47. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 05:07:56 np0005604790 nova_compute[252672]: 2026-02-02 10:07:56.697 252676 DEBUG oslo_concurrency.lockutils [req-8ab8c10f-4eb7-4bde-84a8-d3bf59e993ec req-20fa8c2b-daa0-4fce-9a52-e184566b0197 b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Acquiring lock "refresh_cache-53d9b1a9-575b-44c3-b11d-5995012c603a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 05:07:57 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:07:57.135Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:07:57 np0005604790 nova_compute[252672]: 2026-02-02 10:07:57.170 252676 DEBUG nova.network.neutron [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 53d9b1a9-575b-44c3-b11d-5995012c603a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb  2 05:07:57 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:07:57 np0005604790 nova_compute[252672]: 2026-02-02 10:07:57.940 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:07:58 np0005604790 nova_compute[252672]: 2026-02-02 10:07:58.002 252676 DEBUG nova.network.neutron [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 53d9b1a9-575b-44c3-b11d-5995012c603a] Updating instance_info_cache with network_info: [{"id": "78ac7631-d520-4460-9820-85b034d05a47", "address": "fa:16:3e:69:65:d8", "network": {"id": "e125f54e-7556-49c5-8356-e7390df43c53", "bridge": "br-int", "label": "tempest-network-smoke--39971515", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "efbfe697ca674d72b47da5adf3e42c0c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78ac7631-d5", "ovs_interfaceid": "78ac7631-d520-4460-9820-85b034d05a47", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 05:07:58 np0005604790 nova_compute[252672]: 2026-02-02 10:07:58.022 252676 DEBUG oslo_concurrency.lockutils [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Releasing lock "refresh_cache-53d9b1a9-575b-44c3-b11d-5995012c603a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 05:07:58 np0005604790 nova_compute[252672]: 2026-02-02 10:07:58.023 252676 DEBUG nova.compute.manager [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 53d9b1a9-575b-44c3-b11d-5995012c603a] Instance network_info: |[{"id": "78ac7631-d520-4460-9820-85b034d05a47", "address": "fa:16:3e:69:65:d8", "network": {"id": "e125f54e-7556-49c5-8356-e7390df43c53", "bridge": "br-int", "label": "tempest-network-smoke--39971515", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "efbfe697ca674d72b47da5adf3e42c0c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78ac7631-d5", "ovs_interfaceid": "78ac7631-d520-4460-9820-85b034d05a47", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
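
[editor's note] The network_info blob Nova caches above is plain JSON once the log framing is stripped. Pulling out the values the guest actually boots with, using a trimmed copy of the logged structure:

    import json

    network_info = json.loads("""[{"id": "78ac7631-d520-4460-9820-85b034d05a47",
      "address": "fa:16:3e:69:65:d8",
      "network": {"subnets": [{"cidr": "10.100.0.16/28",
                               "ips": [{"address": "10.100.0.27"}]}],
                  "meta": {"mtu": 1442}},
      "devname": "tap78ac7631-d5"}]""")
    vif = network_info[0]
    ip = vif["network"]["subnets"][0]["ips"][0]["address"]
    mtu = vif["network"]["meta"]["mtu"]
    # -> tap78ac7631-d5 fa:16:3e:69:65:d8 10.100.0.27 1442
    print(vif["devname"], vif["address"], ip, mtu)
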
Feb  2 05:07:58 np0005604790 nova_compute[252672]: 2026-02-02 10:07:58.024 252676 DEBUG oslo_concurrency.lockutils [req-8ab8c10f-4eb7-4bde-84a8-d3bf59e993ec req-20fa8c2b-daa0-4fce-9a52-e184566b0197 b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Acquired lock "refresh_cache-53d9b1a9-575b-44c3-b11d-5995012c603a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 05:07:58 np0005604790 nova_compute[252672]: 2026-02-02 10:07:58.024 252676 DEBUG nova.network.neutron [req-8ab8c10f-4eb7-4bde-84a8-d3bf59e993ec req-20fa8c2b-daa0-4fce-9a52-e184566b0197 b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 53d9b1a9-575b-44c3-b11d-5995012c603a] Refreshing network info cache for port 78ac7631-d520-4460-9820-85b034d05a47 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 05:07:58 np0005604790 nova_compute[252672]: 2026-02-02 10:07:58.029 252676 DEBUG nova.virt.libvirt.driver [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 53d9b1a9-575b-44c3-b11d-5995012c603a] Start _get_guest_xml network_info=[{"id": "78ac7631-d520-4460-9820-85b034d05a47", "address": "fa:16:3e:69:65:d8", "network": {"id": "e125f54e-7556-49c5-8356-e7390df43c53", "bridge": "br-int", "label": "tempest-network-smoke--39971515", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "efbfe697ca674d72b47da5adf3e42c0c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78ac7631-d5", "ovs_interfaceid": "78ac7631-d520-4460-9820-85b034d05a47", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T10:01:42Z,direct_url=<?>,disk_format='qcow2',id=d5e062d7-95ef-409c-9ad0-60f7cf6f44ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='823d3e7e313a44e9a50531e3fef22a1b',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T10:01:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'encrypted': False, 'encryption_options': None, 'device_type': 'disk', 'size': 0, 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'd5e062d7-95ef-409c-9ad0-60f7cf6f44ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb  2 05:07:58 np0005604790 nova_compute[252672]: 2026-02-02 10:07:58.037 252676 WARNING nova.virt.libvirt.driver [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 05:07:58 np0005604790 nova_compute[252672]: 2026-02-02 10:07:58.041 252676 DEBUG nova.virt.libvirt.host [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb  2 05:07:58 np0005604790 nova_compute[252672]: 2026-02-02 10:07:58.042 252676 DEBUG nova.virt.libvirt.host [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb  2 05:07:58 np0005604790 nova_compute[252672]: 2026-02-02 10:07:58.054 252676 DEBUG nova.virt.libvirt.host [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb  2 05:07:58 np0005604790 nova_compute[252672]: 2026-02-02 10:07:58.055 252676 DEBUG nova.virt.libvirt.host [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
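
[editor's note] The driver probes for a CPU controller first under cgroups v1 (missing here) and then under v2 (found). On a unified-hierarchy host the v2 check reduces to reading the root controller list; a sketch, not nova's exact probe:

    from pathlib import Path

    def has_cgroupsv2_cpu_controller() -> bool:
        """True when the unified hierarchy exposes the 'cpu' controller."""
        controllers = Path("/sys/fs/cgroup/cgroup.controllers")
        return controllers.exists() and "cpu" in controllers.read_text().split()

    print(has_cgroupsv2_cpu_controller())
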
Feb  2 05:07:58 np0005604790 nova_compute[252672]: 2026-02-02 10:07:58.056 252676 DEBUG nova.virt.libvirt.driver [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb  2 05:07:58 np0005604790 nova_compute[252672]: 2026-02-02 10:07:58.056 252676 DEBUG nova.virt.hardware [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-02T10:01:40Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1194feb9-e285-414e-825a-1e77171d092f',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T10:01:42Z,direct_url=<?>,disk_format='qcow2',id=d5e062d7-95ef-409c-9ad0-60f7cf6f44ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='823d3e7e313a44e9a50531e3fef22a1b',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T10:01:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb  2 05:07:58 np0005604790 nova_compute[252672]: 2026-02-02 10:07:58.057 252676 DEBUG nova.virt.hardware [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb  2 05:07:58 np0005604790 nova_compute[252672]: 2026-02-02 10:07:58.058 252676 DEBUG nova.virt.hardware [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb  2 05:07:58 np0005604790 nova_compute[252672]: 2026-02-02 10:07:58.058 252676 DEBUG nova.virt.hardware [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb  2 05:07:58 np0005604790 nova_compute[252672]: 2026-02-02 10:07:58.058 252676 DEBUG nova.virt.hardware [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb  2 05:07:58 np0005604790 nova_compute[252672]: 2026-02-02 10:07:58.059 252676 DEBUG nova.virt.hardware [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb  2 05:07:58 np0005604790 nova_compute[252672]: 2026-02-02 10:07:58.059 252676 DEBUG nova.virt.hardware [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb  2 05:07:58 np0005604790 nova_compute[252672]: 2026-02-02 10:07:58.060 252676 DEBUG nova.virt.hardware [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb  2 05:07:58 np0005604790 nova_compute[252672]: 2026-02-02 10:07:58.060 252676 DEBUG nova.virt.hardware [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb  2 05:07:58 np0005604790 nova_compute[252672]: 2026-02-02 10:07:58.060 252676 DEBUG nova.virt.hardware [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb  2 05:07:58 np0005604790 nova_compute[252672]: 2026-02-02 10:07:58.061 252676 DEBUG nova.virt.hardware [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
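
[editor's note] With no flavor or image constraints (limits and preferences all 0:0:0), the topology search for one vCPU collapses to the single candidate 1:1:1, as logged. A simplified sketch of enumerating (sockets, cores, threads) splits under the default 65536 maxima; nova's real search also handles over-provisioned products and preferred orderings:

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        """Yield (sockets, cores, threads) triples whose product equals vcpus."""
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)], matching the log
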
Feb  2 05:07:58 np0005604790 nova_compute[252672]: 2026-02-02 10:07:58.066 252676 DEBUG oslo_concurrency.processutils [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 05:07:58 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:07:58 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:07:58 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:07:58.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:07:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:07:58 : epoch 69807775 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:07:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:07:58 : epoch 69807775 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:07:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:07:58 : epoch 69807775 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:07:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:07:58 : epoch 69807775 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:07:58 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 05:07:58 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3184578567' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Feb  2 05:07:58 np0005604790 nova_compute[252672]: 2026-02-02 10:07:58.577 252676 DEBUG oslo_concurrency.processutils [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 05:07:58 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:07:58 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:07:58 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:07:58.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:07:58 np0005604790 nova_compute[252672]: 2026-02-02 10:07:58.619 252676 DEBUG nova.storage.rbd_utils [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] rbd image 53d9b1a9-575b-44c3-b11d-5995012c603a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 05:07:58 np0005604790 nova_compute[252672]: 2026-02-02 10:07:58.625 252676 DEBUG oslo_concurrency.processutils [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 05:07:58 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v857: 353 pgs: 353 active+clean; 167 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Feb  2 05:07:59 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 05:07:59 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2630262200' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Feb  2 05:07:59 np0005604790 nova_compute[252672]: 2026-02-02 10:07:59.085 252676 DEBUG oslo_concurrency.processutils [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 05:07:59 np0005604790 nova_compute[252672]: 2026-02-02 10:07:59.087 252676 DEBUG nova.virt.libvirt.vif [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T10:07:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-355732534',display_name='tempest-TestNetworkBasicOps-server-355732534',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-355732534',id=7,image_ref='d5e062d7-95ef-409c-9ad0-60f7cf6f44ce',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPTRdyDyOro02zcZovzIXe9vITTMyq5TwzlgQ3dykKB+yswJAZhQnNNAhQdaRP1t7jc8pome8uY1/pM4AXxSNJWyd6YYrM85SO+8YpHGgHMgUTkXjtCiKGfZUokBgkv5OA==',key_name='tempest-TestNetworkBasicOps-977589391',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='efbfe697ca674d72b47da5adf3e42c0c',ramdisk_id='',reservation_id='r-bmc8mzv9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='d5e062d7-95ef-409c-9ad0-60f7cf6f44ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-793549693',owner_user_name='tempest-TestNetworkBasicOps-793549693-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T10:07:50Z,user_data=None,user_id='1b1695a2a70d4aa0aa350ba17d8f6d5e',uuid=53d9b1a9-575b-44c3-b11d-5995012c603a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "78ac7631-d520-4460-9820-85b034d05a47", "address": "fa:16:3e:69:65:d8", "network": {"id": "e125f54e-7556-49c5-8356-e7390df43c53", "bridge": "br-int", "label": "tempest-network-smoke--39971515", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "efbfe697ca674d72b47da5adf3e42c0c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78ac7631-d5", "ovs_interfaceid": "78ac7631-d520-4460-9820-85b034d05a47", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb  2 05:07:59 np0005604790 nova_compute[252672]: 2026-02-02 10:07:59.088 252676 DEBUG nova.network.os_vif_util [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Converting VIF {"id": "78ac7631-d520-4460-9820-85b034d05a47", "address": "fa:16:3e:69:65:d8", "network": {"id": "e125f54e-7556-49c5-8356-e7390df43c53", "bridge": "br-int", "label": "tempest-network-smoke--39971515", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "efbfe697ca674d72b47da5adf3e42c0c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78ac7631-d5", "ovs_interfaceid": "78ac7631-d520-4460-9820-85b034d05a47", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 05:07:59 np0005604790 nova_compute[252672]: 2026-02-02 10:07:59.088 252676 DEBUG nova.network.os_vif_util [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:69:65:d8,bridge_name='br-int',has_traffic_filtering=True,id=78ac7631-d520-4460-9820-85b034d05a47,network=Network(e125f54e-7556-49c5-8356-e7390df43c53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78ac7631-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 05:07:59 np0005604790 nova_compute[252672]: 2026-02-02 10:07:59.089 252676 DEBUG nova.objects.instance [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Lazy-loading 'pci_devices' on Instance uuid 53d9b1a9-575b-44c3-b11d-5995012c603a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 05:07:59 np0005604790 nova_compute[252672]: 2026-02-02 10:07:59.107 252676 DEBUG nova.virt.libvirt.driver [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 53d9b1a9-575b-44c3-b11d-5995012c603a] End _get_guest_xml xml=<domain type="kvm">
Feb  2 05:07:59 np0005604790 nova_compute[252672]:  <uuid>53d9b1a9-575b-44c3-b11d-5995012c603a</uuid>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:  <name>instance-00000007</name>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:  <memory>131072</memory>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:  <vcpu>1</vcpu>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:  <metadata>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb  2 05:07:59 np0005604790 nova_compute[252672]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:      <nova:name>tempest-TestNetworkBasicOps-server-355732534</nova:name>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:      <nova:creationTime>2026-02-02 10:07:58</nova:creationTime>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:      <nova:flavor name="m1.nano">
Feb  2 05:07:59 np0005604790 nova_compute[252672]:        <nova:memory>128</nova:memory>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:        <nova:disk>1</nova:disk>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:        <nova:swap>0</nova:swap>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:        <nova:ephemeral>0</nova:ephemeral>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:        <nova:vcpus>1</nova:vcpus>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:      </nova:flavor>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:      <nova:owner>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:        <nova:user uuid="1b1695a2a70d4aa0aa350ba17d8f6d5e">tempest-TestNetworkBasicOps-793549693-project-member</nova:user>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:        <nova:project uuid="efbfe697ca674d72b47da5adf3e42c0c">tempest-TestNetworkBasicOps-793549693</nova:project>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:      </nova:owner>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:      <nova:root type="image" uuid="d5e062d7-95ef-409c-9ad0-60f7cf6f44ce"/>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:      <nova:ports>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:        <nova:port uuid="78ac7631-d520-4460-9820-85b034d05a47">
Feb  2 05:07:59 np0005604790 nova_compute[252672]:          <nova:ip type="fixed" address="10.100.0.27" ipVersion="4"/>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:        </nova:port>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:      </nova:ports>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:    </nova:instance>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:  </metadata>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:  <sysinfo type="smbios">
Feb  2 05:07:59 np0005604790 nova_compute[252672]:    <system>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:      <entry name="manufacturer">RDO</entry>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:      <entry name="product">OpenStack Compute</entry>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:      <entry name="serial">53d9b1a9-575b-44c3-b11d-5995012c603a</entry>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:      <entry name="uuid">53d9b1a9-575b-44c3-b11d-5995012c603a</entry>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:      <entry name="family">Virtual Machine</entry>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:    </system>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:  </sysinfo>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:  <os>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:    <type arch="x86_64" machine="q35">hvm</type>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:    <boot dev="hd"/>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:    <smbios mode="sysinfo"/>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:  </os>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:  <features>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:    <acpi/>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:    <apic/>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:    <vmcoreinfo/>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:  </features>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:  <clock offset="utc">
Feb  2 05:07:59 np0005604790 nova_compute[252672]:    <timer name="pit" tickpolicy="delay"/>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:    <timer name="rtc" tickpolicy="catchup"/>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:    <timer name="hpet" present="no"/>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:  </clock>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:  <cpu mode="host-model" match="exact">
Feb  2 05:07:59 np0005604790 nova_compute[252672]:    <topology sockets="1" cores="1" threads="1"/>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:  </cpu>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:  <devices>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:    <disk type="network" device="disk">
Feb  2 05:07:59 np0005604790 nova_compute[252672]:      <driver type="raw" cache="none"/>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:      <source protocol="rbd" name="vms/53d9b1a9-575b-44c3-b11d-5995012c603a_disk">
Feb  2 05:07:59 np0005604790 nova_compute[252672]:        <host name="192.168.122.100" port="6789"/>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:        <host name="192.168.122.102" port="6789"/>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:        <host name="192.168.122.101" port="6789"/>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:      </source>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:      <auth username="openstack">
Feb  2 05:07:59 np0005604790 nova_compute[252672]:        <secret type="ceph" uuid="d241d473-9fcb-5f74-b163-f1ca4454e7f1"/>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:      </auth>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:      <target dev="vda" bus="virtio"/>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:    </disk>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:    <disk type="network" device="cdrom">
Feb  2 05:07:59 np0005604790 nova_compute[252672]:      <driver type="raw" cache="none"/>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:      <source protocol="rbd" name="vms/53d9b1a9-575b-44c3-b11d-5995012c603a_disk.config">
Feb  2 05:07:59 np0005604790 nova_compute[252672]:        <host name="192.168.122.100" port="6789"/>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:        <host name="192.168.122.102" port="6789"/>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:        <host name="192.168.122.101" port="6789"/>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:      </source>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:      <auth username="openstack">
Feb  2 05:07:59 np0005604790 nova_compute[252672]:        <secret type="ceph" uuid="d241d473-9fcb-5f74-b163-f1ca4454e7f1"/>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:      </auth>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:      <target dev="sda" bus="sata"/>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:    </disk>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:    <interface type="ethernet">
Feb  2 05:07:59 np0005604790 nova_compute[252672]:      <mac address="fa:16:3e:69:65:d8"/>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:      <model type="virtio"/>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:      <driver name="vhost" rx_queue_size="512"/>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:      <mtu size="1442"/>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:      <target dev="tap78ac7631-d5"/>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:    </interface>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:    <serial type="pty">
Feb  2 05:07:59 np0005604790 nova_compute[252672]:      <log file="/var/lib/nova/instances/53d9b1a9-575b-44c3-b11d-5995012c603a/console.log" append="off"/>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:    </serial>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:    <video>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:      <model type="virtio"/>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:    </video>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:    <input type="tablet" bus="usb"/>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:    <rng model="virtio">
Feb  2 05:07:59 np0005604790 nova_compute[252672]:      <backend model="random">/dev/urandom</backend>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:    </rng>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root"/>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:    <controller type="usb" index="0"/>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:    <memballoon model="virtio">
Feb  2 05:07:59 np0005604790 nova_compute[252672]:      <stats period="10"/>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:    </memballoon>
Feb  2 05:07:59 np0005604790 nova_compute[252672]:  </devices>
Feb  2 05:07:59 np0005604790 nova_compute[252672]: </domain>
Feb  2 05:07:59 np0005604790 nova_compute[252672]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb  2 05:07:59 np0005604790 nova_compute[252672]: 2026-02-02 10:07:59.109 252676 DEBUG nova.compute.manager [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 53d9b1a9-575b-44c3-b11d-5995012c603a] Preparing to wait for external event network-vif-plugged-78ac7631-d520-4460-9820-85b034d05a47 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb  2 05:07:59 np0005604790 nova_compute[252672]: 2026-02-02 10:07:59.109 252676 DEBUG oslo_concurrency.lockutils [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Acquiring lock "53d9b1a9-575b-44c3-b11d-5995012c603a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 05:07:59 np0005604790 nova_compute[252672]: 2026-02-02 10:07:59.110 252676 DEBUG oslo_concurrency.lockutils [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Lock "53d9b1a9-575b-44c3-b11d-5995012c603a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 05:07:59 np0005604790 nova_compute[252672]: 2026-02-02 10:07:59.110 252676 DEBUG oslo_concurrency.lockutils [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Lock "53d9b1a9-575b-44c3-b11d-5995012c603a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 05:07:59 np0005604790 nova_compute[252672]: 2026-02-02 10:07:59.111 252676 DEBUG nova.virt.libvirt.vif [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T10:07:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-355732534',display_name='tempest-TestNetworkBasicOps-server-355732534',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-355732534',id=7,image_ref='d5e062d7-95ef-409c-9ad0-60f7cf6f44ce',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPTRdyDyOro02zcZovzIXe9vITTMyq5TwzlgQ3dykKB+yswJAZhQnNNAhQdaRP1t7jc8pome8uY1/pM4AXxSNJWyd6YYrM85SO+8YpHGgHMgUTkXjtCiKGfZUokBgkv5OA==',key_name='tempest-TestNetworkBasicOps-977589391',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='efbfe697ca674d72b47da5adf3e42c0c',ramdisk_id='',reservation_id='r-bmc8mzv9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='d5e062d7-95ef-409c-9ad0-60f7cf6f44ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-793549693',owner_user_name='tempest-TestNetworkBasicOps-793549693-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T10:07:50Z,user_data=None,user_id='1b1695a2a70d4aa0aa350ba17d8f6d5e',uuid=53d9b1a9-575b-44c3-b11d-5995012c603a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "78ac7631-d520-4460-9820-85b034d05a47", "address": "fa:16:3e:69:65:d8", "network": {"id": "e125f54e-7556-49c5-8356-e7390df43c53", "bridge": "br-int", "label": "tempest-network-smoke--39971515", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "efbfe697ca674d72b47da5adf3e42c0c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78ac7631-d5", "ovs_interfaceid": "78ac7631-d520-4460-9820-85b034d05a47", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb  2 05:07:59 np0005604790 nova_compute[252672]: 2026-02-02 10:07:59.112 252676 DEBUG nova.network.os_vif_util [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Converting VIF {"id": "78ac7631-d520-4460-9820-85b034d05a47", "address": "fa:16:3e:69:65:d8", "network": {"id": "e125f54e-7556-49c5-8356-e7390df43c53", "bridge": "br-int", "label": "tempest-network-smoke--39971515", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "efbfe697ca674d72b47da5adf3e42c0c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78ac7631-d5", "ovs_interfaceid": "78ac7631-d520-4460-9820-85b034d05a47", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 05:07:59 np0005604790 nova_compute[252672]: 2026-02-02 10:07:59.113 252676 DEBUG nova.network.os_vif_util [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:69:65:d8,bridge_name='br-int',has_traffic_filtering=True,id=78ac7631-d520-4460-9820-85b034d05a47,network=Network(e125f54e-7556-49c5-8356-e7390df43c53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78ac7631-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 05:07:59 np0005604790 nova_compute[252672]: 2026-02-02 10:07:59.114 252676 DEBUG os_vif [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:69:65:d8,bridge_name='br-int',has_traffic_filtering=True,id=78ac7631-d520-4460-9820-85b034d05a47,network=Network(e125f54e-7556-49c5-8356-e7390df43c53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78ac7631-d5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb  2 05:07:59 np0005604790 nova_compute[252672]: 2026-02-02 10:07:59.115 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:07:59 np0005604790 nova_compute[252672]: 2026-02-02 10:07:59.115 252676 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 05:07:59 np0005604790 nova_compute[252672]: 2026-02-02 10:07:59.116 252676 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 05:07:59 np0005604790 nova_compute[252672]: 2026-02-02 10:07:59.121 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:07:59 np0005604790 nova_compute[252672]: 2026-02-02 10:07:59.122 252676 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap78ac7631-d5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 05:07:59 np0005604790 nova_compute[252672]: 2026-02-02 10:07:59.123 252676 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap78ac7631-d5, col_values=(('external_ids', {'iface-id': '78ac7631-d520-4460-9820-85b034d05a47', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:69:65:d8', 'vm-uuid': '53d9b1a9-575b-44c3-b11d-5995012c603a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 05:07:59 np0005604790 NetworkManager[49024]: <info>  [1770026879.1276] manager: (tap78ac7631-d5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/47)
Feb  2 05:07:59 np0005604790 nova_compute[252672]: 2026-02-02 10:07:59.126 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:07:59 np0005604790 nova_compute[252672]: 2026-02-02 10:07:59.131 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 05:07:59 np0005604790 nova_compute[252672]: 2026-02-02 10:07:59.135 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:07:59 np0005604790 nova_compute[252672]: 2026-02-02 10:07:59.136 252676 INFO os_vif [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:69:65:d8,bridge_name='br-int',has_traffic_filtering=True,id=78ac7631-d520-4460-9820-85b034d05a47,network=Network(e125f54e-7556-49c5-8356-e7390df43c53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78ac7631-d5')#033[00m
Feb  2 05:07:59 np0005604790 nova_compute[252672]: 2026-02-02 10:07:59.191 252676 DEBUG nova.virt.libvirt.driver [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 05:07:59 np0005604790 nova_compute[252672]: 2026-02-02 10:07:59.192 252676 DEBUG nova.virt.libvirt.driver [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 05:07:59 np0005604790 nova_compute[252672]: 2026-02-02 10:07:59.193 252676 DEBUG nova.virt.libvirt.driver [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] No VIF found with MAC fa:16:3e:69:65:d8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb  2 05:07:59 np0005604790 nova_compute[252672]: 2026-02-02 10:07:59.194 252676 INFO nova.virt.libvirt.driver [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 53d9b1a9-575b-44c3-b11d-5995012c603a] Using config drive#033[00m
Feb  2 05:07:59 np0005604790 nova_compute[252672]: 2026-02-02 10:07:59.233 252676 DEBUG nova.storage.rbd_utils [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] rbd image 53d9b1a9-575b-44c3-b11d-5995012c603a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 05:07:59 np0005604790 nova_compute[252672]: 2026-02-02 10:07:59.609 252676 INFO nova.virt.libvirt.driver [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 53d9b1a9-575b-44c3-b11d-5995012c603a] Creating config drive at /var/lib/nova/instances/53d9b1a9-575b-44c3-b11d-5995012c603a/disk.config#033[00m
Feb  2 05:07:59 np0005604790 nova_compute[252672]: 2026-02-02 10:07:59.613 252676 DEBUG oslo_concurrency.processutils [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/53d9b1a9-575b-44c3-b11d-5995012c603a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpfd0b8ezp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 05:07:59 np0005604790 nova_compute[252672]: 2026-02-02 10:07:59.640 252676 DEBUG nova.network.neutron [req-8ab8c10f-4eb7-4bde-84a8-d3bf59e993ec req-20fa8c2b-daa0-4fce-9a52-e184566b0197 b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 53d9b1a9-575b-44c3-b11d-5995012c603a] Updated VIF entry in instance network info cache for port 78ac7631-d520-4460-9820-85b034d05a47. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 05:07:59 np0005604790 nova_compute[252672]: 2026-02-02 10:07:59.641 252676 DEBUG nova.network.neutron [req-8ab8c10f-4eb7-4bde-84a8-d3bf59e993ec req-20fa8c2b-daa0-4fce-9a52-e184566b0197 b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 53d9b1a9-575b-44c3-b11d-5995012c603a] Updating instance_info_cache with network_info: [{"id": "78ac7631-d520-4460-9820-85b034d05a47", "address": "fa:16:3e:69:65:d8", "network": {"id": "e125f54e-7556-49c5-8356-e7390df43c53", "bridge": "br-int", "label": "tempest-network-smoke--39971515", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "efbfe697ca674d72b47da5adf3e42c0c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78ac7631-d5", "ovs_interfaceid": "78ac7631-d520-4460-9820-85b034d05a47", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 05:07:59 np0005604790 nova_compute[252672]: 2026-02-02 10:07:59.657 252676 DEBUG oslo_concurrency.lockutils [req-8ab8c10f-4eb7-4bde-84a8-d3bf59e993ec req-20fa8c2b-daa0-4fce-9a52-e184566b0197 b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Releasing lock "refresh_cache-53d9b1a9-575b-44c3-b11d-5995012c603a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 05:07:59 np0005604790 nova_compute[252672]: 2026-02-02 10:07:59.745 252676 DEBUG oslo_concurrency.processutils [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/53d9b1a9-575b-44c3-b11d-5995012c603a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpfd0b8ezp" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 05:07:59 np0005604790 nova_compute[252672]: 2026-02-02 10:07:59.785 252676 DEBUG nova.storage.rbd_utils [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] rbd image 53d9b1a9-575b-44c3-b11d-5995012c603a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 05:07:59 np0005604790 nova_compute[252672]: 2026-02-02 10:07:59.790 252676 DEBUG oslo_concurrency.processutils [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/53d9b1a9-575b-44c3-b11d-5995012c603a/disk.config 53d9b1a9-575b-44c3-b11d-5995012c603a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 05:07:59 np0005604790 nova_compute[252672]: 2026-02-02 10:07:59.958 252676 DEBUG oslo_concurrency.processutils [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/53d9b1a9-575b-44c3-b11d-5995012c603a/disk.config 53d9b1a9-575b-44c3-b11d-5995012c603a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.168s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 05:07:59 np0005604790 nova_compute[252672]: 2026-02-02 10:07:59.959 252676 INFO nova.virt.libvirt.driver [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 53d9b1a9-575b-44c3-b11d-5995012c603a] Deleting local config drive /var/lib/nova/instances/53d9b1a9-575b-44c3-b11d-5995012c603a/disk.config because it was imported into RBD.#033[00m
Feb  2 05:08:00 np0005604790 kernel: tap78ac7631-d5: entered promiscuous mode
Feb  2 05:08:00 np0005604790 NetworkManager[49024]: <info>  [1770026880.0140] manager: (tap78ac7631-d5): new Tun device (/org/freedesktop/NetworkManager/Devices/48)
Feb  2 05:08:00 np0005604790 ovn_controller[154631]: 2026-02-02T10:08:00Z|00069|binding|INFO|Claiming lport 78ac7631-d520-4460-9820-85b034d05a47 for this chassis.
Feb  2 05:08:00 np0005604790 ovn_controller[154631]: 2026-02-02T10:08:00Z|00070|binding|INFO|78ac7631-d520-4460-9820-85b034d05a47: Claiming fa:16:3e:69:65:d8 10.100.0.27
Feb  2 05:08:00 np0005604790 nova_compute[252672]: 2026-02-02 10:08:00.051 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:08:00 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:08:00.063 165364 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:69:65:d8 10.100.0.27'], port_security=['fa:16:3e:69:65:d8 10.100.0.27'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.27/28', 'neutron:device_id': '53d9b1a9-575b-44c3-b11d-5995012c603a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e125f54e-7556-49c5-8356-e7390df43c53', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'efbfe697ca674d72b47da5adf3e42c0c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '60fdd9e7-a6d5-4384-bee0-da9bfe0dd977', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a9d42b65-630e-4d58-b649-2acc01d097b4, chassis=[<ovs.db.idl.Row object at 0x7fa8c6e46640>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa8c6e46640>], logical_port=78ac7631-d520-4460-9820-85b034d05a47) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 05:08:00 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:08:00.064 165364 INFO neutron.agent.ovn.metadata.agent [-] Port 78ac7631-d520-4460-9820-85b034d05a47 in datapath e125f54e-7556-49c5-8356-e7390df43c53 bound to our chassis#033[00m
Feb  2 05:08:00 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:08:00.065 165364 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e125f54e-7556-49c5-8356-e7390df43c53#033[00m
Feb  2 05:08:00 np0005604790 systemd-machined[219024]: New machine qemu-4-instance-00000007.
Feb  2 05:08:00 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:08:00.080 257524 DEBUG oslo.privsep.daemon [-] privsep: reply[dbe60c65-6646-4357-9d47-7392d30631be]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:08:00 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:08:00.081 165364 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape125f54e-71 in ovnmeta-e125f54e-7556-49c5-8356-e7390df43c53 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb  2 05:08:00 np0005604790 nova_compute[252672]: 2026-02-02 10:08:00.081 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:08:00 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:08:00 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:08:00 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:08:00.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:08:00 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:08:00.083 257524 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape125f54e-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb  2 05:08:00 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:08:00.083 257524 DEBUG oslo.privsep.daemon [-] privsep: reply[750c0a7e-62e3-490b-9bea-78b5c3f8f1cf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:08:00 np0005604790 ovn_controller[154631]: 2026-02-02T10:08:00Z|00071|binding|INFO|Setting lport 78ac7631-d520-4460-9820-85b034d05a47 ovn-installed in OVS
Feb  2 05:08:00 np0005604790 ovn_controller[154631]: 2026-02-02T10:08:00Z|00072|binding|INFO|Setting lport 78ac7631-d520-4460-9820-85b034d05a47 up in Southbound
Feb  2 05:08:00 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:08:00.085 257524 DEBUG oslo.privsep.daemon [-] privsep: reply[9c5f717f-5103-4852-8588-ea72c554e2a5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:08:00 np0005604790 nova_compute[252672]: 2026-02-02 10:08:00.085 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:08:00 np0005604790 systemd[1]: Started Virtual Machine qemu-4-instance-00000007.
Feb  2 05:08:00 np0005604790 systemd-udevd[265641]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 05:08:00 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:08:00.101 166028 DEBUG oslo.privsep.daemon [-] privsep: reply[4ebbbd48-da5f-4386-aa8c-37d987326832]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:08:00 np0005604790 NetworkManager[49024]: <info>  [1770026880.1145] device (tap78ac7631-d5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb  2 05:08:00 np0005604790 NetworkManager[49024]: <info>  [1770026880.1153] device (tap78ac7631-d5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb  2 05:08:00 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:08:00.117 257524 DEBUG oslo.privsep.daemon [-] privsep: reply[2e8f38d1-01e3-4fb5-a3db-3369308277a3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:08:00 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:08:00.143 257582 DEBUG oslo.privsep.daemon [-] privsep: reply[e610327b-ec9d-47c3-bace-753708ab15e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:08:00 np0005604790 systemd-udevd[265644]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 05:08:00 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:08:00.149 257524 DEBUG oslo.privsep.daemon [-] privsep: reply[154041f4-b4a8-4f20-bdab-32fa4eebac59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:08:00 np0005604790 NetworkManager[49024]: <info>  [1770026880.1503] manager: (tape125f54e-70): new Veth device (/org/freedesktop/NetworkManager/Devices/49)
Feb  2 05:08:00 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:08:00.179 257582 DEBUG oslo.privsep.daemon [-] privsep: reply[04e0ef98-6184-42f9-a8c3-0f8ba3f264d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:08:00 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:08:00.182 257582 DEBUG oslo.privsep.daemon [-] privsep: reply[c9563fae-ed52-4371-8278-b48eea0d18c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:08:00 np0005604790 NetworkManager[49024]: <info>  [1770026880.2068] device (tape125f54e-70): carrier: link connected
Feb  2 05:08:00 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:08:00.209 257582 DEBUG oslo.privsep.daemon [-] privsep: reply[6ee415ec-d43d-4186-bb90-9ca302f4ceaa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:08:00 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:08:00.225 257524 DEBUG oslo.privsep.daemon [-] privsep: reply[2105c698-e006-42fc-8b42-8aaba1c1d14c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape125f54e-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2f:b7:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 403949, 'reachable_time': 40153, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 265672, 'error': None, 'target': 'ovnmeta-e125f54e-7556-49c5-8356-e7390df43c53', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:08:00 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:08:00.242 257524 DEBUG oslo.privsep.daemon [-] privsep: reply[d3a0867e-bdab-411b-8cfd-fdef75a2dbe5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2f:b741'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 403949, 'tstamp': 403949}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 265673, 'error': None, 'target': 'ovnmeta-e125f54e-7556-49c5-8356-e7390df43c53', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:08:00 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:08:00.258 257524 DEBUG oslo.privsep.daemon [-] privsep: reply[81094b6a-c3b7-40b8-9a32-fcb057be8e35]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape125f54e-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2f:b7:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 403949, 'reachable_time': 40153, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 265674, 'error': None, 'target': 'ovnmeta-e125f54e-7556-49c5-8356-e7390df43c53', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:08:00 np0005604790 nova_compute[252672]: 2026-02-02 10:08:00.291 252676 DEBUG nova.compute.manager [req-1cce61fb-5b36-4135-a33d-966d71cf5e2f req-ce6752bf-0b5b-4aab-af54-db9ffe80b8d8 b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 53d9b1a9-575b-44c3-b11d-5995012c603a] Received event network-vif-plugged-78ac7631-d520-4460-9820-85b034d05a47 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 05:08:00 np0005604790 nova_compute[252672]: 2026-02-02 10:08:00.292 252676 DEBUG oslo_concurrency.lockutils [req-1cce61fb-5b36-4135-a33d-966d71cf5e2f req-ce6752bf-0b5b-4aab-af54-db9ffe80b8d8 b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Acquiring lock "53d9b1a9-575b-44c3-b11d-5995012c603a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 05:08:00 np0005604790 nova_compute[252672]: 2026-02-02 10:08:00.292 252676 DEBUG oslo_concurrency.lockutils [req-1cce61fb-5b36-4135-a33d-966d71cf5e2f req-ce6752bf-0b5b-4aab-af54-db9ffe80b8d8 b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Lock "53d9b1a9-575b-44c3-b11d-5995012c603a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 05:08:00 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:08:00.291 257524 DEBUG oslo.privsep.daemon [-] privsep: reply[cafb7203-8cf6-4b1d-ba69-4234cee85e2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:08:00 np0005604790 nova_compute[252672]: 2026-02-02 10:08:00.292 252676 DEBUG oslo_concurrency.lockutils [req-1cce61fb-5b36-4135-a33d-966d71cf5e2f req-ce6752bf-0b5b-4aab-af54-db9ffe80b8d8 b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Lock "53d9b1a9-575b-44c3-b11d-5995012c603a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 05:08:00 np0005604790 nova_compute[252672]: 2026-02-02 10:08:00.292 252676 DEBUG nova.compute.manager [req-1cce61fb-5b36-4135-a33d-966d71cf5e2f req-ce6752bf-0b5b-4aab-af54-db9ffe80b8d8 b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 53d9b1a9-575b-44c3-b11d-5995012c603a] Processing event network-vif-plugged-78ac7631-d520-4460-9820-85b034d05a47 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb  2 05:08:00 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:08:00.357 257524 DEBUG oslo.privsep.daemon [-] privsep: reply[8761462b-50bb-47a7-903f-d0be349adae1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:08:00 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:08:00.359 165364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape125f54e-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 05:08:00 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:08:00.359 165364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 05:08:00 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:08:00.360 165364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape125f54e-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 05:08:00 np0005604790 nova_compute[252672]: 2026-02-02 10:08:00.362 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:08:00 np0005604790 NetworkManager[49024]: <info>  [1770026880.3636] manager: (tape125f54e-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/50)
Feb  2 05:08:00 np0005604790 kernel: tape125f54e-70: entered promiscuous mode
Feb  2 05:08:00 np0005604790 nova_compute[252672]: 2026-02-02 10:08:00.365 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:08:00 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:08:00.366 165364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape125f54e-70, col_values=(('external_ids', {'iface-id': '4948ba2f-4901-4550-ab74-f4adf1b82ea1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
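[annotation] The three ovsdbapp transactions above (DelPortCommand on br-ex, AddPortCommand on br-int, DbSetCommand on the Interface record) re-plug the metadata port and point its external_ids:iface-id at OVN port 4948ba2f-4901-4550-ab74-f4adf1b82ea1. A sketch of issuing the same commands through ovsdbapp — the db.sock path is an assumption, and where the agent ran each command in its own transaction (txn n=1 each time), the sketch batches all three:

    # Sketch: the same del-port / add-port / db-set sequence via ovsdbapp.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    OVS_DB = 'unix:/run/openvswitch/db.sock'  # assumed local socket path
    idl = connection.OvsdbIdl.from_server(OVS_DB, 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tape125f54e-70', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tape125f54e-70', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tape125f54e-70',
            ('external_ids',
             {'iface-id': '4948ba2f-4901-4550-ab74-f4adf1b82ea1'})))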
Feb  2 05:08:00 np0005604790 ovn_controller[154631]: 2026-02-02T10:08:00Z|00073|binding|INFO|Releasing lport 4948ba2f-4901-4550-ab74-f4adf1b82ea1 from this chassis (sb_readonly=0)
Feb  2 05:08:00 np0005604790 nova_compute[252672]: 2026-02-02 10:08:00.368 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:08:00 np0005604790 nova_compute[252672]: 2026-02-02 10:08:00.377 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:08:00 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:08:00.378 165364 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e125f54e-7556-49c5-8356-e7390df43c53.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e125f54e-7556-49c5-8356-e7390df43c53.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb  2 05:08:00 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:08:00.380 257524 DEBUG oslo.privsep.daemon [-] privsep: reply[fe9aa047-f346-454a-982e-53ba92c4eef3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:08:00 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:08:00.381 165364 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb  2 05:08:00 np0005604790 ovn_metadata_agent[165359]: global
Feb  2 05:08:00 np0005604790 ovn_metadata_agent[165359]:    log         /dev/log local0 debug
Feb  2 05:08:00 np0005604790 ovn_metadata_agent[165359]:    log-tag     haproxy-metadata-proxy-e125f54e-7556-49c5-8356-e7390df43c53
Feb  2 05:08:00 np0005604790 ovn_metadata_agent[165359]:    user        root
Feb  2 05:08:00 np0005604790 ovn_metadata_agent[165359]:    group       root
Feb  2 05:08:00 np0005604790 ovn_metadata_agent[165359]:    maxconn     1024
Feb  2 05:08:00 np0005604790 ovn_metadata_agent[165359]:    pidfile     /var/lib/neutron/external/pids/e125f54e-7556-49c5-8356-e7390df43c53.pid.haproxy
Feb  2 05:08:00 np0005604790 ovn_metadata_agent[165359]:    daemon
Feb  2 05:08:00 np0005604790 ovn_metadata_agent[165359]: 
Feb  2 05:08:00 np0005604790 ovn_metadata_agent[165359]: defaults
Feb  2 05:08:00 np0005604790 ovn_metadata_agent[165359]:    log global
Feb  2 05:08:00 np0005604790 ovn_metadata_agent[165359]:    mode http
Feb  2 05:08:00 np0005604790 ovn_metadata_agent[165359]:    option httplog
Feb  2 05:08:00 np0005604790 ovn_metadata_agent[165359]:    option dontlognull
Feb  2 05:08:00 np0005604790 ovn_metadata_agent[165359]:    option http-server-close
Feb  2 05:08:00 np0005604790 ovn_metadata_agent[165359]:    option forwardfor
Feb  2 05:08:00 np0005604790 ovn_metadata_agent[165359]:    retries                 3
Feb  2 05:08:00 np0005604790 ovn_metadata_agent[165359]:    timeout http-request    30s
Feb  2 05:08:00 np0005604790 ovn_metadata_agent[165359]:    timeout connect         30s
Feb  2 05:08:00 np0005604790 ovn_metadata_agent[165359]:    timeout client          32s
Feb  2 05:08:00 np0005604790 ovn_metadata_agent[165359]:    timeout server          32s
Feb  2 05:08:00 np0005604790 ovn_metadata_agent[165359]:    timeout http-keep-alive 30s
Feb  2 05:08:00 np0005604790 ovn_metadata_agent[165359]: 
Feb  2 05:08:00 np0005604790 ovn_metadata_agent[165359]: 
Feb  2 05:08:00 np0005604790 ovn_metadata_agent[165359]: listen listener
Feb  2 05:08:00 np0005604790 ovn_metadata_agent[165359]:    bind 169.254.169.254:80
Feb  2 05:08:00 np0005604790 ovn_metadata_agent[165359]:    server metadata /var/lib/neutron/metadata_proxy
Feb  2 05:08:00 np0005604790 ovn_metadata_agent[165359]:    http-request add-header X-OVN-Network-ID e125f54e-7556-49c5-8356-e7390df43c53
Feb  2 05:08:00 np0005604790 ovn_metadata_agent[165359]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb  2 05:08:00 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:08:00.382 165364 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e125f54e-7556-49c5-8356-e7390df43c53', 'env', 'PROCESS_TAG=haproxy-e125f54e-7556-49c5-8356-e7390df43c53', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e125f54e-7556-49c5-8356-e7390df43c53.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
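[annotation] The rendered config binds the metadata IP 169.254.169.254:80 inside the namespace, forwards to the UNIX socket /var/lib/neutron/metadata_proxy (haproxy treats a server address beginning with '/' as a UNIX socket), and stamps X-OVN-Network-ID so the metadata service can identify the network; the agent then spawns haproxy via rootwrap inside the namespace. A quick way to sanity-check such a rendered config before it is daemonized — a sketch using haproxy's config-check mode, with the path taken from the command above:

    # Sketch: validate the rendered proxy config the way haproxy itself
    # would; '-c' parses the file and exits non-zero on errors.
    import subprocess

    CONF = ('/var/lib/neutron/ovn-metadata-proxy/'
            'e125f54e-7556-49c5-8356-e7390df43c53.conf')
    subprocess.run(['haproxy', '-c', '-f', CONF], check=True)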
Feb  2 05:08:00 np0005604790 nova_compute[252672]: 2026-02-02 10:08:00.471 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:08:00 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:08:00 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:08:00 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:08:00.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:08:00 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v858: 353 pgs: 353 active+clean; 167 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Feb  2 05:08:00 np0005604790 podman[265706]: 2026-02-02 10:08:00.782799472 +0000 UTC m=+0.074829521 container create 06d11ed54b4b61d6bf104d4ff4f5ea6a1eece6cae636763afaad86da4b7f9416 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc, name=neutron-haproxy-ovnmeta-e125f54e-7556-49c5-8356-e7390df43c53, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true)
Feb  2 05:08:00 np0005604790 systemd[1]: Started libpod-conmon-06d11ed54b4b61d6bf104d4ff4f5ea6a1eece6cae636763afaad86da4b7f9416.scope.
Feb  2 05:08:00 np0005604790 podman[265706]: 2026-02-02 10:08:00.744475538 +0000 UTC m=+0.036505627 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc
Feb  2 05:08:00 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:08:00 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6259d05f4c6668caf1cdce1908a563f4f89105371df8c5f83a175a1996cee86/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb  2 05:08:00 np0005604790 podman[265706]: 2026-02-02 10:08:00.875748262 +0000 UTC m=+0.167778321 container init 06d11ed54b4b61d6bf104d4ff4f5ea6a1eece6cae636763afaad86da4b7f9416 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc, name=neutron-haproxy-ovnmeta-e125f54e-7556-49c5-8356-e7390df43c53, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127)
Feb  2 05:08:00 np0005604790 podman[265706]: 2026-02-02 10:08:00.879948543 +0000 UTC m=+0.171978592 container start 06d11ed54b4b61d6bf104d4ff4f5ea6a1eece6cae636763afaad86da4b7f9416 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc, name=neutron-haproxy-ovnmeta-e125f54e-7556-49c5-8356-e7390df43c53, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Feb  2 05:08:00 np0005604790 neutron-haproxy-ovnmeta-e125f54e-7556-49c5-8356-e7390df43c53[265721]: [NOTICE]   (265725) : New worker (265727) forked
Feb  2 05:08:00 np0005604790 neutron-haproxy-ovnmeta-e125f54e-7556-49c5-8356-e7390df43c53[265721]: [NOTICE]   (265725) : Loading success.
Feb  2 05:08:01 np0005604790 nova_compute[252672]: 2026-02-02 10:08:01.148 252676 DEBUG nova.compute.manager [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 53d9b1a9-575b-44c3-b11d-5995012c603a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb  2 05:08:01 np0005604790 nova_compute[252672]: 2026-02-02 10:08:01.149 252676 DEBUG nova.virt.driver [None req-e8d4f08b-73c0-49f9-aab1-c10f2abef40e - - - - - -] Emitting event <LifecycleEvent: 1770026881.1476703, 53d9b1a9-575b-44c3-b11d-5995012c603a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 05:08:01 np0005604790 nova_compute[252672]: 2026-02-02 10:08:01.149 252676 INFO nova.compute.manager [None req-e8d4f08b-73c0-49f9-aab1-c10f2abef40e - - - - - -] [instance: 53d9b1a9-575b-44c3-b11d-5995012c603a] VM Started (Lifecycle Event)#033[00m
Feb  2 05:08:01 np0005604790 nova_compute[252672]: 2026-02-02 10:08:01.152 252676 DEBUG nova.virt.libvirt.driver [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 53d9b1a9-575b-44c3-b11d-5995012c603a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb  2 05:08:01 np0005604790 nova_compute[252672]: 2026-02-02 10:08:01.155 252676 INFO nova.virt.libvirt.driver [-] [instance: 53d9b1a9-575b-44c3-b11d-5995012c603a] Instance spawned successfully.#033[00m
Feb  2 05:08:01 np0005604790 nova_compute[252672]: 2026-02-02 10:08:01.156 252676 DEBUG nova.virt.libvirt.driver [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 53d9b1a9-575b-44c3-b11d-5995012c603a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb  2 05:08:01 np0005604790 nova_compute[252672]: 2026-02-02 10:08:01.175 252676 DEBUG nova.compute.manager [None req-e8d4f08b-73c0-49f9-aab1-c10f2abef40e - - - - - -] [instance: 53d9b1a9-575b-44c3-b11d-5995012c603a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 05:08:01 np0005604790 nova_compute[252672]: 2026-02-02 10:08:01.180 252676 DEBUG nova.compute.manager [None req-e8d4f08b-73c0-49f9-aab1-c10f2abef40e - - - - - -] [instance: 53d9b1a9-575b-44c3-b11d-5995012c603a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 05:08:01 np0005604790 nova_compute[252672]: 2026-02-02 10:08:01.183 252676 DEBUG nova.virt.libvirt.driver [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 53d9b1a9-575b-44c3-b11d-5995012c603a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 05:08:01 np0005604790 nova_compute[252672]: 2026-02-02 10:08:01.183 252676 DEBUG nova.virt.libvirt.driver [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 53d9b1a9-575b-44c3-b11d-5995012c603a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 05:08:01 np0005604790 nova_compute[252672]: 2026-02-02 10:08:01.184 252676 DEBUG nova.virt.libvirt.driver [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 53d9b1a9-575b-44c3-b11d-5995012c603a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 05:08:01 np0005604790 nova_compute[252672]: 2026-02-02 10:08:01.184 252676 DEBUG nova.virt.libvirt.driver [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 53d9b1a9-575b-44c3-b11d-5995012c603a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 05:08:01 np0005604790 nova_compute[252672]: 2026-02-02 10:08:01.184 252676 DEBUG nova.virt.libvirt.driver [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 53d9b1a9-575b-44c3-b11d-5995012c603a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 05:08:01 np0005604790 nova_compute[252672]: 2026-02-02 10:08:01.185 252676 DEBUG nova.virt.libvirt.driver [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 53d9b1a9-575b-44c3-b11d-5995012c603a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 05:08:01 np0005604790 nova_compute[252672]: 2026-02-02 10:08:01.229 252676 INFO nova.compute.manager [None req-e8d4f08b-73c0-49f9-aab1-c10f2abef40e - - - - - -] [instance: 53d9b1a9-575b-44c3-b11d-5995012c603a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 05:08:01 np0005604790 nova_compute[252672]: 2026-02-02 10:08:01.229 252676 DEBUG nova.virt.driver [None req-e8d4f08b-73c0-49f9-aab1-c10f2abef40e - - - - - -] Emitting event <LifecycleEvent: 1770026881.1489096, 53d9b1a9-575b-44c3-b11d-5995012c603a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 05:08:01 np0005604790 nova_compute[252672]: 2026-02-02 10:08:01.229 252676 INFO nova.compute.manager [None req-e8d4f08b-73c0-49f9-aab1-c10f2abef40e - - - - - -] [instance: 53d9b1a9-575b-44c3-b11d-5995012c603a] VM Paused (Lifecycle Event)#033[00m
Feb  2 05:08:01 np0005604790 nova_compute[252672]: 2026-02-02 10:08:01.258 252676 DEBUG nova.compute.manager [None req-e8d4f08b-73c0-49f9-aab1-c10f2abef40e - - - - - -] [instance: 53d9b1a9-575b-44c3-b11d-5995012c603a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 05:08:01 np0005604790 nova_compute[252672]: 2026-02-02 10:08:01.261 252676 DEBUG nova.virt.driver [None req-e8d4f08b-73c0-49f9-aab1-c10f2abef40e - - - - - -] Emitting event <LifecycleEvent: 1770026881.1517034, 53d9b1a9-575b-44c3-b11d-5995012c603a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 05:08:01 np0005604790 nova_compute[252672]: 2026-02-02 10:08:01.261 252676 INFO nova.compute.manager [None req-e8d4f08b-73c0-49f9-aab1-c10f2abef40e - - - - - -] [instance: 53d9b1a9-575b-44c3-b11d-5995012c603a] VM Resumed (Lifecycle Event)#033[00m
Feb  2 05:08:01 np0005604790 nova_compute[252672]: 2026-02-02 10:08:01.281 252676 DEBUG nova.compute.manager [None req-e8d4f08b-73c0-49f9-aab1-c10f2abef40e - - - - - -] [instance: 53d9b1a9-575b-44c3-b11d-5995012c603a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 05:08:01 np0005604790 nova_compute[252672]: 2026-02-02 10:08:01.284 252676 DEBUG nova.compute.manager [None req-e8d4f08b-73c0-49f9-aab1-c10f2abef40e - - - - - -] [instance: 53d9b1a9-575b-44c3-b11d-5995012c603a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
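[annotation] The power-state sync lines compare the database value (0) against what libvirt reports (1); both are the small integers defined in nova.compute.power_state. For reading lines like the two above:

    # The integer power states behind "current DB power_state: 0,
    # VM power_state: 1" (constants from nova.compute.power_state).
    POWER_STATES = {
        0: 'NOSTATE',    # DB row not yet updated while spawning
        1: 'RUNNING',    # what libvirt reports once the guest starts
        3: 'PAUSED',
        4: 'SHUTDOWN',
        6: 'CRASHED',
        7: 'SUSPENDED',
    }
    print(POWER_STATES[0], '->', POWER_STATES[1])  # NOSTATE -> RUNNING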
Feb  2 05:08:01 np0005604790 nova_compute[252672]: 2026-02-02 10:08:01.303 252676 INFO nova.compute.manager [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 53d9b1a9-575b-44c3-b11d-5995012c603a] Took 10.43 seconds to spawn the instance on the hypervisor.#033[00m
Feb  2 05:08:01 np0005604790 nova_compute[252672]: 2026-02-02 10:08:01.304 252676 DEBUG nova.compute.manager [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 53d9b1a9-575b-44c3-b11d-5995012c603a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 05:08:01 np0005604790 nova_compute[252672]: 2026-02-02 10:08:01.352 252676 INFO nova.compute.manager [None req-e8d4f08b-73c0-49f9-aab1-c10f2abef40e - - - - - -] [instance: 53d9b1a9-575b-44c3-b11d-5995012c603a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 05:08:01 np0005604790 nova_compute[252672]: 2026-02-02 10:08:01.394 252676 INFO nova.compute.manager [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 53d9b1a9-575b-44c3-b11d-5995012c603a] Took 11.48 seconds to build instance.#033[00m
Feb  2 05:08:01 np0005604790 nova_compute[252672]: 2026-02-02 10:08:01.418 252676 DEBUG oslo_concurrency.lockutils [None req-0e4e057a-fe17-4d45-bee1-87576f00fb99 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Lock "53d9b1a9-575b-44c3-b11d-5995012c603a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.658s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 05:08:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:01 : epoch 69807775 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Feb  2 05:08:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:01 : epoch 69807775 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Feb  2 05:08:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:01 : epoch 69807775 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Feb  2 05:08:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:01 : epoch 69807775 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Feb  2 05:08:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:01 : epoch 69807775 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Feb  2 05:08:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:01 : epoch 69807775 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Feb  2 05:08:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:01 : epoch 69807775 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Feb  2 05:08:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:01 : epoch 69807775 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Feb  2 05:08:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:01 : epoch 69807775 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Feb  2 05:08:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:01 : epoch 69807775 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Feb  2 05:08:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:01 : epoch 69807775 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Feb  2 05:08:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:01 : epoch 69807775 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Feb  2 05:08:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:01 : epoch 69807775 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Feb  2 05:08:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:01 : epoch 69807775 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Feb  2 05:08:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:01 : epoch 69807775 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Feb  2 05:08:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:01 : epoch 69807775 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Feb  2 05:08:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:01 : epoch 69807775 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Feb  2 05:08:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:01 : epoch 69807775 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Feb  2 05:08:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:01 : epoch 69807775 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Feb  2 05:08:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:01 : epoch 69807775 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Feb  2 05:08:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:01 : epoch 69807775 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Feb  2 05:08:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:01 : epoch 69807775 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Feb  2 05:08:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:01 : epoch 69807775 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Feb  2 05:08:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:01 : epoch 69807775 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Feb  2 05:08:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:01 : epoch 69807775 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Feb  2 05:08:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:01 : epoch 69807775 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Feb  2 05:08:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:01 : epoch 69807775 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Feb  2 05:08:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:01 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f279c000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:08:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:02 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2794001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:08:02 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:08:02 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:08:02 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:08:02.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:08:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 05:08:02 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 05:08:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:08:02 np0005604790 nova_compute[252672]: 2026-02-02 10:08:02.397 252676 DEBUG nova.compute.manager [req-a565a69d-19b5-438b-8ffa-a233d798ee19 req-145d8511-a2ac-49ba-9c51-dfb52ca05bdc b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 53d9b1a9-575b-44c3-b11d-5995012c603a] Received event network-vif-plugged-78ac7631-d520-4460-9820-85b034d05a47 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 05:08:02 np0005604790 nova_compute[252672]: 2026-02-02 10:08:02.398 252676 DEBUG oslo_concurrency.lockutils [req-a565a69d-19b5-438b-8ffa-a233d798ee19 req-145d8511-a2ac-49ba-9c51-dfb52ca05bdc b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Acquiring lock "53d9b1a9-575b-44c3-b11d-5995012c603a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 05:08:02 np0005604790 nova_compute[252672]: 2026-02-02 10:08:02.398 252676 DEBUG oslo_concurrency.lockutils [req-a565a69d-19b5-438b-8ffa-a233d798ee19 req-145d8511-a2ac-49ba-9c51-dfb52ca05bdc b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Lock "53d9b1a9-575b-44c3-b11d-5995012c603a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 05:08:02 np0005604790 nova_compute[252672]: 2026-02-02 10:08:02.399 252676 DEBUG oslo_concurrency.lockutils [req-a565a69d-19b5-438b-8ffa-a233d798ee19 req-145d8511-a2ac-49ba-9c51-dfb52ca05bdc b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Lock "53d9b1a9-575b-44c3-b11d-5995012c603a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 05:08:02 np0005604790 nova_compute[252672]: 2026-02-02 10:08:02.399 252676 DEBUG nova.compute.manager [req-a565a69d-19b5-438b-8ffa-a233d798ee19 req-145d8511-a2ac-49ba-9c51-dfb52ca05bdc b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 53d9b1a9-575b-44c3-b11d-5995012c603a] No waiting events found dispatching network-vif-plugged-78ac7631-d520-4460-9820-85b034d05a47 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 05:08:02 np0005604790 nova_compute[252672]: 2026-02-02 10:08:02.400 252676 WARNING nova.compute.manager [req-a565a69d-19b5-438b-8ffa-a233d798ee19 req-145d8511-a2ac-49ba-9c51-dfb52ca05bdc b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 53d9b1a9-575b-44c3-b11d-5995012c603a] Received unexpected event network-vif-plugged-78ac7631-d520-4460-9820-85b034d05a47 for instance with vm_state active and task_state None.#033[00m
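[annotation] The WARNING is benign: the first network-vif-plugged delivery (10:08:00.291) was consumed by the waiter registered during spawn, so when Neutron re-sends the event at 10:08:02 the instance is already active, pop_instance_event finds no waiter, and the manager only logs it. A minimal sketch of that pop-or-warn pattern — names here are hypothetical; nova's real implementation lives in nova.compute.manager.InstanceEvents:

    # Sketch of the pop-or-warn pattern used for external instance events.
    import threading

    class InstanceEvents:
        def __init__(self):
            self._waiters = {}           # (instance, event_name) -> Event
            self._lock = threading.Lock()

        def expect(self, instance, name):
            ev = threading.Event()
            with self._lock:             # cf. 'Acquiring lock "...-events"'
                self._waiters[(instance, name)] = ev
            return ev

        def pop_instance_event(self, instance, name):
            with self._lock:
                return self._waiters.pop((instance, name), None)

    events = InstanceEvents()
    waiter = events.expect('53d9b1a9', 'network-vif-plugged-78ac7631')
    # First delivery wakes the spawn path ...
    events.pop_instance_event('53d9b1a9', 'network-vif-plugged-78ac7631').set()
    print(waiter.is_set())               # True
    # ... a duplicate delivery finds no waiter and is only logged.
    if events.pop_instance_event('53d9b1a9',
                                 'network-vif-plugged-78ac7631') is None:
        print('Received unexpected event: no waiter registered')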
Feb  2 05:08:02 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:08:02 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:08:02 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:08:02.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:08:02 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v859: 353 pgs: 353 active+clean; 167 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Feb  2 05:08:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:02 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f277c000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:08:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:03 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2784000e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:08:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:04 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2784000e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:08:04 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:08:04 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:08:04 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:08:04.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:08:04 np0005604790 nova_compute[252672]: 2026-02-02 10:08:04.127 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:08:04 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:08:04 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:08:04 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:08:04.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:08:04 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v860: 353 pgs: 353 active+clean; 167 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 106 op/s
Feb  2 05:08:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:08:04] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Feb  2 05:08:04 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:08:04] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Feb  2 05:08:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-nfs-cephfs-compute-0-ooxkuo[97796]: [WARNING] 032/100804 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Feb  2 05:08:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:04 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2794002520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:08:05 np0005604790 podman[265798]: 2026-02-02 10:08:05.421996919 +0000 UTC m=+0.135878727 container health_status 29bea7eb4976451e3dfdb1e9a3aba30b10e64cbce131bf8d09548b4a224f8ee8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Feb  2 05:08:05 np0005604790 nova_compute[252672]: 2026-02-02 10:08:05.474 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:08:05 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:05 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2780000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:08:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:06 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f279c002010 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:08:06 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:08:06 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:08:06 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:08:06.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:08:06 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:08:06 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:08:06 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:08:06.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:08:06 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v861: 353 pgs: 353 active+clean; 167 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 76 op/s
Feb  2 05:08:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:06 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2784001dc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:08:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-nfs-cephfs-compute-0-ooxkuo[97796]: [WARNING] 032/100806 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Feb  2 05:08:07 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:08:07.135Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb  2 05:08:07 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:08:07.136Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb  2 05:08:07 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:08:07.136Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:08:07 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:08:07 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:07 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2794002520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:08:08 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:08 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27800016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:08:08 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:08:08 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:08:08 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:08:08.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:08:08 np0005604790 nova_compute[252672]: 2026-02-02 10:08:08.281 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 05:08:08 np0005604790 nova_compute[252672]: 2026-02-02 10:08:08.315 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 05:08:08 np0005604790 nova_compute[252672]: 2026-02-02 10:08:08.316 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 05:08:08 np0005604790 nova_compute[252672]: 2026-02-02 10:08:08.316 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 05:08:08 np0005604790 nova_compute[252672]: 2026-02-02 10:08:08.317 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb  2 05:08:08 np0005604790 nova_compute[252672]: 2026-02-02 10:08:08.317 252676 DEBUG oslo_concurrency.processutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 05:08:08 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:08:08 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:08:08 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:08:08.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:08:08 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v862: 353 pgs: 353 active+clean; 167 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 77 op/s
Feb  2 05:08:08 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 05:08:08 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3440810205' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb  2 05:08:08 np0005604790 nova_compute[252672]: 2026-02-02 10:08:08.749 252676 DEBUG oslo_concurrency.processutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
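[annotation] The resource audit shells out to `ceph df --format=json` to size the RBD-backed storage before reporting inventory (hence the mon_command dispatch logged by ceph-mon just above). A sketch of the same call and the cluster-wide totals it exposes — this assumes a reachable cluster and the same client id; exact JSON key names can vary by Ceph release:

    # Sketch: run the audit's "ceph df" call and read the cluster totals.
    import json
    import subprocess

    out = subprocess.run(
        ['ceph', 'df', '--format=json', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        check=True, capture_output=True, text=True).stdout
    stats = json.loads(out)['stats']
    gib = 1024 ** 3
    print('total: %.0f GiB, avail: %.0f GiB'
          % (stats['total_bytes'] / gib, stats['total_avail_bytes'] / gib))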
Feb  2 05:08:08 np0005604790 nova_compute[252672]: 2026-02-02 10:08:08.916 252676 DEBUG nova.virt.libvirt.driver [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb  2 05:08:08 np0005604790 nova_compute[252672]: 2026-02-02 10:08:08.917 252676 DEBUG nova.virt.libvirt.driver [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb  2 05:08:08 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:08 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f279c0021b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:08:09 np0005604790 nova_compute[252672]: 2026-02-02 10:08:09.078 252676 WARNING nova.virt.libvirt.driver [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 05:08:09 np0005604790 nova_compute[252672]: 2026-02-02 10:08:09.079 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4248MB free_disk=59.92169952392578GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb  2 05:08:09 np0005604790 nova_compute[252672]: 2026-02-02 10:08:09.079 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 05:08:09 np0005604790 nova_compute[252672]: 2026-02-02 10:08:09.079 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 05:08:09 np0005604790 nova_compute[252672]: 2026-02-02 10:08:09.177 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:08:09 np0005604790 nova_compute[252672]: 2026-02-02 10:08:09.242 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Instance 53d9b1a9-575b-44c3-b11d-5995012c603a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb  2 05:08:09 np0005604790 nova_compute[252672]: 2026-02-02 10:08:09.243 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb  2 05:08:09 np0005604790 nova_compute[252672]: 2026-02-02 10:08:09.243 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb  2 05:08:09 np0005604790 nova_compute[252672]: 2026-02-02 10:08:09.329 252676 DEBUG nova.scheduler.client.report [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Refreshing inventories for resource provider 9e3db6fc-2145-4f13-bc7c-f7ae57d4e004 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb  2 05:08:09 np0005604790 nova_compute[252672]: 2026-02-02 10:08:09.348 252676 DEBUG nova.scheduler.client.report [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Updating ProviderTree inventory for provider 9e3db6fc-2145-4f13-bc7c-f7ae57d4e004 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb  2 05:08:09 np0005604790 nova_compute[252672]: 2026-02-02 10:08:09.349 252676 DEBUG nova.compute.provider_tree [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Updating inventory in ProviderTree for provider 9e3db6fc-2145-4f13-bc7c-f7ae57d4e004 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
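The inventory dictionaries above drive placement's capacity math: for each resource class, schedulable capacity is (total - reserved) * allocation_ratio. A minimal sketch using the values from the log line; capacity() is an illustrative helper, not a nova or placement API:

    # Sketch: how a placement-style inventory translates into schedulable
    # capacity. The inventory dict mirrors the log line above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }

    def capacity(inv):
        # effective capacity = (total - reserved) * allocation_ratio
        return {rc: (v["total"] - v["reserved"]) * v["allocation_ratio"]
                for rc, v in inv.items()}

    print(capacity(inventory))
    # {'VCPU': 32.0, 'MEMORY_MB': 7167.0, 'DISK_GB': 52.2}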
Feb  2 05:08:09 np0005604790 nova_compute[252672]: 2026-02-02 10:08:09.365 252676 DEBUG nova.scheduler.client.report [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Refreshing aggregate associations for resource provider 9e3db6fc-2145-4f13-bc7c-f7ae57d4e004, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb  2 05:08:09 np0005604790 nova_compute[252672]: 2026-02-02 10:08:09.402 252676 DEBUG nova.scheduler.client.report [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Refreshing trait associations for resource provider 9e3db6fc-2145-4f13-bc7c-f7ae57d4e004, traits: COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_AMD_SVM,COMPUTE_RESCUE_BFV,HW_CPU_X86_SSSE3,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NODE,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_ABM,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE,COMPUTE_VOLUME_EXTEND,COMPUTE_ACCELERATORS,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SHA,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_F16C,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SVM,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_IDE,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_CLMUL _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb  2 05:08:09 np0005604790 nova_compute[252672]: 2026-02-02 10:08:09.443 252676 DEBUG oslo_concurrency.processutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb  2 05:08:09 np0005604790 nova_compute[252672]: 2026-02-02 10:08:09.917 252676 DEBUG oslo_concurrency.processutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
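Before reporting disk inventory the tracker shells out to ceph df, as logged above. A sketch that runs the same command and reads the cluster totals; the JSON field names ("stats", "total_bytes", "total_avail_bytes") follow the usual ceph df layout and are assumptions here:

    import json
    import subprocess

    # Run the same command the resource tracker logs above; requires a
    # reachable Ceph cluster and the 'openstack' keyring.
    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, check=True, text=True).stdout
    stats = json.loads(out)["stats"]
    print(stats["total_bytes"], stats["total_avail_bytes"])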
Feb  2 05:08:09 np0005604790 nova_compute[252672]: 2026-02-02 10:08:09.922 252676 DEBUG nova.compute.provider_tree [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Inventory has not changed in ProviderTree for provider: 9e3db6fc-2145-4f13-bc7c-f7ae57d4e004 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb  2 05:08:09 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:09 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2784001dc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:08:09 np0005604790 nova_compute[252672]: 2026-02-02 10:08:09.944 252676 DEBUG nova.scheduler.client.report [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Inventory has not changed for provider 9e3db6fc-2145-4f13-bc7c-f7ae57d4e004 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb  2 05:08:09 np0005604790 nova_compute[252672]: 2026-02-02 10:08:09.970 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb  2 05:08:09 np0005604790 nova_compute[252672]: 2026-02-02 10:08:09.970 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.891s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
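The lockutils pair above reports two durations: time spent waiting to acquire "compute_resources" (0.000s) and time the lock was held (0.891s). A sketch of that instrumentation with a plain threading.Lock; illustration only, not oslo_concurrency internals:

    import threading
    import time

    compute_resources = threading.Lock()

    def update_available_resource():
        t0 = time.monotonic()
        with compute_resources:
            waited = time.monotonic() - t0   # time blocked on acquisition
            t1 = time.monotonic()
            # ... refresh resource tracker state ...
            held = time.monotonic() - t1     # time spent inside the lock
        print(f'Lock "compute_resources" :: waited {waited:.3f}s held {held:.3f}s')

    update_available_resource()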
Feb  2 05:08:10 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:10 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2794002520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:08:10 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:08:10 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:08:10 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:08:10.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
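The anonymous HEAD / probes arriving every ~2 seconds from 192.168.122.100 and .102 have the shape of load-balancer health checks against radosgw. A minimal probe in the same spirit; the port is an assumption, since the beast log line does not record it:

    import http.client

    # Issue the same anonymous HEAD / probe the beast frontend logs above.
    # Host and port (8080) are assumptions; radosgw answers 200 with no body.
    conn = http.client.HTTPConnection("192.168.122.100", 8080, timeout=2)
    conn.request("HEAD", "/")
    print(conn.getresponse().status)  # expect 200 when the gateway is healthy
    conn.close()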
Feb  2 05:08:10 np0005604790 nova_compute[252672]: 2026-02-02 10:08:10.281 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:08:10 np0005604790 nova_compute[252672]: 2026-02-02 10:08:10.282 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb  2 05:08:10 np0005604790 nova_compute[252672]: 2026-02-02 10:08:10.282 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb  2 05:08:10 np0005604790 nova_compute[252672]: 2026-02-02 10:08:10.477 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:08:10 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:08:10 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:08:10 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:08:10.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:08:10 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v863: 353 pgs: 353 active+clean; 167 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 75 op/s
Feb  2 05:08:10 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:10 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27800016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:08:11 np0005604790 nova_compute[252672]: 2026-02-02 10:08:11.158 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Acquiring lock "refresh_cache-53d9b1a9-575b-44c3-b11d-5995012c603a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb  2 05:08:11 np0005604790 nova_compute[252672]: 2026-02-02 10:08:11.159 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Acquired lock "refresh_cache-53d9b1a9-575b-44c3-b11d-5995012c603a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb  2 05:08:11 np0005604790 nova_compute[252672]: 2026-02-02 10:08:11.159 252676 DEBUG nova.network.neutron [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] [instance: 53d9b1a9-575b-44c3-b11d-5995012c603a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb  2 05:08:11 np0005604790 nova_compute[252672]: 2026-02-02 10:08:11.159 252676 DEBUG nova.objects.instance [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 53d9b1a9-575b-44c3-b11d-5995012c603a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb  2 05:08:11 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:11 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f279c002350 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:08:12 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:12 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2784001dc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:08:12 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:08:12 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:08:12 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:08:12.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:08:12 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
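The _set_new_cache_sizes line above partitions one monitor cache budget into increment, full, and kv slices; the three logged allocations nearly exhaust the budget. A quick consistency check in plain arithmetic, not Ceph's PriorityCache logic:

    # Consistency check on the mon cache line above: the three slices should
    # account for (almost) all of cache_size; the remainder is alignment slack.
    cache_size = 1020054731
    inc_alloc, full_alloc, kv_alloc = 348127232, 348127232, 318767104
    allocated = inc_alloc + full_alloc + kv_alloc
    print(allocated)               # 1015021568
    print(cache_size - allocated)  # 5033163 bytes left unassigned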
Feb  2 05:08:12 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:08:12 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:08:12 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:08:12.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:08:12 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v864: 353 pgs: 353 active+clean; 167 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 75 op/s
Feb  2 05:08:12 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:12 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2794002520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:08:13 np0005604790 nova_compute[252672]: 2026-02-02 10:08:13.591 252676 DEBUG nova.network.neutron [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] [instance: 53d9b1a9-575b-44c3-b11d-5995012c603a] Updating instance_info_cache with network_info: [{"id": "78ac7631-d520-4460-9820-85b034d05a47", "address": "fa:16:3e:69:65:d8", "network": {"id": "e125f54e-7556-49c5-8356-e7390df43c53", "bridge": "br-int", "label": "tempest-network-smoke--39971515", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "efbfe697ca674d72b47da5adf3e42c0c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78ac7631-d5", "ovs_interfaceid": "78ac7631-d520-4460-9820-85b034d05a47", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb  2 05:08:13 np0005604790 nova_compute[252672]: 2026-02-02 10:08:13.630 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Releasing lock "refresh_cache-53d9b1a9-575b-44c3-b11d-5995012c603a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb  2 05:08:13 np0005604790 nova_compute[252672]: 2026-02-02 10:08:13.630 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] [instance: 53d9b1a9-575b-44c3-b11d-5995012c603a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
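The network_info blob cached above is a list of VIF dicts, with the fixed IP three levels down. A short traversal over a copy of the logged entry, trimmed to the fields the traversal touches:

    # Extract fixed IPs from a nova network_info list shaped like the cache
    # entry logged above.
    network_info = [{
        "id": "78ac7631-d520-4460-9820-85b034d05a47",
        "network": {"subnets": [{"ips": [{"address": "10.100.0.27",
                                          "type": "fixed"}]}]},
    }]
    fixed_ips = [ip["address"]
                 for vif in network_info
                 for subnet in vif["network"]["subnets"]
                 for ip in subnet["ips"] if ip["type"] == "fixed"]
    print(fixed_ips)  # ['10.100.0.27']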
Feb  2 05:08:13 np0005604790 nova_compute[252672]: 2026-02-02 10:08:13.631 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:08:13 np0005604790 nova_compute[252672]: 2026-02-02 10:08:13.631 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:08:13 np0005604790 nova_compute[252672]: 2026-02-02 10:08:13.632 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:08:13 np0005604790 nova_compute[252672]: 2026-02-02 10:08:13.632 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:08:13 np0005604790 nova_compute[252672]: 2026-02-02 10:08:13.632 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:08:13 np0005604790 nova_compute[252672]: 2026-02-02 10:08:13.633 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:08:13 np0005604790 nova_compute[252672]: 2026-02-02 10:08:13.633 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb  2 05:08:13 np0005604790 nova_compute[252672]: 2026-02-02 10:08:13.634 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:08:13 np0005604790 nova_compute[252672]: 2026-02-02 10:08:13.634 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb  2 05:08:13 np0005604790 nova_compute[252672]: 2026-02-02 10:08:13.662 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
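The burst of periodic_task lines at 10:08:13 shows the compute manager sweeping its periodic tasks in sequence, with some tasks gating themselves on configuration (the _reclaim_queued_deletes skip above). A generic sketch of that gate pattern, assuming nothing about the oslo_service API:

    # Generic config-gated periodic task, mirroring the
    # "_reclaim_queued_deletes ... skipping" lines above. Illustration only;
    # not the oslo_service.periodic_task implementation.
    RECLAIM_INSTANCE_INTERVAL = 0  # CONF.reclaim_instance_interval <= 0

    def reclaim_queued_deletes():
        if RECLAIM_INSTANCE_INTERVAL <= 0:
            print("CONF.reclaim_instance_interval <= 0, skipping...")
            return
        # ... reclaim SOFT_DELETED instances older than the interval ...

    def run_periodic_tasks(tasks):
        for task in tasks:  # one sweep, as in the log burst above
            task()

    run_periodic_tasks([reclaim_queued_deletes])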
Feb  2 05:08:13 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:13 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27800016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:08:14 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:14 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f279c0095a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:08:14 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:08:14 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:08:14 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:08:14.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:08:14 np0005604790 nova_compute[252672]: 2026-02-02 10:08:14.181 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:08:14 np0005604790 nova_compute[252672]: 2026-02-02 10:08:14.282 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:08:14 np0005604790 nova_compute[252672]: 2026-02-02 10:08:14.282 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:08:14 np0005604790 nova_compute[252672]: 2026-02-02 10:08:14.325 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:08:14 np0005604790 nova_compute[252672]: 2026-02-02 10:08:14.326 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Feb  2 05:08:14 np0005604790 ovn_controller[154631]: 2026-02-02T10:08:14Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:69:65:d8 10.100.0.27
Feb  2 05:08:14 np0005604790 ovn_controller[154631]: 2026-02-02T10:08:14Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:69:65:d8 10.100.0.27
Feb  2 05:08:14 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:08:14 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:08:14 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:08:14.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:08:14 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v865: 353 pgs: 353 active+clean; 188 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 125 op/s
Feb  2 05:08:14 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:08:14] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Feb  2 05:08:14 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:08:14] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Feb  2 05:08:14 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:14 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27840031a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:08:15 np0005604790 nova_compute[252672]: 2026-02-02 10:08:15.282 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:08:15 np0005604790 nova_compute[252672]: 2026-02-02 10:08:15.479 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:08:15 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:15 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2794002520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:08:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:16 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2780002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:08:16 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:08:16 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:08:16 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:08:16.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:08:16 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:08:16 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:08:16 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:08:16.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:08:16 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v866: 353 pgs: 353 active+clean; 188 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 283 KiB/s rd, 2.0 MiB/s wr, 50 op/s
Feb  2 05:08:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:16 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f279c0095a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:08:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:08:17.137Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:08:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] Optimize plan auto_2026-02-02_10:08:17
Feb  2 05:08:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 05:08:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] do_upmap
Feb  2 05:08:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] pools ['.mgr', 'default.rgw.log', '.nfs', 'cephfs.cephfs.data', 'vms', 'volumes', '.rgw.root', 'images', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.control', 'backups']
Feb  2 05:08:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 05:08:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 05:08:17 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 05:08:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:08:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:08:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:08:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:08:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:08:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:08:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:08:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 05:08:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 05:08:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 05:08:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 05:08:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 05:08:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 05:08:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:08:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 05:08:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:08:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001505637189066875 of space, bias 1.0, pg target 0.4516911567200625 quantized to 32 (current 32)
Feb  2 05:08:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:08:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:08:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:08:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:08:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:08:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Feb  2 05:08:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:08:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Feb  2 05:08:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:08:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:08:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:08:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 05:08:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:08:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Feb  2 05:08:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:08:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:08:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:08:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 05:08:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:08:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
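Each pg_autoscaler line above computes pg target = usage fraction x bias x PG budget, then quantizes. Dividing any logged pg target by its usage fraction gives exactly 300, so the budget here is evidently 300 PGs; the power-of-two rounding and per-pool floor below are assumptions made to reproduce the logged results, not values read from the module:

    # Reproduce the pg_autoscaler arithmetic visible above. The 300-PG budget
    # is inferred from the logged ratios (target/usage == 300 for every pool);
    # the per-pool floor is an assumption.
    def pg_target(usage_fraction, bias, budget=300, floor=1):
        raw = usage_fraction * bias * budget
        pg = 1
        while pg < raw:        # round up to a power of two
            pg *= 2
        return max(pg, floor)

    print(pg_target(0.001505637189066875, 1.0, floor=32))  # vms  -> 32
    print(pg_target(7.185749983720779e-06, 1.0, floor=1))  # .mgr -> 1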
Feb  2 05:08:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 05:08:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 05:08:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 05:08:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 05:08:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 05:08:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:17 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27840031a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:08:18 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:18 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2794002520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:08:18 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:08:18 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:08:18 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:08:18.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:08:18 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:08:18 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:08:18 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:08:18.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:08:18 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v867: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Feb  2 05:08:18 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:18 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2780002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:08:19 np0005604790 nova_compute[252672]: 2026-02-02 10:08:19.222 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:08:19 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:19 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f279c00a2b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:08:20 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:20 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2784003eb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:08:20 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:08:20 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:08:20 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:08:20.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:08:20 np0005604790 nova_compute[252672]: 2026-02-02 10:08:20.494 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:08:20 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:08:20 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:08:20 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:08:20.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:08:20 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v868: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Feb  2 05:08:20 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:20 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2780002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:08:21 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:21 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2794002520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:08:22 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:22 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2794002520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:08:22 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:08:22 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:08:22 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:08:22.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:08:22 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:08:22 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-crash-compute-0[79739]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
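The ceph-crash error above is a plain EACCES: the containerized crash poster cannot read /var/lib/ceph/crash. A minimal stdlib reproduction of that scrape-and-log path:

    import logging
    import os

    # Reproduce the ceph-crash failure mode above: scanning a crash directory
    # the current user cannot read raises PermissionError (Errno 13).
    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("ceph-crash")
    crash_dir = "/var/lib/ceph/crash"
    try:
        posts = os.listdir(crash_dir)
        log.info("found %d crash entries", len(posts))
    except OSError as exc:
        log.error("Error scraping %s: %s", crash_dir, exc)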
Feb  2 05:08:22 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:08:22 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:08:22 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:08:22.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:08:22 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v869: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Feb  2 05:08:22 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:22 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2784003eb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:08:23 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:23 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2780003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:08:24 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:24 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f279c00a2b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:08:24 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:08:24 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:08:24 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:08:24.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:08:24 np0005604790 nova_compute[252672]: 2026-02-02 10:08:24.224 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:08:24 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:08:24 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:08:24 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:08:24.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:08:24 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v870: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.2 MiB/s wr, 67 op/s
Feb  2 05:08:24 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:08:24] "GET /metrics HTTP/1.1" 200 48477 "" "Prometheus/2.51.0"
Feb  2 05:08:24 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:08:24] "GET /metrics HTTP/1.1" 200 48477 "" "Prometheus/2.51.0"
Feb  2 05:08:24 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:24 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2794002520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:08:25 np0005604790 nova_compute[252672]: 2026-02-02 10:08:25.496 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:08:25 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:25 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2794002520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:08:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:26 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2780003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:08:26 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:08:26 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:08:26 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:08:26.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:08:26 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:08:26 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:08:26 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:08:26.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:08:26 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v871: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 115 KiB/s wr, 16 op/s
Feb  2 05:08:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:26 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f279c00a2b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:08:27 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:08:27.138Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
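The Alertmanager dispatcher above gives each webhook notification a context deadline and gives up after two attempts; both dashboard receivers time out. A hedged sketch of the same retry-then-cancel shape; the URL is taken from the log, and this is not Alertmanager's dispatcher code:

    import urllib.error
    import urllib.request

    # Illustration of "notify retry canceled after 2 attempts ... context
    # deadline exceeded": POST with a short timeout, retry once, then give up.
    url = "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver"
    for attempt in (1, 2):
        try:
            req = urllib.request.Request(url, data=b"{}", method="POST")
            urllib.request.urlopen(req, timeout=5)
            break
        except (urllib.error.URLError, TimeoutError) as exc:
            if attempt == 2:
                print(f"notify retry canceled after 2 attempts: {exc}")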
Feb  2 05:08:27 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:08:27 np0005604790 podman[265908]: 2026-02-02 10:08:27.40168859 +0000 UTC m=+0.113988778 container health_status e39122d9482da8df802204ab6c35fb7c982874580968140e6f06bdfc8eefae36 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Feb  2 05:08:27 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:27 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2794002520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:08:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:28 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2784003eb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:08:28 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:08:28 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:08:28 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:08:28.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:08:28 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:08:28 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:08:28 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:08:28.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:08:28 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v872: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 122 KiB/s wr, 17 op/s
Feb  2 05:08:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:28 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2780003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:08:29 np0005604790 nova_compute[252672]: 2026-02-02 10:08:29.228 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:08:29 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:29 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f279c00a2b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:08:30 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:30 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2794002520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:08:30 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:08:30 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:08:30 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:08:30.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:08:30 np0005604790 nova_compute[252672]: 2026-02-02 10:08:30.539 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:08:30 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:08:30 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:08:30 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:08:30.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:08:30 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v873: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 27 KiB/s wr, 2 op/s
Feb  2 05:08:30 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:30 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2784003eb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:08:31 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:31 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2780003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:08:31 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 05:08:31 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:08:31 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 05:08:31 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:08:32 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:32 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f279c00a2b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:08:32 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:08:32 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:08:32 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:08:32.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:08:32 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 05:08:32 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 05:08:32 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:08:32 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:08:32 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:08:32 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:08:32.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:08:32 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v874: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 27 KiB/s wr, 2 op/s
Feb  2 05:08:32 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 05:08:32 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 05:08:32 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 05:08:32 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb  2 05:08:32 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 05:08:32 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:08:32 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Feb  2 05:08:32 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:08:32 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 05:08:32 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb  2 05:08:32 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 05:08:32 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb  2 05:08:32 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 05:08:32 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 05:08:32 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:08:32 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:08:32 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb  2 05:08:32 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:08:32 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:08:32 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb  2 05:08:32 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:32 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2794002520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:08:33 np0005604790 podman[266183]: 2026-02-02 10:08:33.322875927 +0000 UTC m=+0.042816404 container create d5463d61c7f729d0083dd655cfc47a9d3c4d099fd5bb6b678aa443091416dcf3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Feb  2 05:08:33 np0005604790 systemd[1]: Started libpod-conmon-d5463d61c7f729d0083dd655cfc47a9d3c4d099fd5bb6b678aa443091416dcf3.scope.
Feb  2 05:08:33 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:08:33 np0005604790 podman[266183]: 2026-02-02 10:08:33.305801835 +0000 UTC m=+0.025742272 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:08:33 np0005604790 podman[266183]: 2026-02-02 10:08:33.421537228 +0000 UTC m=+0.141477755 container init d5463d61c7f729d0083dd655cfc47a9d3c4d099fd5bb6b678aa443091416dcf3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_dijkstra, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Feb  2 05:08:33 np0005604790 podman[266183]: 2026-02-02 10:08:33.433207867 +0000 UTC m=+0.153148304 container start d5463d61c7f729d0083dd655cfc47a9d3c4d099fd5bb6b678aa443091416dcf3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_dijkstra, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 05:08:33 np0005604790 podman[266183]: 2026-02-02 10:08:33.437008108 +0000 UTC m=+0.156948625 container attach d5463d61c7f729d0083dd655cfc47a9d3c4d099fd5bb6b678aa443091416dcf3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_dijkstra, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 05:08:33 np0005604790 unruffled_dijkstra[266200]: 167 167
Feb  2 05:08:33 np0005604790 systemd[1]: libpod-d5463d61c7f729d0083dd655cfc47a9d3c4d099fd5bb6b678aa443091416dcf3.scope: Deactivated successfully.
Feb  2 05:08:33 np0005604790 podman[266183]: 2026-02-02 10:08:33.441064595 +0000 UTC m=+0.161005032 container died d5463d61c7f729d0083dd655cfc47a9d3c4d099fd5bb6b678aa443091416dcf3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_dijkstra, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb  2 05:08:33 np0005604790 systemd[1]: var-lib-containers-storage-overlay-1b70cdfd12557b5d9650e297eacecbc1cdccbb9f418cc3f2b62d8431a17dee89-merged.mount: Deactivated successfully.
Feb  2 05:08:33 np0005604790 podman[266183]: 2026-02-02 10:08:33.483372525 +0000 UTC m=+0.203312962 container remove d5463d61c7f729d0083dd655cfc47a9d3c4d099fd5bb6b678aa443091416dcf3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_dijkstra, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 05:08:33 np0005604790 systemd[1]: libpod-conmon-d5463d61c7f729d0083dd655cfc47a9d3c4d099fd5bb6b678aa443091416dcf3.scope: Deactivated successfully.
Feb  2 05:08:33 np0005604790 podman[266226]: 2026-02-02 10:08:33.652675786 +0000 UTC m=+0.050261481 container create 78e8c729fc73d3ccded2aabaedd1df3d9ac7591909a4ee430a231a52199c531f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 05:08:33 np0005604790 systemd[1]: Started libpod-conmon-78e8c729fc73d3ccded2aabaedd1df3d9ac7591909a4ee430a231a52199c531f.scope.
Feb  2 05:08:33 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:08:33 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f805a5a42875556b619f40bcb3907367b7fb2089a24396593758e6fe12b44cfc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 05:08:33 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f805a5a42875556b619f40bcb3907367b7fb2089a24396593758e6fe12b44cfc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 05:08:33 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f805a5a42875556b619f40bcb3907367b7fb2089a24396593758e6fe12b44cfc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 05:08:33 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f805a5a42875556b619f40bcb3907367b7fb2089a24396593758e6fe12b44cfc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 05:08:33 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f805a5a42875556b619f40bcb3907367b7fb2089a24396593758e6fe12b44cfc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 05:08:33 np0005604790 podman[266226]: 2026-02-02 10:08:33.625668061 +0000 UTC m=+0.023253856 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:08:33 np0005604790 podman[266226]: 2026-02-02 10:08:33.727532987 +0000 UTC m=+0.125118682 container init 78e8c729fc73d3ccded2aabaedd1df3d9ac7591909a4ee430a231a52199c531f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Feb  2 05:08:33 np0005604790 podman[266226]: 2026-02-02 10:08:33.732114219 +0000 UTC m=+0.129699914 container start 78e8c729fc73d3ccded2aabaedd1df3d9ac7591909a4ee430a231a52199c531f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 05:08:33 np0005604790 podman[266226]: 2026-02-02 10:08:33.735523389 +0000 UTC m=+0.133109104 container attach 78e8c729fc73d3ccded2aabaedd1df3d9ac7591909a4ee430a231a52199c531f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 05:08:33 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:33 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2784003eb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:08:33 np0005604790 cool_feistel[266244]: --> passed data devices: 0 physical, 1 LVM
Feb  2 05:08:33 np0005604790 cool_feistel[266244]: --> All data devices are unavailable
Feb  2 05:08:34 np0005604790 systemd[1]: libpod-78e8c729fc73d3ccded2aabaedd1df3d9ac7591909a4ee430a231a52199c531f.scope: Deactivated successfully.
Feb  2 05:08:34 np0005604790 podman[266226]: 2026-02-02 10:08:34.009573842 +0000 UTC m=+0.407159577 container died 78e8c729fc73d3ccded2aabaedd1df3d9ac7591909a4ee430a231a52199c531f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_feistel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True)
Feb  2 05:08:34 np0005604790 systemd[1]: var-lib-containers-storage-overlay-f805a5a42875556b619f40bcb3907367b7fb2089a24396593758e6fe12b44cfc-merged.mount: Deactivated successfully.
Feb  2 05:08:34 np0005604790 podman[266226]: 2026-02-02 10:08:34.045315048 +0000 UTC m=+0.442900753 container remove 78e8c729fc73d3ccded2aabaedd1df3d9ac7591909a4ee430a231a52199c531f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_feistel, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 05:08:34 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:34 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2774000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:08:34 np0005604790 systemd[1]: libpod-conmon-78e8c729fc73d3ccded2aabaedd1df3d9ac7591909a4ee430a231a52199c531f.scope: Deactivated successfully.
Feb  2 05:08:34 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:08:34 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:08:34 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:08:34.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:08:34 np0005604790 nova_compute[252672]: 2026-02-02 10:08:34.231 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:08:34 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:08:34 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:08:34 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:08:34.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:08:34 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v875: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 30 KiB/s wr, 3 op/s
Feb  2 05:08:34 np0005604790 podman[266361]: 2026-02-02 10:08:34.684948998 +0000 UTC m=+0.058072778 container create d1385359dd68d94216c1a01037cefb82d0613616bfa9c4d887952997de366a98 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_meninsky, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 05:08:34 np0005604790 systemd[1]: Started libpod-conmon-d1385359dd68d94216c1a01037cefb82d0613616bfa9c4d887952997de366a98.scope.
Feb  2 05:08:34 np0005604790 podman[266361]: 2026-02-02 10:08:34.659395252 +0000 UTC m=+0.032519082 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:08:34 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:08:34 np0005604790 podman[266361]: 2026-02-02 10:08:34.780960579 +0000 UTC m=+0.154084409 container init d1385359dd68d94216c1a01037cefb82d0613616bfa9c4d887952997de366a98 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_meninsky, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 05:08:34 np0005604790 podman[266361]: 2026-02-02 10:08:34.791817216 +0000 UTC m=+0.164940976 container start d1385359dd68d94216c1a01037cefb82d0613616bfa9c4d887952997de366a98 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_meninsky, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 05:08:34 np0005604790 podman[266361]: 2026-02-02 10:08:34.796447399 +0000 UTC m=+0.169571249 container attach d1385359dd68d94216c1a01037cefb82d0613616bfa9c4d887952997de366a98 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_meninsky, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Feb  2 05:08:34 np0005604790 cranky_meninsky[266377]: 167 167
Feb  2 05:08:34 np0005604790 systemd[1]: libpod-d1385359dd68d94216c1a01037cefb82d0613616bfa9c4d887952997de366a98.scope: Deactivated successfully.
Feb  2 05:08:34 np0005604790 podman[266361]: 2026-02-02 10:08:34.799851399 +0000 UTC m=+0.172975159 container died d1385359dd68d94216c1a01037cefb82d0613616bfa9c4d887952997de366a98 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_meninsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 05:08:34 np0005604790 systemd[1]: var-lib-containers-storage-overlay-f8e7c5881136506a96da35cca9ba60448d0ca8a1630cfec66992f8f55bc21696-merged.mount: Deactivated successfully.
Feb  2 05:08:34 np0005604790 podman[266361]: 2026-02-02 10:08:34.848366153 +0000 UTC m=+0.221489933 container remove d1385359dd68d94216c1a01037cefb82d0613616bfa9c4d887952997de366a98 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Feb  2 05:08:34 np0005604790 systemd[1]: libpod-conmon-d1385359dd68d94216c1a01037cefb82d0613616bfa9c4d887952997de366a98.scope: Deactivated successfully.
Feb  2 05:08:34 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:08:34] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Feb  2 05:08:34 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:08:34] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Feb  2 05:08:34 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:34 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2780003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:08:35 np0005604790 podman[266403]: 2026-02-02 10:08:35.00276811 +0000 UTC m=+0.037351360 container create ab4dc37fb9c0a8f3e92dc69b8bd27518b27513424d328dda13bcffb1cb7b29fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_spence, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325)
Feb  2 05:08:35 np0005604790 systemd[1]: Started libpod-conmon-ab4dc37fb9c0a8f3e92dc69b8bd27518b27513424d328dda13bcffb1cb7b29fd.scope.
Feb  2 05:08:35 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:08:35 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3155b935f8fdc95be2765beee0cf233aed256d740151a418bf6950a2d43ac9da/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 05:08:35 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3155b935f8fdc95be2765beee0cf233aed256d740151a418bf6950a2d43ac9da/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 05:08:35 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3155b935f8fdc95be2765beee0cf233aed256d740151a418bf6950a2d43ac9da/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 05:08:35 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3155b935f8fdc95be2765beee0cf233aed256d740151a418bf6950a2d43ac9da/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 05:08:35 np0005604790 podman[266403]: 2026-02-02 10:08:34.987901446 +0000 UTC m=+0.022484706 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:08:35 np0005604790 podman[266403]: 2026-02-02 10:08:35.089244659 +0000 UTC m=+0.123827909 container init ab4dc37fb9c0a8f3e92dc69b8bd27518b27513424d328dda13bcffb1cb7b29fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_spence, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 05:08:35 np0005604790 podman[266403]: 2026-02-02 10:08:35.105635352 +0000 UTC m=+0.140218632 container start ab4dc37fb9c0a8f3e92dc69b8bd27518b27513424d328dda13bcffb1cb7b29fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_spence, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Feb  2 05:08:35 np0005604790 podman[266403]: 2026-02-02 10:08:35.109369321 +0000 UTC m=+0.143952741 container attach ab4dc37fb9c0a8f3e92dc69b8bd27518b27513424d328dda13bcffb1cb7b29fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_spence, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 05:08:35 np0005604790 compassionate_spence[266421]: {
Feb  2 05:08:35 np0005604790 compassionate_spence[266421]:    "1": [
Feb  2 05:08:35 np0005604790 compassionate_spence[266421]:        {
Feb  2 05:08:35 np0005604790 compassionate_spence[266421]:            "devices": [
Feb  2 05:08:35 np0005604790 compassionate_spence[266421]:                "/dev/loop3"
Feb  2 05:08:35 np0005604790 compassionate_spence[266421]:            ],
Feb  2 05:08:35 np0005604790 compassionate_spence[266421]:            "lv_name": "ceph_lv0",
Feb  2 05:08:35 np0005604790 compassionate_spence[266421]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 05:08:35 np0005604790 compassionate_spence[266421]:            "lv_size": "21470642176",
Feb  2 05:08:35 np0005604790 compassionate_spence[266421]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d241d473-9fcb-5f74-b163-f1ca4454e7f1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fabfc705-a3af-416c-81a4-3fd4d777fb5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 05:08:35 np0005604790 compassionate_spence[266421]:            "lv_uuid": "lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a",
Feb  2 05:08:35 np0005604790 compassionate_spence[266421]:            "name": "ceph_lv0",
Feb  2 05:08:35 np0005604790 compassionate_spence[266421]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 05:08:35 np0005604790 compassionate_spence[266421]:            "tags": {
Feb  2 05:08:35 np0005604790 compassionate_spence[266421]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 05:08:35 np0005604790 compassionate_spence[266421]:                "ceph.block_uuid": "lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a",
Feb  2 05:08:35 np0005604790 compassionate_spence[266421]:                "ceph.cephx_lockbox_secret": "",
Feb  2 05:08:35 np0005604790 compassionate_spence[266421]:                "ceph.cluster_fsid": "d241d473-9fcb-5f74-b163-f1ca4454e7f1",
Feb  2 05:08:35 np0005604790 compassionate_spence[266421]:                "ceph.cluster_name": "ceph",
Feb  2 05:08:35 np0005604790 compassionate_spence[266421]:                "ceph.crush_device_class": "",
Feb  2 05:08:35 np0005604790 compassionate_spence[266421]:                "ceph.encrypted": "0",
Feb  2 05:08:35 np0005604790 compassionate_spence[266421]:                "ceph.osd_fsid": "fabfc705-a3af-416c-81a4-3fd4d777fb5f",
Feb  2 05:08:35 np0005604790 compassionate_spence[266421]:                "ceph.osd_id": "1",
Feb  2 05:08:35 np0005604790 compassionate_spence[266421]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 05:08:35 np0005604790 compassionate_spence[266421]:                "ceph.type": "block",
Feb  2 05:08:35 np0005604790 compassionate_spence[266421]:                "ceph.vdo": "0",
Feb  2 05:08:35 np0005604790 compassionate_spence[266421]:                "ceph.with_tpm": "0"
Feb  2 05:08:35 np0005604790 compassionate_spence[266421]:            },
Feb  2 05:08:35 np0005604790 compassionate_spence[266421]:            "type": "block",
Feb  2 05:08:35 np0005604790 compassionate_spence[266421]:            "vg_name": "ceph_vg0"
Feb  2 05:08:35 np0005604790 compassionate_spence[266421]:        }
Feb  2 05:08:35 np0005604790 compassionate_spence[266421]:    ]
Feb  2 05:08:35 np0005604790 compassionate_spence[266421]: }
Feb  2 05:08:35 np0005604790 systemd[1]: libpod-ab4dc37fb9c0a8f3e92dc69b8bd27518b27513424d328dda13bcffb1cb7b29fd.scope: Deactivated successfully.
Feb  2 05:08:35 np0005604790 podman[266430]: 2026-02-02 10:08:35.463417072 +0000 UTC m=+0.029462431 container died ab4dc37fb9c0a8f3e92dc69b8bd27518b27513424d328dda13bcffb1cb7b29fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_spence, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Feb  2 05:08:35 np0005604790 systemd[1]: var-lib-containers-storage-overlay-3155b935f8fdc95be2765beee0cf233aed256d740151a418bf6950a2d43ac9da-merged.mount: Deactivated successfully.
Feb  2 05:08:35 np0005604790 podman[266430]: 2026-02-02 10:08:35.507620252 +0000 UTC m=+0.073665631 container remove ab4dc37fb9c0a8f3e92dc69b8bd27518b27513424d328dda13bcffb1cb7b29fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_spence, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Feb  2 05:08:35 np0005604790 systemd[1]: libpod-conmon-ab4dc37fb9c0a8f3e92dc69b8bd27518b27513424d328dda13bcffb1cb7b29fd.scope: Deactivated successfully.
Feb  2 05:08:35 np0005604790 nova_compute[252672]: 2026-02-02 10:08:35.592 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:08:35 np0005604790 podman[266446]: 2026-02-02 10:08:35.632308802 +0000 UTC m=+0.127437084 container health_status 29bea7eb4976451e3dfdb1e9a3aba30b10e64cbce131bf8d09548b4a224f8ee8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_managed=true)
Feb  2 05:08:35 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:35 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2794002520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:08:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:36 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f277c000d00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:08:36 np0005604790 podman[266558]: 2026-02-02 10:08:36.100864153 +0000 UTC m=+0.070030955 container create c0256baee7f57e61676f989ed31ea704d2242618cb21c6aded48a33857e0a6db (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_almeida, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 05:08:36 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:08:36 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:08:36 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:08:36.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:08:36 np0005604790 systemd[1]: Started libpod-conmon-c0256baee7f57e61676f989ed31ea704d2242618cb21c6aded48a33857e0a6db.scope.
Feb  2 05:08:36 np0005604790 podman[266558]: 2026-02-02 10:08:36.068960778 +0000 UTC m=+0.038127640 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:08:36 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:08:36 np0005604790 podman[266558]: 2026-02-02 10:08:36.196038912 +0000 UTC m=+0.165205734 container init c0256baee7f57e61676f989ed31ea704d2242618cb21c6aded48a33857e0a6db (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_almeida, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 05:08:36 np0005604790 podman[266558]: 2026-02-02 10:08:36.20506039 +0000 UTC m=+0.174227152 container start c0256baee7f57e61676f989ed31ea704d2242618cb21c6aded48a33857e0a6db (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_almeida, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Feb  2 05:08:36 np0005604790 podman[266558]: 2026-02-02 10:08:36.208899572 +0000 UTC m=+0.178066424 container attach c0256baee7f57e61676f989ed31ea704d2242618cb21c6aded48a33857e0a6db (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_almeida, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb  2 05:08:36 np0005604790 trusting_almeida[266574]: 167 167
Feb  2 05:08:36 np0005604790 systemd[1]: libpod-c0256baee7f57e61676f989ed31ea704d2242618cb21c6aded48a33857e0a6db.scope: Deactivated successfully.
Feb  2 05:08:36 np0005604790 conmon[266574]: conmon c0256baee7f57e61676f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c0256baee7f57e61676f989ed31ea704d2242618cb21c6aded48a33857e0a6db.scope/container/memory.events
Feb  2 05:08:36 np0005604790 podman[266558]: 2026-02-02 10:08:36.213307149 +0000 UTC m=+0.182473921 container died c0256baee7f57e61676f989ed31ea704d2242618cb21c6aded48a33857e0a6db (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_almeida, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Feb  2 05:08:36 np0005604790 systemd[1]: var-lib-containers-storage-overlay-71ef4985cdd8dad2ce69a54a4118b161bd44e78efbf0d5b3f32cefca2655cfd6-merged.mount: Deactivated successfully.
Feb  2 05:08:36 np0005604790 podman[266558]: 2026-02-02 10:08:36.262678485 +0000 UTC m=+0.231845287 container remove c0256baee7f57e61676f989ed31ea704d2242618cb21c6aded48a33857e0a6db (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Feb  2 05:08:36 np0005604790 systemd[1]: libpod-conmon-c0256baee7f57e61676f989ed31ea704d2242618cb21c6aded48a33857e0a6db.scope: Deactivated successfully.
Feb  2 05:08:36 np0005604790 podman[266600]: 2026-02-02 10:08:36.440585374 +0000 UTC m=+0.044735585 container create 6a51833277b6eab62143f1525bc836e5740abfb2f9b12571fe324e6dd92f69bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_leakey, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb  2 05:08:36 np0005604790 systemd[1]: Started libpod-conmon-6a51833277b6eab62143f1525bc836e5740abfb2f9b12571fe324e6dd92f69bc.scope.
Feb  2 05:08:36 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:08:36 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eab160fa7a50366a0849faba6bc2351aee9406d0c7e21a7e015c13b11d1c6d21/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 05:08:36 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eab160fa7a50366a0849faba6bc2351aee9406d0c7e21a7e015c13b11d1c6d21/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 05:08:36 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eab160fa7a50366a0849faba6bc2351aee9406d0c7e21a7e015c13b11d1c6d21/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 05:08:36 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eab160fa7a50366a0849faba6bc2351aee9406d0c7e21a7e015c13b11d1c6d21/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 05:08:36 np0005604790 podman[266600]: 2026-02-02 10:08:36.424100198 +0000 UTC m=+0.028250429 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:08:36 np0005604790 podman[266600]: 2026-02-02 10:08:36.533321719 +0000 UTC m=+0.137471970 container init 6a51833277b6eab62143f1525bc836e5740abfb2f9b12571fe324e6dd92f69bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_leakey, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb  2 05:08:36 np0005604790 podman[266600]: 2026-02-02 10:08:36.550581465 +0000 UTC m=+0.154731706 container start 6a51833277b6eab62143f1525bc836e5740abfb2f9b12571fe324e6dd92f69bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_leakey, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Feb  2 05:08:36 np0005604790 podman[266600]: 2026-02-02 10:08:36.554727775 +0000 UTC m=+0.158878076 container attach 6a51833277b6eab62143f1525bc836e5740abfb2f9b12571fe324e6dd92f69bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_leakey, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb  2 05:08:36 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:08:36 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:08:36 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:08:36.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:08:36 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v876: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 9.7 KiB/s wr, 1 op/s
Feb  2 05:08:37 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:36 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27740016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:08:37 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:08:37.139Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:08:37 np0005604790 lvm[266693]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 05:08:37 np0005604790 lvm[266693]: VG ceph_vg0 finished
Feb  2 05:08:37 np0005604790 quirky_leakey[266617]: {}
Feb  2 05:08:37 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:08:37 np0005604790 systemd[1]: libpod-6a51833277b6eab62143f1525bc836e5740abfb2f9b12571fe324e6dd92f69bc.scope: Deactivated successfully.
Feb  2 05:08:37 np0005604790 systemd[1]: libpod-6a51833277b6eab62143f1525bc836e5740abfb2f9b12571fe324e6dd92f69bc.scope: Consumed 1.319s CPU time.
Feb  2 05:08:37 np0005604790 podman[266600]: 2026-02-02 10:08:37.386253004 +0000 UTC m=+0.990403245 container died 6a51833277b6eab62143f1525bc836e5740abfb2f9b12571fe324e6dd92f69bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_leakey, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 05:08:37 np0005604790 systemd[1]: var-lib-containers-storage-overlay-eab160fa7a50366a0849faba6bc2351aee9406d0c7e21a7e015c13b11d1c6d21-merged.mount: Deactivated successfully.
Feb  2 05:08:37 np0005604790 podman[266600]: 2026-02-02 10:08:37.441379353 +0000 UTC m=+1.045529604 container remove 6a51833277b6eab62143f1525bc836e5740abfb2f9b12571fe324e6dd92f69bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_leakey, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb  2 05:08:37 np0005604790 systemd[1]: libpod-conmon-6a51833277b6eab62143f1525bc836e5740abfb2f9b12571fe324e6dd92f69bc.scope: Deactivated successfully.
Feb  2 05:08:37 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 05:08:37 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:08:37 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 05:08:37 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:08:37 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:37 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2780003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:08:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:38 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2794002520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:08:38 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:08:38 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:08:38 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:08:38.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:08:38 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:08:38 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:08:38 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:08:38 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:08:38 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:08:38.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:08:38 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v877: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 12 KiB/s wr, 1 op/s
Feb  2 05:08:39 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:39 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f277c000d00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:08:39 np0005604790 nova_compute[252672]: 2026-02-02 10:08:39.267 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:08:39 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:39 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27740016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:08:40 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:40 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2780003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:08:40 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:08:40 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:08:40 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:08:40.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:08:40 np0005604790 nova_compute[252672]: 2026-02-02 10:08:40.615 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:08:40 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:08:40 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:08:40 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:08:40.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:08:40 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v878: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 5.3 KiB/s wr, 1 op/s
Feb  2 05:08:41 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:41 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2794002520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:08:41 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:41 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f277c001e80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:08:42 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:42 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27740016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:08:42 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:08:42 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:08:42 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:08:42.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:08:42 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:08:42 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:08:42 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:08:42 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:08:42.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:08:42 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v879: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 5.3 KiB/s wr, 1 op/s
Feb  2 05:08:43 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:43 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2780003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:08:43 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:43 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2794002520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:08:44 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:44 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2794002520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:08:44 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:08:44 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:08:44 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:08:44.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:08:44 np0005604790 nova_compute[252672]: 2026-02-02 10:08:44.296 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:08:44 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:08:44 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:08:44 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:08:44.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:08:44 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v880: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 6.7 KiB/s wr, 1 op/s
Feb  2 05:08:44 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:08:44] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Feb  2 05:08:44 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:08:44] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Feb  2 05:08:45 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:45 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2794002520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:08:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:08:45.380 165364 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 05:08:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:08:45.381 165364 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 05:08:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:08:45.381 165364 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 05:08:45 np0005604790 nova_compute[252672]: 2026-02-02 10:08:45.441 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 05:08:45 np0005604790 nova_compute[252672]: 2026-02-02 10:08:45.461 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Triggering sync for uuid 53d9b1a9-575b-44c3-b11d-5995012c603a _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Feb  2 05:08:45 np0005604790 nova_compute[252672]: 2026-02-02 10:08:45.462 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Acquiring lock "53d9b1a9-575b-44c3-b11d-5995012c603a" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 05:08:45 np0005604790 nova_compute[252672]: 2026-02-02 10:08:45.463 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Lock "53d9b1a9-575b-44c3-b11d-5995012c603a" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 05:08:45 np0005604790 nova_compute[252672]: 2026-02-02 10:08:45.496 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Lock "53d9b1a9-575b-44c3-b11d-5995012c603a" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.033s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 05:08:45 np0005604790 nova_compute[252672]: 2026-02-02 10:08:45.654 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:08:45 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:45 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2780003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:08:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:46 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2774002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:08:46 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:08:46 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:08:46 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:08:46.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:08:46 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:08:46 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:08:46 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:08:46.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:08:46 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v881: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 3.7 KiB/s wr, 0 op/s
Feb  2 05:08:47 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:47 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2774002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:08:47 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:08:47.140Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb  2 05:08:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 05:08:47 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 05:08:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:08:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:08:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:08:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:08:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:08:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:08:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:08:47 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:47 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2794002520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:08:48 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:48 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2780003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:08:48 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:08:48 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:08:48 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:08:48.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:08:48 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:08:48 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:08:48 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:08:48.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:08:48 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v882: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 6.0 KiB/s wr, 1 op/s
Feb  2 05:08:49 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:49 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2774002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:08:49 np0005604790 nova_compute[252672]: 2026-02-02 10:08:49.357 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:08:49 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:49 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2780003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:08:50 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:50 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2774002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:08:50 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:08:50 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:08:50 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:08:50.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:08:50 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:08:50 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:08:50 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:08:50.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:08:50 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v883: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 3.7 KiB/s wr, 0 op/s
Feb  2 05:08:50 np0005604790 nova_compute[252672]: 2026-02-02 10:08:50.710 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:08:50 np0005604790 nova_compute[252672]: 2026-02-02 10:08:50.991 252676 DEBUG oslo_concurrency.lockutils [None req-70ea41ab-7414-40ab-9e86-20d953661c2c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Acquiring lock "53d9b1a9-575b-44c3-b11d-5995012c603a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 05:08:50 np0005604790 nova_compute[252672]: 2026-02-02 10:08:50.992 252676 DEBUG oslo_concurrency.lockutils [None req-70ea41ab-7414-40ab-9e86-20d953661c2c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Lock "53d9b1a9-575b-44c3-b11d-5995012c603a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 05:08:50 np0005604790 nova_compute[252672]: 2026-02-02 10:08:50.992 252676 DEBUG oslo_concurrency.lockutils [None req-70ea41ab-7414-40ab-9e86-20d953661c2c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Acquiring lock "53d9b1a9-575b-44c3-b11d-5995012c603a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 05:08:50 np0005604790 nova_compute[252672]: 2026-02-02 10:08:50.992 252676 DEBUG oslo_concurrency.lockutils [None req-70ea41ab-7414-40ab-9e86-20d953661c2c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Lock "53d9b1a9-575b-44c3-b11d-5995012c603a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 05:08:50 np0005604790 nova_compute[252672]: 2026-02-02 10:08:50.993 252676 DEBUG oslo_concurrency.lockutils [None req-70ea41ab-7414-40ab-9e86-20d953661c2c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Lock "53d9b1a9-575b-44c3-b11d-5995012c603a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 05:08:50 np0005604790 nova_compute[252672]: 2026-02-02 10:08:50.995 252676 INFO nova.compute.manager [None req-70ea41ab-7414-40ab-9e86-20d953661c2c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 53d9b1a9-575b-44c3-b11d-5995012c603a] Terminating instance#033[00m
Feb  2 05:08:50 np0005604790 nova_compute[252672]: 2026-02-02 10:08:50.996 252676 DEBUG nova.compute.manager [None req-70ea41ab-7414-40ab-9e86-20d953661c2c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 53d9b1a9-575b-44c3-b11d-5995012c603a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb  2 05:08:51 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:51 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2794002520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:08:51 np0005604790 kernel: tap78ac7631-d5 (unregistering): left promiscuous mode
Feb  2 05:08:51 np0005604790 NetworkManager[49024]: <info>  [1770026931.0502] device (tap78ac7631-d5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb  2 05:08:51 np0005604790 nova_compute[252672]: 2026-02-02 10:08:51.057 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:08:51 np0005604790 ovn_controller[154631]: 2026-02-02T10:08:51Z|00074|binding|INFO|Releasing lport 78ac7631-d520-4460-9820-85b034d05a47 from this chassis (sb_readonly=0)
Feb  2 05:08:51 np0005604790 ovn_controller[154631]: 2026-02-02T10:08:51Z|00075|binding|INFO|Setting lport 78ac7631-d520-4460-9820-85b034d05a47 down in Southbound
Feb  2 05:08:51 np0005604790 ovn_controller[154631]: 2026-02-02T10:08:51Z|00076|binding|INFO|Removing iface tap78ac7631-d5 ovn-installed in OVS
Feb  2 05:08:51 np0005604790 nova_compute[252672]: 2026-02-02 10:08:51.062 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:08:51 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:08:51.071 165364 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:69:65:d8 10.100.0.27'], port_security=['fa:16:3e:69:65:d8 10.100.0.27'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.27/28', 'neutron:device_id': '53d9b1a9-575b-44c3-b11d-5995012c603a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e125f54e-7556-49c5-8356-e7390df43c53', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'efbfe697ca674d72b47da5adf3e42c0c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '60fdd9e7-a6d5-4384-bee0-da9bfe0dd977', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a9d42b65-630e-4d58-b649-2acc01d097b4, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa8c6e46640>], logical_port=78ac7631-d520-4460-9820-85b034d05a47) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fa8c6e46640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 05:08:51 np0005604790 nova_compute[252672]: 2026-02-02 10:08:51.073 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:08:51 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:08:51.074 165364 INFO neutron.agent.ovn.metadata.agent [-] Port 78ac7631-d520-4460-9820-85b034d05a47 in datapath e125f54e-7556-49c5-8356-e7390df43c53 unbound from our chassis#033[00m
Feb  2 05:08:51 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:08:51.076 165364 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e125f54e-7556-49c5-8356-e7390df43c53, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb  2 05:08:51 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:08:51.079 257524 DEBUG oslo.privsep.daemon [-] privsep: reply[5b9a565c-d508-4b3e-8e69-63e6af6871b3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:08:51 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:08:51.080 165364 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e125f54e-7556-49c5-8356-e7390df43c53 namespace which is not needed anymore#033[00m
Feb  2 05:08:51 np0005604790 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000007.scope: Deactivated successfully.
Feb  2 05:08:51 np0005604790 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000007.scope: Consumed 15.160s CPU time.
Feb  2 05:08:51 np0005604790 systemd-machined[219024]: Machine qemu-4-instance-00000007 terminated.
Feb  2 05:08:51 np0005604790 nova_compute[252672]: 2026-02-02 10:08:51.237 252676 INFO nova.virt.libvirt.driver [-] [instance: 53d9b1a9-575b-44c3-b11d-5995012c603a] Instance destroyed successfully.#033[00m
Feb  2 05:08:51 np0005604790 nova_compute[252672]: 2026-02-02 10:08:51.238 252676 DEBUG nova.objects.instance [None req-70ea41ab-7414-40ab-9e86-20d953661c2c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Lazy-loading 'resources' on Instance uuid 53d9b1a9-575b-44c3-b11d-5995012c603a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 05:08:51 np0005604790 nova_compute[252672]: 2026-02-02 10:08:51.251 252676 DEBUG nova.compute.manager [req-290f3192-59ab-4aa4-83b5-cad07872b987 req-188fb44f-8a4d-4566-a937-301860617030 b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 53d9b1a9-575b-44c3-b11d-5995012c603a] Received event network-vif-unplugged-78ac7631-d520-4460-9820-85b034d05a47 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 05:08:51 np0005604790 nova_compute[252672]: 2026-02-02 10:08:51.252 252676 DEBUG oslo_concurrency.lockutils [req-290f3192-59ab-4aa4-83b5-cad07872b987 req-188fb44f-8a4d-4566-a937-301860617030 b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Acquiring lock "53d9b1a9-575b-44c3-b11d-5995012c603a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 05:08:51 np0005604790 nova_compute[252672]: 2026-02-02 10:08:51.252 252676 DEBUG oslo_concurrency.lockutils [req-290f3192-59ab-4aa4-83b5-cad07872b987 req-188fb44f-8a4d-4566-a937-301860617030 b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Lock "53d9b1a9-575b-44c3-b11d-5995012c603a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 05:08:51 np0005604790 nova_compute[252672]: 2026-02-02 10:08:51.252 252676 DEBUG oslo_concurrency.lockutils [req-290f3192-59ab-4aa4-83b5-cad07872b987 req-188fb44f-8a4d-4566-a937-301860617030 b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Lock "53d9b1a9-575b-44c3-b11d-5995012c603a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 05:08:51 np0005604790 nova_compute[252672]: 2026-02-02 10:08:51.253 252676 DEBUG nova.compute.manager [req-290f3192-59ab-4aa4-83b5-cad07872b987 req-188fb44f-8a4d-4566-a937-301860617030 b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 53d9b1a9-575b-44c3-b11d-5995012c603a] No waiting events found dispatching network-vif-unplugged-78ac7631-d520-4460-9820-85b034d05a47 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 05:08:51 np0005604790 nova_compute[252672]: 2026-02-02 10:08:51.253 252676 DEBUG nova.compute.manager [req-290f3192-59ab-4aa4-83b5-cad07872b987 req-188fb44f-8a4d-4566-a937-301860617030 b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 53d9b1a9-575b-44c3-b11d-5995012c603a] Received event network-vif-unplugged-78ac7631-d520-4460-9820-85b034d05a47 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb  2 05:08:51 np0005604790 nova_compute[252672]: 2026-02-02 10:08:51.256 252676 DEBUG nova.virt.libvirt.vif [None req-70ea41ab-7414-40ab-9e86-20d953661c2c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T10:07:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-355732534',display_name='tempest-TestNetworkBasicOps-server-355732534',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-355732534',id=7,image_ref='d5e062d7-95ef-409c-9ad0-60f7cf6f44ce',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPTRdyDyOro02zcZovzIXe9vITTMyq5TwzlgQ3dykKB+yswJAZhQnNNAhQdaRP1t7jc8pome8uY1/pM4AXxSNJWyd6YYrM85SO+8YpHGgHMgUTkXjtCiKGfZUokBgkv5OA==',key_name='tempest-TestNetworkBasicOps-977589391',keypairs=<?>,launch_index=0,launched_at=2026-02-02T10:08:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='efbfe697ca674d72b47da5adf3e42c0c',ramdisk_id='',reservation_id='r-bmc8mzv9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='d5e062d7-95ef-409c-9ad0-60f7cf6f44ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-793549693',owner_user_name='tempest-TestNetworkBasicOps-793549693-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T10:08:01Z,user_data=None,user_id='1b1695a2a70d4aa0aa350ba17d8f6d5e',uuid=53d9b1a9-575b-44c3-b11d-5995012c603a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "78ac7631-d520-4460-9820-85b034d05a47", "address": "fa:16:3e:69:65:d8", "network": {"id": "e125f54e-7556-49c5-8356-e7390df43c53", "bridge": "br-int", "label": "tempest-network-smoke--39971515", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "efbfe697ca674d72b47da5adf3e42c0c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78ac7631-d5", "ovs_interfaceid": "78ac7631-d520-4460-9820-85b034d05a47", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb  2 05:08:51 np0005604790 nova_compute[252672]: 2026-02-02 10:08:51.256 252676 DEBUG nova.network.os_vif_util [None req-70ea41ab-7414-40ab-9e86-20d953661c2c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Converting VIF {"id": "78ac7631-d520-4460-9820-85b034d05a47", "address": "fa:16:3e:69:65:d8", "network": {"id": "e125f54e-7556-49c5-8356-e7390df43c53", "bridge": "br-int", "label": "tempest-network-smoke--39971515", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "efbfe697ca674d72b47da5adf3e42c0c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78ac7631-d5", "ovs_interfaceid": "78ac7631-d520-4460-9820-85b034d05a47", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 05:08:51 np0005604790 nova_compute[252672]: 2026-02-02 10:08:51.257 252676 DEBUG nova.network.os_vif_util [None req-70ea41ab-7414-40ab-9e86-20d953661c2c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:69:65:d8,bridge_name='br-int',has_traffic_filtering=True,id=78ac7631-d520-4460-9820-85b034d05a47,network=Network(e125f54e-7556-49c5-8356-e7390df43c53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78ac7631-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 05:08:51 np0005604790 nova_compute[252672]: 2026-02-02 10:08:51.257 252676 DEBUG os_vif [None req-70ea41ab-7414-40ab-9e86-20d953661c2c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:69:65:d8,bridge_name='br-int',has_traffic_filtering=True,id=78ac7631-d520-4460-9820-85b034d05a47,network=Network(e125f54e-7556-49c5-8356-e7390df43c53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78ac7631-d5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb  2 05:08:51 np0005604790 nova_compute[252672]: 2026-02-02 10:08:51.259 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:08:51 np0005604790 nova_compute[252672]: 2026-02-02 10:08:51.259 252676 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap78ac7631-d5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 05:08:51 np0005604790 nova_compute[252672]: 2026-02-02 10:08:51.261 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:08:51 np0005604790 nova_compute[252672]: 2026-02-02 10:08:51.263 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:08:51 np0005604790 nova_compute[252672]: 2026-02-02 10:08:51.269 252676 INFO os_vif [None req-70ea41ab-7414-40ab-9e86-20d953661c2c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:69:65:d8,bridge_name='br-int',has_traffic_filtering=True,id=78ac7631-d520-4460-9820-85b034d05a47,network=Network(e125f54e-7556-49c5-8356-e7390df43c53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78ac7631-d5')#033[00m
Feb  2 05:08:51 np0005604790 neutron-haproxy-ovnmeta-e125f54e-7556-49c5-8356-e7390df43c53[265721]: [NOTICE]   (265725) : haproxy version is 2.8.14-c23fe91
Feb  2 05:08:51 np0005604790 neutron-haproxy-ovnmeta-e125f54e-7556-49c5-8356-e7390df43c53[265721]: [NOTICE]   (265725) : path to executable is /usr/sbin/haproxy
Feb  2 05:08:51 np0005604790 neutron-haproxy-ovnmeta-e125f54e-7556-49c5-8356-e7390df43c53[265721]: [WARNING]  (265725) : Exiting Master process...
Feb  2 05:08:51 np0005604790 neutron-haproxy-ovnmeta-e125f54e-7556-49c5-8356-e7390df43c53[265721]: [WARNING]  (265725) : Exiting Master process...
Feb  2 05:08:51 np0005604790 neutron-haproxy-ovnmeta-e125f54e-7556-49c5-8356-e7390df43c53[265721]: [ALERT]    (265725) : Current worker (265727) exited with code 143 (Terminated)
Feb  2 05:08:51 np0005604790 neutron-haproxy-ovnmeta-e125f54e-7556-49c5-8356-e7390df43c53[265721]: [WARNING]  (265725) : All workers exited. Exiting... (0)
Feb  2 05:08:51 np0005604790 systemd[1]: libpod-06d11ed54b4b61d6bf104d4ff4f5ea6a1eece6cae636763afaad86da4b7f9416.scope: Deactivated successfully.
Feb  2 05:08:51 np0005604790 podman[266799]: 2026-02-02 10:08:51.281741708 +0000 UTC m=+0.077370899 container died 06d11ed54b4b61d6bf104d4ff4f5ea6a1eece6cae636763afaad86da4b7f9416 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc, name=neutron-haproxy-ovnmeta-e125f54e-7556-49c5-8356-e7390df43c53, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb  2 05:08:51 np0005604790 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-06d11ed54b4b61d6bf104d4ff4f5ea6a1eece6cae636763afaad86da4b7f9416-userdata-shm.mount: Deactivated successfully.
Feb  2 05:08:51 np0005604790 systemd[1]: var-lib-containers-storage-overlay-a6259d05f4c6668caf1cdce1908a563f4f89105371df8c5f83a175a1996cee86-merged.mount: Deactivated successfully.
Feb  2 05:08:51 np0005604790 podman[266799]: 2026-02-02 10:08:51.338628914 +0000 UTC m=+0.134258145 container cleanup 06d11ed54b4b61d6bf104d4ff4f5ea6a1eece6cae636763afaad86da4b7f9416 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc, name=neutron-haproxy-ovnmeta-e125f54e-7556-49c5-8356-e7390df43c53, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb  2 05:08:51 np0005604790 systemd[1]: libpod-conmon-06d11ed54b4b61d6bf104d4ff4f5ea6a1eece6cae636763afaad86da4b7f9416.scope: Deactivated successfully.
Feb  2 05:08:51 np0005604790 podman[266858]: 2026-02-02 10:08:51.418921699 +0000 UTC m=+0.052326786 container remove 06d11ed54b4b61d6bf104d4ff4f5ea6a1eece6cae636763afaad86da4b7f9416 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc, name=neutron-haproxy-ovnmeta-e125f54e-7556-49c5-8356-e7390df43c53, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb  2 05:08:51 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:08:51.425 257524 DEBUG oslo.privsep.daemon [-] privsep: reply[a3e194e2-43a8-4e89-bf87-2fb0e90cc12b]: (4, ('Mon Feb  2 10:08:51 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-e125f54e-7556-49c5-8356-e7390df43c53 (06d11ed54b4b61d6bf104d4ff4f5ea6a1eece6cae636763afaad86da4b7f9416)\n06d11ed54b4b61d6bf104d4ff4f5ea6a1eece6cae636763afaad86da4b7f9416\nMon Feb  2 10:08:51 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-e125f54e-7556-49c5-8356-e7390df43c53 (06d11ed54b4b61d6bf104d4ff4f5ea6a1eece6cae636763afaad86da4b7f9416)\n06d11ed54b4b61d6bf104d4ff4f5ea6a1eece6cae636763afaad86da4b7f9416\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:08:51 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:08:51.427 257524 DEBUG oslo.privsep.daemon [-] privsep: reply[4baa7b38-0cbc-48a2-99fe-34714182ecba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:08:51 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:08:51.428 165364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape125f54e-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 05:08:51 np0005604790 nova_compute[252672]: 2026-02-02 10:08:51.430 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:08:51 np0005604790 kernel: tape125f54e-70: left promiscuous mode
Feb  2 05:08:51 np0005604790 nova_compute[252672]: 2026-02-02 10:08:51.436 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:08:51 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:08:51.440 257524 DEBUG oslo.privsep.daemon [-] privsep: reply[e040bed8-75c9-4b8c-ba8b-d5f8a277fddf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:08:51 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:08:51.461 257524 DEBUG oslo.privsep.daemon [-] privsep: reply[298ef393-06fd-4198-a5a2-bb7f877d3da2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:08:51 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:08:51.463 257524 DEBUG oslo.privsep.daemon [-] privsep: reply[ac0f7532-51a7-4ebc-9ba9-59ac3b32df84]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:08:51 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:08:51.482 257524 DEBUG oslo.privsep.daemon [-] privsep: reply[94234699-dac0-43fe-aaa4-84c2e2088085]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 403942, 'reachable_time': 36776, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 266872, 'error': None, 'target': 'ovnmeta-e125f54e-7556-49c5-8356-e7390df43c53', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:08:51 np0005604790 systemd[1]: run-netns-ovnmeta\x2de125f54e\x2d7556\x2d49c5\x2d8356\x2de7390df43c53.mount: Deactivated successfully.
Feb  2 05:08:51 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:08:51.485 166028 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e125f54e-7556-49c5-8356-e7390df43c53 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb  2 05:08:51 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:08:51.486 166028 DEBUG oslo.privsep.daemon [-] privsep: reply[6d638d57-abc5-4539-a8ba-e6bdd427e98d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:08:51 np0005604790 nova_compute[252672]: 2026-02-02 10:08:51.701 252676 INFO nova.virt.libvirt.driver [None req-70ea41ab-7414-40ab-9e86-20d953661c2c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 53d9b1a9-575b-44c3-b11d-5995012c603a] Deleting instance files /var/lib/nova/instances/53d9b1a9-575b-44c3-b11d-5995012c603a_del#033[00m
Feb  2 05:08:51 np0005604790 nova_compute[252672]: 2026-02-02 10:08:51.702 252676 INFO nova.virt.libvirt.driver [None req-70ea41ab-7414-40ab-9e86-20d953661c2c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 53d9b1a9-575b-44c3-b11d-5995012c603a] Deletion of /var/lib/nova/instances/53d9b1a9-575b-44c3-b11d-5995012c603a_del complete#033[00m
Feb  2 05:08:51 np0005604790 nova_compute[252672]: 2026-02-02 10:08:51.770 252676 INFO nova.compute.manager [None req-70ea41ab-7414-40ab-9e86-20d953661c2c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 53d9b1a9-575b-44c3-b11d-5995012c603a] Took 0.77 seconds to destroy the instance on the hypervisor.#033[00m
Feb  2 05:08:51 np0005604790 nova_compute[252672]: 2026-02-02 10:08:51.771 252676 DEBUG oslo.service.loopingcall [None req-70ea41ab-7414-40ab-9e86-20d953661c2c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb  2 05:08:51 np0005604790 nova_compute[252672]: 2026-02-02 10:08:51.771 252676 DEBUG nova.compute.manager [-] [instance: 53d9b1a9-575b-44c3-b11d-5995012c603a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb  2 05:08:51 np0005604790 nova_compute[252672]: 2026-02-02 10:08:51.771 252676 DEBUG nova.network.neutron [-] [instance: 53d9b1a9-575b-44c3-b11d-5995012c603a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb  2 05:08:51 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:51 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2780003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:08:52 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:52 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f277c002920 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:08:52 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:08:52 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:08:52 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:08:52.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:08:52 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:08:52 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:08:52 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:08:52 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:08:52.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:08:52 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v884: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 3.7 KiB/s wr, 0 op/s
Feb  2 05:08:53 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:53 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f277c002920 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:08:53 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:53 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2774002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:08:54 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:54 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f277c002920 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:08:54 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:08:54 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:08:54 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:08:54.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:08:54 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:08:54.229 165364 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:4f:4d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '4a:a7:f3:61:65:15'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 05:08:54 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:08:54.230 165364 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
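
The ovn_metadata_agent lines show ovsdbapp's IDL event machinery at work: an UPDATE to the southbound SB_Global row (nb_cfg bumped from 8 to 9) matched the agent's registered SbGlobalUpdateEvent, and the handler defers its Chassis_Private acknowledgement by a delay (7 seconds here). The general shape of such a handler, sketched against ovsdbapp with the registration wiring simplified:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class SbGlobalUpdateEvent(row_event.RowEvent):
        """Matches the UPDATE logged above: SB_Global, nb_cfg 8 -> 9."""

        def __init__(self):
            # (events, table, conditions): any 'update' on SB_Global
            super().__init__((self.ROW_UPDATE,), 'SB_Global', None)

        def run(self, event, row, old):
            # row carries the new nb_cfg (9); old carries nb_cfg=8
            print('nb_cfg moved to', row.nb_cfg)

    # registration is roughly:
    #   idl.notify_handler.watch_event(SbGlobalUpdateEvent())
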
Feb  2 05:08:54 np0005604790 nova_compute[252672]: 2026-02-02 10:08:54.230 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:08:54 np0005604790 nova_compute[252672]: 2026-02-02 10:08:54.245 252676 DEBUG nova.compute.manager [req-db124a2d-a361-45ce-a7bd-47abaf6b7ab0 req-485bd26a-50b6-4f75-be21-adf50689b052 b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 53d9b1a9-575b-44c3-b11d-5995012c603a] Received event network-vif-plugged-78ac7631-d520-4460-9820-85b034d05a47 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 05:08:54 np0005604790 nova_compute[252672]: 2026-02-02 10:08:54.246 252676 DEBUG oslo_concurrency.lockutils [req-db124a2d-a361-45ce-a7bd-47abaf6b7ab0 req-485bd26a-50b6-4f75-be21-adf50689b052 b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Acquiring lock "53d9b1a9-575b-44c3-b11d-5995012c603a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 05:08:54 np0005604790 nova_compute[252672]: 2026-02-02 10:08:54.246 252676 DEBUG oslo_concurrency.lockutils [req-db124a2d-a361-45ce-a7bd-47abaf6b7ab0 req-485bd26a-50b6-4f75-be21-adf50689b052 b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Lock "53d9b1a9-575b-44c3-b11d-5995012c603a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 05:08:54 np0005604790 nova_compute[252672]: 2026-02-02 10:08:54.246 252676 DEBUG oslo_concurrency.lockutils [req-db124a2d-a361-45ce-a7bd-47abaf6b7ab0 req-485bd26a-50b6-4f75-be21-adf50689b052 b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Lock "53d9b1a9-575b-44c3-b11d-5995012c603a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 05:08:54 np0005604790 nova_compute[252672]: 2026-02-02 10:08:54.247 252676 DEBUG nova.compute.manager [req-db124a2d-a361-45ce-a7bd-47abaf6b7ab0 req-485bd26a-50b6-4f75-be21-adf50689b052 b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 53d9b1a9-575b-44c3-b11d-5995012c603a] No waiting events found dispatching network-vif-plugged-78ac7631-d520-4460-9820-85b034d05a47 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 05:08:54 np0005604790 nova_compute[252672]: 2026-02-02 10:08:54.247 252676 WARNING nova.compute.manager [req-db124a2d-a361-45ce-a7bd-47abaf6b7ab0 req-485bd26a-50b6-4f75-be21-adf50689b052 b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 53d9b1a9-575b-44c3-b11d-5995012c603a] Received unexpected event network-vif-plugged-78ac7631-d520-4460-9820-85b034d05a47 for instance with vm_state active and task_state deleting.#033[00m
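
This WARNING closes the loop on Nova's external-events path: Neutron reports vif plug/unplug through the os-server-external-events API, the compute manager pops any waiter registered for the (event, tag) pair, and since this instance is already in task_state deleting nothing is waiting, so the event is only logged. A hedged sketch of the originating call (endpoint URL and token are hypothetical; the body shape follows Nova's API, with the port ID as the tag):

    import requests

    NOVA = 'http://nova-api.example.com:8774/v2.1'   # hypothetical endpoint
    TOKEN = 'gAAAA-example-token'                    # hypothetical Keystone token

    body = {'events': [{
        'name': 'network-vif-plugged',
        'server_uuid': '53d9b1a9-575b-44c3-b11d-5995012c603a',
        'tag': '78ac7631-d520-4460-9820-85b034d05a47',
    }]}
    r = requests.post(f'{NOVA}/os-server-external-events',
                      json=body, headers={'X-Auth-Token': TOKEN})
    print(r.status_code)
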
Feb  2 05:08:54 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:08:54 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:08:54 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:08:54.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:08:54 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v885: 353 pgs: 353 active+clean; 121 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 7.5 KiB/s wr, 29 op/s
Feb  2 05:08:54 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:08:54] "GET /metrics HTTP/1.1" 200 48471 "" "Prometheus/2.51.0"
Feb  2 05:08:54 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:08:54] "GET /metrics HTTP/1.1" 200 48471 "" "Prometheus/2.51.0"
Feb  2 05:08:55 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:55 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2794002520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:08:55 np0005604790 nova_compute[252672]: 2026-02-02 10:08:55.194 252676 DEBUG nova.network.neutron [-] [instance: 53d9b1a9-575b-44c3-b11d-5995012c603a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 05:08:55 np0005604790 nova_compute[252672]: 2026-02-02 10:08:55.213 252676 INFO nova.compute.manager [-] [instance: 53d9b1a9-575b-44c3-b11d-5995012c603a] Took 3.44 seconds to deallocate network for instance.#033[00m
Feb  2 05:08:55 np0005604790 nova_compute[252672]: 2026-02-02 10:08:55.259 252676 DEBUG oslo_concurrency.lockutils [None req-70ea41ab-7414-40ab-9e86-20d953661c2c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 05:08:55 np0005604790 nova_compute[252672]: 2026-02-02 10:08:55.260 252676 DEBUG oslo_concurrency.lockutils [None req-70ea41ab-7414-40ab-9e86-20d953661c2c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 05:08:55 np0005604790 nova_compute[252672]: 2026-02-02 10:08:55.338 252676 DEBUG oslo_concurrency.processutils [None req-70ea41ab-7414-40ab-9e86-20d953661c2c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 05:08:55 np0005604790 nova_compute[252672]: 2026-02-02 10:08:55.714 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:08:55 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 05:08:55 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4280504931' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb  2 05:08:55 np0005604790 nova_compute[252672]: 2026-02-02 10:08:55.791 252676 DEBUG oslo_concurrency.processutils [None req-70ea41ab-7414-40ab-9e86-20d953661c2c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
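
With images backed by RBD, Nova sizes its disk inventory by shelling out to the Ceph CLI, which is the subprocess logged above and the client.openstack "df" dispatch audited by the mon. The same call and the fields of interest, as a sketch (treating the 'vms' pool as Nova's ephemeral pool is an assumption):

    import json, subprocess

    out = subprocess.check_output(
        ['ceph', 'df', '--format=json', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'])
    df = json.loads(out)

    total = df['stats']['total_bytes']
    avail = df['stats']['total_avail_bytes']
    print(f'cluster: {avail / 2**30:.1f} GiB free of {total / 2**30:.1f} GiB')

    for pool in df['pools']:
        if pool['name'] == 'vms':   # assumption: the Nova ephemeral pool
            print('vms pool stats:', pool['stats'])
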
Feb  2 05:08:55 np0005604790 nova_compute[252672]: 2026-02-02 10:08:55.798 252676 DEBUG nova.compute.provider_tree [None req-70ea41ab-7414-40ab-9e86-20d953661c2c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Inventory has not changed in ProviderTree for provider: 9e3db6fc-2145-4f13-bc7c-f7ae57d4e004 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 05:08:55 np0005604790 nova_compute[252672]: 2026-02-02 10:08:55.818 252676 DEBUG nova.scheduler.client.report [None req-70ea41ab-7414-40ab-9e86-20d953661c2c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Inventory has not changed for provider 9e3db6fc-2145-4f13-bc7c-f7ae57d4e004 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
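
The inventory dict above is what lands in Placement. Usable capacity per resource class is (total - reserved) * allocation_ratio, so this host schedules as 32 vCPUs, 7167 MB of RAM, and 52.2 GB of disk:

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, cap)
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2
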
Feb  2 05:08:55 np0005604790 nova_compute[252672]: 2026-02-02 10:08:55.843 252676 DEBUG oslo_concurrency.lockutils [None req-70ea41ab-7414-40ab-9e86-20d953661c2c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.583s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 05:08:55 np0005604790 nova_compute[252672]: 2026-02-02 10:08:55.879 252676 INFO nova.scheduler.client.report [None req-70ea41ab-7414-40ab-9e86-20d953661c2c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Deleted allocations for instance 53d9b1a9-575b-44c3-b11d-5995012c603a#033[00m
Feb  2 05:08:55 np0005604790 nova_compute[252672]: 2026-02-02 10:08:55.937 252676 DEBUG oslo_concurrency.lockutils [None req-70ea41ab-7414-40ab-9e86-20d953661c2c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Lock "53d9b1a9-575b-44c3-b11d-5995012c603a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.945s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
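
The Acquiring/acquired/released triples throughout this trace are oslo.concurrency's lock instrumentation; here the per-instance lock guarded do_terminate_instance for 4.945 s end to end. In application code the same pattern is a context manager or decorator; a sketch with the lock names copied from the log:

    from oslo_concurrency import lockutils

    # context-manager form, as around the event pop earlier in the trace
    with lockutils.lock('53d9b1a9-575b-44c3-b11d-5995012c603a-events'):
        pass  # critical section: pop_instance_event

    # decorator form, as around the instance delete
    @lockutils.synchronized('53d9b1a9-575b-44c3-b11d-5995012c603a')
    def do_terminate_instance():
        pass  # runs with the per-instance lock held
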
Feb  2 05:08:55 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:55 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2774002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:08:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:56 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2774002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:08:56 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:08:56 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:08:56 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:08:56.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:08:56 np0005604790 nova_compute[252672]: 2026-02-02 10:08:56.289 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:08:56 np0005604790 nova_compute[252672]: 2026-02-02 10:08:56.350 252676 DEBUG nova.compute.manager [req-95dbbcb5-b4f6-4815-96dd-aebcc20bb153 req-8e7dcb03-1dbf-468e-be39-1dd02aa342de b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 53d9b1a9-575b-44c3-b11d-5995012c603a] Received event network-vif-deleted-78ac7631-d520-4460-9820-85b034d05a47 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 05:08:56 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:08:56 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:08:56 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:08:56.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:08:56 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v886: 353 pgs: 353 active+clean; 121 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 6.2 KiB/s wr, 28 op/s
Feb  2 05:08:57 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:57 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f277c003b40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:08:57 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:08:57.141Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
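
Alertmanager on this node is failing to deliver to the ceph-dashboard webhook receivers on compute-1 and compute-2: both posts to :8443/api/prometheus_receiver time out, so the dispatcher retries and eventually drops the notification. The receiving side of such a webhook is only an HTTP endpoint that accepts Alertmanager's JSON payload; a minimal stand-in for testing (path and port copied from the log; the real dashboard endpoint is TLS, this sketch is plain HTTP):

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Receiver(BaseHTTPRequestHandler):
        def do_POST(self):
            if self.path != '/api/prometheus_receiver':
                self.send_response(404); self.end_headers(); return
            body = self.rfile.read(int(self.headers['Content-Length']))
            payload = json.loads(body)
            # Alertmanager posts {"status": ..., "alerts": [...], ...}
            print('received', len(payload.get('alerts', [])), 'alert(s)')
            self.send_response(200); self.end_headers()

    HTTPServer(('0.0.0.0', 8443), Receiver).serve_forever()
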
Feb  2 05:08:57 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:08:57 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:57 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2794002520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:08:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:58 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2794002520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:08:58 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:08:58 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:08:58 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:08:58.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:08:58 np0005604790 podman[266904]: 2026-02-02 10:08:58.401303943 +0000 UTC m=+0.111214354 container health_status e39122d9482da8df802204ab6c35fb7c982874580968140e6f06bdfc8eefae36 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
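
This podman record is a health_status event: the ovn_controller container's configured check ('test': '/openstack/healthcheck') ran, reported healthy, and kept the failing streak at 0. The same check can be exercised by hand; a sketch driving the podman CLI from Python:

    import subprocess

    # 'podman healthcheck run' executes the container's configured test
    # command and exits 0 when the container is healthy.
    rc = subprocess.call(['podman', 'healthcheck', 'run', 'ovn_controller'])
    print('healthy' if rc == 0 else f'unhealthy (rc={rc})')
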
Feb  2 05:08:58 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:08:58 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:08:58 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:08:58.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:08:58 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v887: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 8.2 KiB/s wr, 29 op/s
Feb  2 05:08:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:59 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2794002520 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:08:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:08:59 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f277c003b40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:09:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:09:00 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2780003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:09:00 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:09:00 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:09:00 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:09:00.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:09:00 np0005604790 nova_compute[252672]: 2026-02-02 10:09:00.503 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:09:00 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:09:00 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:09:00 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:09:00.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:09:00 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v888: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 5.8 KiB/s wr, 28 op/s
Feb  2 05:09:00 np0005604790 nova_compute[252672]: 2026-02-02 10:09:00.716 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:09:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:09:01 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2774003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:09:01 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:09:01.233 165364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=031ca08d-19ea-44b4-b1bd-33ab088eb6a6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 05:09:01 np0005604790 nova_compute[252672]: 2026-02-02 10:09:01.333 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:09:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:09:01 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27940041f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:09:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:09:02 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f277c003b40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:09:02 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:09:02 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:09:02 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:09:02.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:09:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 05:09:02 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 05:09:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:09:02 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:09:02 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:09:02 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:09:02.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:09:02 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v889: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 5.8 KiB/s wr, 28 op/s
Feb  2 05:09:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:09:03 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2780003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:09:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:09:03 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2774003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:09:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:09:04 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27940041f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:09:04 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:09:04 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:09:04 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:09:04.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:09:04 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:09:04 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:09:04 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:09:04.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:09:04 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v890: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 7.0 KiB/s wr, 56 op/s
Feb  2 05:09:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:09:04] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Feb  2 05:09:04 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:09:04] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Feb  2 05:09:05 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:09:05 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f277c003b40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:09:05 np0005604790 nova_compute[252672]: 2026-02-02 10:09:05.760 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:09:05 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:09:05 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f279c001320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:09:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:09:06 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2774003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:09:06 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:09:06 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:09:06 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:09:06.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:09:06 np0005604790 nova_compute[252672]: 2026-02-02 10:09:06.233 252676 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1770026931.2325768, 53d9b1a9-575b-44c3-b11d-5995012c603a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 05:09:06 np0005604790 nova_compute[252672]: 2026-02-02 10:09:06.234 252676 INFO nova.compute.manager [-] [instance: 53d9b1a9-575b-44c3-b11d-5995012c603a] VM Stopped (Lifecycle Event)#033[00m
Feb  2 05:09:06 np0005604790 nova_compute[252672]: 2026-02-02 10:09:06.254 252676 DEBUG nova.compute.manager [None req-23fee1cb-ecb2-4541-a4cb-45467b9fa319 - - - - - -] [instance: 53d9b1a9-575b-44c3-b11d-5995012c603a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 05:09:06 np0005604790 nova_compute[252672]: 2026-02-02 10:09:06.335 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:09:06 np0005604790 podman[266965]: 2026-02-02 10:09:06.354688919 +0000 UTC m=+0.066744058 container health_status 29bea7eb4976451e3dfdb1e9a3aba30b10e64cbce131bf8d09548b4a224f8ee8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb  2 05:09:06 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:09:06 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:09:06 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:09:06.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:09:06 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v891: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 3.2 KiB/s wr, 28 op/s
Feb  2 05:09:07 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:09:07 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27940041f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:09:07 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:09:07.143Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb  2 05:09:07 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:09:07.143Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:09:07 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:09:07 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:09:07 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27840021f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:09:08 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:09:08 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f279c001320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:09:08 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:09:08 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:09:08 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:09:08.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:09:08 np0005604790 nova_compute[252672]: 2026-02-02 10:09:08.282 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 05:09:08 np0005604790 nova_compute[252672]: 2026-02-02 10:09:08.345 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 05:09:08 np0005604790 nova_compute[252672]: 2026-02-02 10:09:08.346 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 05:09:08 np0005604790 nova_compute[252672]: 2026-02-02 10:09:08.346 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 05:09:08 np0005604790 nova_compute[252672]: 2026-02-02 10:09:08.346 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb  2 05:09:08 np0005604790 nova_compute[252672]: 2026-02-02 10:09:08.346 252676 DEBUG oslo_concurrency.processutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 05:09:08 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:09:08 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:09:08 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:09:08.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:09:08 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v892: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 3.2 KiB/s wr, 28 op/s
Feb  2 05:09:08 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 05:09:08 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2488635550' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb  2 05:09:08 np0005604790 nova_compute[252672]: 2026-02-02 10:09:08.822 252676 DEBUG oslo_concurrency.processutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 05:09:08 np0005604790 nova_compute[252672]: 2026-02-02 10:09:08.993 252676 WARNING nova.virt.libvirt.driver [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 05:09:08 np0005604790 nova_compute[252672]: 2026-02-02 10:09:08.994 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4504MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
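
In the resource view above, vendor_id 1af4 is Red Hat/virtio and 8086 is Intel; the host exposes only emulated and virtio devices, all without NUMA affinity. A short sketch tallying the reported pci_devices by vendor (the list literal is abbreviated to two of the eleven entries):

    import json
    from collections import Counter

    pci_devices = json.loads('''[
      {"dev_id": "pci_0000_00_04_0", "vendor_id": "1af4", "product_id": "1001"},
      {"dev_id": "pci_0000_00_01_2", "vendor_id": "8086", "product_id": "7020"}
    ]''')  # abbreviated from the log line above

    VENDORS = {'1af4': 'virtio (Red Hat)', '8086': 'Intel'}
    counts = Counter(VENDORS.get(d['vendor_id'], d['vendor_id'])
                     for d in pci_devices)
    print(counts)  # on the full list above: 6x virtio, 5x Intel
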
Feb  2 05:09:08 np0005604790 nova_compute[252672]: 2026-02-02 10:09:08.994 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 05:09:08 np0005604790 nova_compute[252672]: 2026-02-02 10:09:08.994 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 05:09:09 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:09:09 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2774003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:09:09 np0005604790 nova_compute[252672]: 2026-02-02 10:09:09.096 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb  2 05:09:09 np0005604790 nova_compute[252672]: 2026-02-02 10:09:09.097 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb  2 05:09:09 np0005604790 nova_compute[252672]: 2026-02-02 10:09:09.118 252676 DEBUG oslo_concurrency.processutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 05:09:09 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 05:09:09 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2731748324' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb  2 05:09:09 np0005604790 nova_compute[252672]: 2026-02-02 10:09:09.656 252676 DEBUG oslo_concurrency.processutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 05:09:09 np0005604790 nova_compute[252672]: 2026-02-02 10:09:09.662 252676 DEBUG nova.compute.provider_tree [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Inventory has not changed in ProviderTree for provider: 9e3db6fc-2145-4f13-bc7c-f7ae57d4e004 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 05:09:09 np0005604790 nova_compute[252672]: 2026-02-02 10:09:09.692 252676 DEBUG nova.scheduler.client.report [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Inventory has not changed for provider 9e3db6fc-2145-4f13-bc7c-f7ae57d4e004 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 05:09:09 np0005604790 nova_compute[252672]: 2026-02-02 10:09:09.727 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb  2 05:09:09 np0005604790 nova_compute[252672]: 2026-02-02 10:09:09.727 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.733s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 05:09:09 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:09:09 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27940041f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:09:10 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:09:10 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2784002d20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:09:10 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:09:10 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:09:10 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:09:10.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:09:10 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v893: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Feb  2 05:09:10 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:09:10 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:09:10 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:09:10.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:09:10 np0005604790 ceph-osd[82705]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb  2 05:09:10 np0005604790 ceph-osd[82705]: rocksdb: [db/db_impl/db_impl.cc:1111]
    ** DB Stats **
    Uptime(secs): 1800.1 total, 600.0 interval
    Cumulative writes: 8845 writes, 33K keys, 8845 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
    Cumulative WAL: 8845 writes, 2156 syncs, 4.10 writes per sync, written: 0.03 GB, 0.01 MB/s
    Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
    Interval writes: 1978 writes, 6320 keys, 1978 commit groups, 1.0 writes per commit group, ingest: 6.41 MB, 0.01 MB/s
    Interval WAL: 1978 writes, 845 syncs, 2.34 writes per sync, written: 0.01 GB, 0.01 MB/s
    Interval stall: 00:00:0.000 H:M:S, 0.0 percent
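
The WAL figures in this dump are internally consistent: cumulatively, 8845 writes over 2156 syncs gives the reported 4.10 writes per sync, and the last 600 s interval's 1978 writes over 845 syncs gives 2.34. As a quick check:

    cum_writes, cum_syncs = 8845, 2156
    int_writes, int_syncs = 1978, 845
    print(round(cum_writes / cum_syncs, 2))  # 4.1
    print(round(int_writes / int_syncs, 2))  # 2.34
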
Feb  2 05:09:10 np0005604790 nova_compute[252672]: 2026-02-02 10:09:10.805 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:09:11 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:09:11 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f279c001320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:09:11 np0005604790 nova_compute[252672]: 2026-02-02 10:09:11.338 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:09:11 np0005604790 nova_compute[252672]: 2026-02-02 10:09:11.728 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 05:09:11 np0005604790 nova_compute[252672]: 2026-02-02 10:09:11.729 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb  2 05:09:11 np0005604790 nova_compute[252672]: 2026-02-02 10:09:11.729 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb  2 05:09:11 np0005604790 nova_compute[252672]: 2026-02-02 10:09:11.762 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb  2 05:09:11 np0005604790 nova_compute[252672]: 2026-02-02 10:09:11.765 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 05:09:11 np0005604790 nova_compute[252672]: 2026-02-02 10:09:11.765 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 05:09:11 np0005604790 nova_compute[252672]: 2026-02-02 10:09:11.766 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 05:09:11 np0005604790 nova_compute[252672]: 2026-02-02 10:09:11.766 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
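
The run of "Running periodic task ComputeManager._*" lines is oslo.service iterating the compute manager's decorated periodic methods under a single request-id; _reclaim_queued_deletes bails out immediately because reclaim_instance_interval <= 0. The underlying pattern, sketched:

    from oslo_config import cfg
    from oslo_service import periodic_task

    CONF = cfg.CONF

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(CONF)

        @periodic_task.periodic_task(spacing=60)  # run every 60 s
        def _poll_rescued_instances(self, context):
            print('polling rescued instances')

    # the service's timer loop drives this, roughly:
    Manager().run_periodic_tasks(None)
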
Feb  2 05:09:11 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:09:11 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2774003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:09:12 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:09:12 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2774003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:09:12 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:09:12 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:09:12 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:09:12.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:09:12 np0005604790 nova_compute[252672]: 2026-02-02 10:09:12.282 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 05:09:12 np0005604790 nova_compute[252672]: 2026-02-02 10:09:12.282 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 05:09:12 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:09:12 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v894: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Feb  2 05:09:12 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:09:12 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:09:12 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:09:12.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:09:13 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:09:13 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f279c001320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:09:13 np0005604790 nova_compute[252672]: 2026-02-02 10:09:13.281 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 05:09:13 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:09:13 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2784002d20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:09:14 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:09:14 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2784002d20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:09:14 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:09:14 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:09:14 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:09:14.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:09:14 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v895: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Feb  2 05:09:14 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:09:14 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:09:14 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:09:14.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:09:14 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:09:14] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Feb  2 05:09:14 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:09:14] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Feb  2 05:09:15 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:09:15 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2784002d20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:09:15 np0005604790 nova_compute[252672]: 2026-02-02 10:09:15.282 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 05:09:15 np0005604790 nova_compute[252672]: 2026-02-02 10:09:15.810 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:09:15 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:09:15 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f279c001320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:09:16 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:09:16 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:09:16 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:09:16.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:09:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:09:16 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2784002d20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:09:16 np0005604790 nova_compute[252672]: 2026-02-02 10:09:16.340 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:09:16 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v896: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 05:09:16 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:09:16 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:09:16 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:09:16.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:09:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:09:17 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27940041f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:09:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:09:17.144Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:09:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] Optimize plan auto_2026-02-02_10:09:17
Feb  2 05:09:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 05:09:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] do_upmap
Feb  2 05:09:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] pools ['default.rgw.meta', '.mgr', 'backups', 'default.rgw.control', 'volumes', 'images', 'vms', 'cephfs.cephfs.data', '.nfs', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.meta']
Feb  2 05:09:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 05:09:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 05:09:17 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
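
The mon_command payload the mgr dispatches here is plain JSON, so the same query can be reproduced through the librados Python binding. A sketch only, assuming a reachable cluster via a local /etc/ceph/ceph.conf and client.admin keyring (neither is shown in this log):

    import json
    import rados

    # Connect using defaults; conffile/keyring locations are assumptions.
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    try:
        # The exact command logged above: {"prefix": "osd blocklist ls", ...}
        cmd = json.dumps({"prefix": "osd blocklist ls", "format": "json"})
        ret, outbuf, outs = cluster.mon_command(cmd, b"")
        if ret == 0:
            print(json.loads(outbuf or b"[]"))
        else:
            print("mon_command failed:", ret, outs)
    finally:
        cluster.shutdown()
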
Feb  2 05:09:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:09:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:09:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:09:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:09:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:09:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:09:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:09:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 05:09:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 05:09:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 05:09:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 05:09:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 05:09:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 05:09:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:09:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 05:09:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:09:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb  2 05:09:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:09:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:09:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:09:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:09:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:09:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Feb  2 05:09:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:09:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Feb  2 05:09:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:09:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:09:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:09:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 05:09:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:09:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Feb  2 05:09:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:09:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:09:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:09:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 05:09:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:09:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
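
The pg_autoscaler lines above contain enough data to re-derive the printed targets: each "pg target" equals the pool's capacity ratio × bias × 300, so the cluster-wide PG budget is evidently 300 (consistent with, say, mon_target_pg_per_osd=100 on 3 OSDs — an inference, not something the log states), and the result is then rounded to a power of two with a per-pool floor. A sketch reproducing three of the logged values; the floors are read off the "quantized to" output and assumed to be each pool's minimum:

    import math

    # Assumption: overall PG budget of 300, inferred from the logged ratios.
    PG_BUDGET = 300

    def nearest_power_of_two(n: float) -> int:
        # pg_autoscaler rounds its raw target to a power of two
        if n < 1:
            return 1
        lo = 2 ** math.floor(math.log2(n))
        hi = lo * 2
        return lo if n - lo < hi - n else hi

    pools = [
        # (name, capacity ratio from the log, bias, floor seen in the log)
        (".mgr",               7.185749983720779e-06, 1.0, 1),
        ("images",             0.000665858301588852,  1.0, 32),
        ("cephfs.cephfs.meta", 5.087256625643029e-07, 4.0, 16),
    ]

    for name, ratio, bias, floor in pools:
        raw = ratio * bias * PG_BUDGET      # matches the logged "pg target"
        final = max(nearest_power_of_two(raw), floor)
        print(f"{name}: pg target {raw:.16g} -> quantized {final}")
    # .mgr -> 0.0021557249951162337 -> 1; images -> 0.19975749047665559 -> 32;
    # cephfs.cephfs.meta -> 0.0006104707950771635 -> 16, as logged above.
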
Feb  2 05:09:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 05:09:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 05:09:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 05:09:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 05:09:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 05:09:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:09:17 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2774003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:09:18 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:09:18 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f279c001320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:09:18 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:09:18 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:09:18 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:09:18.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:09:18 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v897: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Feb  2 05:09:18 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:09:18 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:09:18 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:09:18.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:09:19 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:09:19 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27840038d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:09:19 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:09:19 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27840038d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:09:20 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:09:20 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27840038d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:09:20 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:09:20 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:09:20 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:09:20.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:09:20 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v898: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 05:09:20 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:09:20 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:09:20 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:09:20.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:09:20 np0005604790 nova_compute[252672]: 2026-02-02 10:09:20.813 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:09:21 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:09:21 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2774003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:09:21 np0005604790 nova_compute[252672]: 2026-02-02 10:09:21.342 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:09:21 np0005604790 ceph-mon[74489]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Feb  2 05:09:21 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:09:21.446680) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb  2 05:09:21 np0005604790 ceph-mon[74489]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Feb  2 05:09:21 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770026961446729, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 2105, "num_deletes": 251, "total_data_size": 3902078, "memory_usage": 3982272, "flush_reason": "Manual Compaction"}
Feb  2 05:09:21 np0005604790 ceph-mon[74489]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Feb  2 05:09:21 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770026961487114, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 3781117, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24782, "largest_seqno": 26886, "table_properties": {"data_size": 3772144, "index_size": 5467, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19281, "raw_average_key_size": 20, "raw_value_size": 3753887, "raw_average_value_size": 3918, "num_data_blocks": 242, "num_entries": 958, "num_filter_entries": 958, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770026763, "oldest_key_time": 1770026763, "file_creation_time": 1770026961, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "07840aea-639a-4cd3-a598-1774a042b57b", "db_session_id": "W2PO4QU95YGVZQBG6TZ2", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Feb  2 05:09:21 np0005604790 ceph-mon[74489]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 40488 microseconds, and 5428 cpu microseconds.
Feb  2 05:09:21 np0005604790 ceph-mon[74489]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 05:09:21 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:09:21.487168) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 3781117 bytes OK
Feb  2 05:09:21 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:09:21.487190) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Feb  2 05:09:21 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:09:21.490121) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Feb  2 05:09:21 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:09:21.490161) EVENT_LOG_v1 {"time_micros": 1770026961490153, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb  2 05:09:21 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:09:21.490183) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb  2 05:09:21 np0005604790 ceph-mon[74489]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 3893584, prev total WAL file size 3893584, number of live WAL files 2.
Feb  2 05:09:21 np0005604790 ceph-mon[74489]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 05:09:21 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:09:21.491111) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Feb  2 05:09:21 np0005604790 ceph-mon[74489]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb  2 05:09:21 np0005604790 ceph-mon[74489]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(3692KB)], [56(11MB)]
Feb  2 05:09:21 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770026961491177, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 16145386, "oldest_snapshot_seqno": -1}
Feb  2 05:09:21 np0005604790 ceph-mon[74489]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 5846 keys, 13998489 bytes, temperature: kUnknown
Feb  2 05:09:21 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770026961644659, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 13998489, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13959072, "index_size": 23691, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14661, "raw_key_size": 148650, "raw_average_key_size": 25, "raw_value_size": 13853327, "raw_average_value_size": 2369, "num_data_blocks": 966, "num_entries": 5846, "num_filter_entries": 5846, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770025063, "oldest_key_time": 0, "file_creation_time": 1770026961, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "07840aea-639a-4cd3-a598-1774a042b57b", "db_session_id": "W2PO4QU95YGVZQBG6TZ2", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Feb  2 05:09:21 np0005604790 ceph-mon[74489]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 05:09:21 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:09:21.645042) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 13998489 bytes
Feb  2 05:09:21 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:09:21.646746) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 105.1 rd, 91.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 11.8 +0.0 blob) out(13.3 +0.0 blob), read-write-amplify(8.0) write-amplify(3.7) OK, records in: 6362, records dropped: 516 output_compression: NoCompression
Feb  2 05:09:21 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:09:21.646781) EVENT_LOG_v1 {"time_micros": 1770026961646763, "job": 30, "event": "compaction_finished", "compaction_time_micros": 153587, "compaction_time_cpu_micros": 38214, "output_level": 6, "num_output_files": 1, "total_output_size": 13998489, "num_input_records": 6362, "num_output_records": 5846, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb  2 05:09:21 np0005604790 ceph-mon[74489]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 05:09:21 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770026961647583, "job": 30, "event": "table_file_deletion", "file_number": 58}
Feb  2 05:09:21 np0005604790 ceph-mon[74489]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 05:09:21 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770026961650391, "job": 30, "event": "table_file_deletion", "file_number": 56}
Feb  2 05:09:21 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:09:21.491037) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 05:09:21 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:09:21.650533) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 05:09:21 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:09:21.650544) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 05:09:21 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:09:21.650550) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 05:09:21 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:09:21.650554) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 05:09:21 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:09:21.650557) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
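
The JOB 30 summary above can be checked against the byte counts in the surrounding EVENT_LOG_v1 lines: the compaction read the freshly flushed 3.6 MB L0 file (#58) plus the existing 11.8 MB L6 file (#56) and wrote one 13.3 MB L6 file (#59), and the amplification figures follow directly. A quick arithmetic check:

    # Byte counts taken from the flush/compaction EVENT_LOG_v1 lines above.
    l0_in = 3_781_117          # L0 input file #58 ("file_size")
    total_in = 16_145_386      # "input_data_size" from compaction_started
    l6_in = total_in - l0_in   # pre-existing L6 file #56 (~11.8 MB)
    out = 13_998_489           # "total_output_size" from compaction_finished

    write_amp = out / l0_in                    # bytes written per new byte
    read_write_amp = (total_in + out) / l0_in  # bytes moved per new byte

    print(f"write-amplify      {write_amp:.1f}")       # 3.7, as logged
    print(f"read-write-amplify {read_write_amp:.1f}")  # 8.0, as logged
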
Feb  2 05:09:21 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:09:21 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2778000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:09:22 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:09:22 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f279c001320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:09:22 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:09:22 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:09:22 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:09:22.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:09:22 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:09:22 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v899: 353 pgs: 353 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 05:09:22 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:09:22 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:09:22 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:09:22.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:09:23 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:09:23 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f279c001320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:09:23 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:09:23 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2774003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:09:24 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:09:24 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27780016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:09:24 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:09:24 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:09:24 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:09:24.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:09:24 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v900: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb  2 05:09:24 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:09:24 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:09:24 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:09:24.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:09:24 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:09:24] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Feb  2 05:09:24 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:09:24] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Feb  2 05:09:25 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:09:25 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27840041f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:09:25 np0005604790 nova_compute[252672]: 2026-02-02 10:09:25.817 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:09:25 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:09:25 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f279c001320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:09:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:09:26 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2774003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:09:26 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:09:26 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:09:26 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:09:26.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:09:26 np0005604790 nova_compute[252672]: 2026-02-02 10:09:26.343 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:09:26 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v901: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb  2 05:09:26 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:09:26 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:09:26 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:09:26.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:09:27 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:09:27 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27780016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:09:27 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:09:27.144Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:09:27 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:09:27 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:09:27 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27840041f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:09:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:09:28 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f279c001320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:09:28 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:09:28 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:09:28 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:09:28.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:09:28 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v902: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Feb  2 05:09:28 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:09:28 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:09:28 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:09:28.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:09:29 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:09:29 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2774003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:09:29 np0005604790 podman[267078]: 2026-02-02 10:09:29.406566486 +0000 UTC m=+0.113000230 container health_status e39122d9482da8df802204ab6c35fb7c982874580968140e6f06bdfc8eefae36 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb  2 05:09:29 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:09:29 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27780016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:09:30 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:09:30 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27840041f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:09:30 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:09:30 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:09:30 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:09:30.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:09:30 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v903: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Feb  2 05:09:30 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:09:30 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:09:30 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:09:30.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:09:30 np0005604790 nova_compute[252672]: 2026-02-02 10:09:30.818 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:09:31 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:09:31 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f279c001320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:09:31 np0005604790 nova_compute[252672]: 2026-02-02 10:09:31.355 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:09:31 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:09:31 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2774003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:09:32 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 05:09:32 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 05:09:32 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:09:32 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2774003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:09:32 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:09:32 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:09:32 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:09:32.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:09:32 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:09:32 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v904: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Feb  2 05:09:32 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:09:32 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:09:32 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:09:32.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:09:33 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:09:33 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f279c001320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:09:33 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:09:33 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27840041f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:09:34 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:09:34 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27840041f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:09:34 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:09:34 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:09:34 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:09:34.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:09:34 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v905: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Feb  2 05:09:34 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:09:34 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:09:34 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:09:34.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:09:34 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:09:34] "GET /metrics HTTP/1.1" 200 48471 "" "Prometheus/2.51.0"
Feb  2 05:09:34 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:09:34] "GET /metrics HTTP/1.1" 200 48471 "" "Prometheus/2.51.0"
Feb  2 05:09:35 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:09:35 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27840041f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:09:35 np0005604790 nova_compute[252672]: 2026-02-02 10:09:35.842 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:09:35 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:09:35 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f279c001320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:09:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:09:36 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27840041f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:09:36 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:09:36 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:09:36 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:09:36.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:09:36 np0005604790 nova_compute[252672]: 2026-02-02 10:09:36.358 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:09:36 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v906: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Feb  2 05:09:36 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:09:36 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:09:36 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:09:36.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:09:37 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:09:37 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27840041f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:09:37 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:09:37.145Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
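
The alertmanager errors above (and at 10:09:17 and 10:09:27) say the ceph-dashboard webhook receivers on compute-1 and compute-2 are not answering POST /api/prometheus_receiver within the notification deadline. For reference, a stand-alone stub that would accept such a webhook — purely illustrative: the real endpoint is the Ceph dashboard module, and this plain-HTTP stub ignores TLS and authentication entirely:

    from http.server import BaseHTTPRequestHandler, HTTPServer
    import json

    class Receiver(BaseHTTPRequestHandler):
        def do_POST(self):
            # Path taken from the URL in the error message above.
            if self.path != "/api/prometheus_receiver":
                self.send_error(404)
                return
            length = int(self.headers.get("Content-Length", 0))
            payload = json.loads(self.rfile.read(length) or b"{}")
            # Alertmanager's webhook payload carries the alerts under "alerts".
            for alert in payload.get("alerts", []):
                print(alert.get("status"),
                      alert.get("labels", {}).get("alertname"))
            self.send_response(200)
            self.end_headers()

    # Answering promptly is the point: "context deadline exceeded" in the log
    # means no response arrived within Alertmanager's notification timeout.
    HTTPServer(("", 8443), Receiver).serve_forever()
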
Feb  2 05:09:37 np0005604790 podman[267113]: 2026-02-02 10:09:37.366499616 +0000 UTC m=+0.070118355 container health_status 29bea7eb4976451e3dfdb1e9a3aba30b10e64cbce131bf8d09548b4a224f8ee8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb  2 05:09:37 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:09:37 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:09:37 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2774003c30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:09:38 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Feb  2 05:09:38 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:09:38 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Feb  2 05:09:38 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:09:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:09:38 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f279c001320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:09:38 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:09:38 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:09:38 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:09:38.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:09:38 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Feb  2 05:09:38 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:09:38 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Feb  2 05:09:38 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:09:38 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v907: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Feb  2 05:09:38 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:09:38 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:09:38 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:09:38.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:09:39 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:09:39 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27840041f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:09:39 np0005604790 podman[267307]: 2026-02-02 10:09:39.082534616 +0000 UTC m=+0.039525006 container create 6ecef9521f2130490546755436ba4e365ff6f40f3f70f42534d53dff97f0860b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_clarke, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 05:09:39 np0005604790 systemd[1]: Started libpod-conmon-6ecef9521f2130490546755436ba4e365ff6f40f3f70f42534d53dff97f0860b.scope.
Feb  2 05:09:39 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:09:39 np0005604790 podman[267307]: 2026-02-02 10:09:39.065198708 +0000 UTC m=+0.022189098 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:09:39 np0005604790 podman[267307]: 2026-02-02 10:09:39.168824838 +0000 UTC m=+0.125815298 container init 6ecef9521f2130490546755436ba4e365ff6f40f3f70f42534d53dff97f0860b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Feb  2 05:09:39 np0005604790 podman[267307]: 2026-02-02 10:09:39.177929099 +0000 UTC m=+0.134919509 container start 6ecef9521f2130490546755436ba4e365ff6f40f3f70f42534d53dff97f0860b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_clarke, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb  2 05:09:39 np0005604790 podman[267307]: 2026-02-02 10:09:39.182106619 +0000 UTC m=+0.139097009 container attach 6ecef9521f2130490546755436ba4e365ff6f40f3f70f42534d53dff97f0860b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_clarke, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Feb  2 05:09:39 np0005604790 festive_clarke[267323]: 167 167
Feb  2 05:09:39 np0005604790 systemd[1]: libpod-6ecef9521f2130490546755436ba4e365ff6f40f3f70f42534d53dff97f0860b.scope: Deactivated successfully.
Feb  2 05:09:39 np0005604790 podman[267307]: 2026-02-02 10:09:39.185823578 +0000 UTC m=+0.142813988 container died 6ecef9521f2130490546755436ba4e365ff6f40f3f70f42534d53dff97f0860b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 05:09:39 np0005604790 systemd[1]: var-lib-containers-storage-overlay-77e087fa361e252680e5abad8453de976e599f225dc1a753ad59fa543f291eb5-merged.mount: Deactivated successfully.
Feb  2 05:09:39 np0005604790 podman[267307]: 2026-02-02 10:09:39.236021265 +0000 UTC m=+0.193011675 container remove 6ecef9521f2130490546755436ba4e365ff6f40f3f70f42534d53dff97f0860b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_clarke, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Feb  2 05:09:39 np0005604790 systemd[1]: libpod-conmon-6ecef9521f2130490546755436ba4e365ff6f40f3f70f42534d53dff97f0860b.scope: Deactivated successfully.
Feb  2 05:09:39 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:09:39 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:09:39 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:09:39 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:09:39 np0005604790 podman[267347]: 2026-02-02 10:09:39.408682351 +0000 UTC m=+0.059202427 container create 7d20f8baa29d9d442839c1e0cbbbae063ec137800f2f82297be9553c0c9c109f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_hawking, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb  2 05:09:39 np0005604790 systemd[1]: Started libpod-conmon-7d20f8baa29d9d442839c1e0cbbbae063ec137800f2f82297be9553c0c9c109f.scope.
Feb  2 05:09:39 np0005604790 podman[267347]: 2026-02-02 10:09:39.383599788 +0000 UTC m=+0.034119914 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:09:39 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:09:39 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82e8e781310e803c1a22bd196a3b3480ee7a8a48aacd740d3a21882fbea03540/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 05:09:39 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82e8e781310e803c1a22bd196a3b3480ee7a8a48aacd740d3a21882fbea03540/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 05:09:39 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82e8e781310e803c1a22bd196a3b3480ee7a8a48aacd740d3a21882fbea03540/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 05:09:39 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82e8e781310e803c1a22bd196a3b3480ee7a8a48aacd740d3a21882fbea03540/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
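[editor's note] The kernel repeats this warning for every xfs-backed bind mount of the probe container: without the bigtime feature, xfs inode timestamps top out at 0x7fffffff seconds, the largest 32-bit signed time_t. A minimal Python check of what that limit means in calendar terms:

```python
from datetime import datetime, timezone

# 0x7fffffff is the 32-bit signed time_t ceiling the kernel warning cites;
# xfs filesystems created without "bigtime" stop representing timestamps here.
limit = 0x7fffffff
print(datetime.fromtimestamp(limit, tz=timezone.utc))
# 2038-01-19 03:14:07+00:00
```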
Feb  2 05:09:39 np0005604790 podman[267347]: 2026-02-02 10:09:39.527800021 +0000 UTC m=+0.178320107 container init 7d20f8baa29d9d442839c1e0cbbbae063ec137800f2f82297be9553c0c9c109f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_hawking, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 05:09:39 np0005604790 podman[267347]: 2026-02-02 10:09:39.53873256 +0000 UTC m=+0.189252626 container start 7d20f8baa29d9d442839c1e0cbbbae063ec137800f2f82297be9553c0c9c109f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_hawking, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 05:09:39 np0005604790 podman[267347]: 2026-02-02 10:09:39.542727076 +0000 UTC m=+0.193247202 container attach 7d20f8baa29d9d442839c1e0cbbbae063ec137800f2f82297be9553c0c9c109f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_hawking, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 05:09:39 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:09:39 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2778002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:09:40 np0005604790 thirsty_hawking[267364]: [
Feb  2 05:09:40 np0005604790 thirsty_hawking[267364]:    {
Feb  2 05:09:40 np0005604790 thirsty_hawking[267364]:        "available": false,
Feb  2 05:09:40 np0005604790 thirsty_hawking[267364]:        "being_replaced": false,
Feb  2 05:09:40 np0005604790 thirsty_hawking[267364]:        "ceph_device_lvm": false,
Feb  2 05:09:40 np0005604790 thirsty_hawking[267364]:        "device_id": "QEMU_DVD-ROM_QM00001",
Feb  2 05:09:40 np0005604790 thirsty_hawking[267364]:        "lsm_data": {},
Feb  2 05:09:40 np0005604790 thirsty_hawking[267364]:        "lvs": [],
Feb  2 05:09:40 np0005604790 thirsty_hawking[267364]:        "path": "/dev/sr0",
Feb  2 05:09:40 np0005604790 thirsty_hawking[267364]:        "rejected_reasons": [
Feb  2 05:09:40 np0005604790 thirsty_hawking[267364]:            "Insufficient space (<5GB)",
Feb  2 05:09:40 np0005604790 thirsty_hawking[267364]:            "Has a FileSystem"
Feb  2 05:09:40 np0005604790 thirsty_hawking[267364]:        ],
Feb  2 05:09:40 np0005604790 thirsty_hawking[267364]:        "sys_api": {
Feb  2 05:09:40 np0005604790 thirsty_hawking[267364]:            "actuators": null,
Feb  2 05:09:40 np0005604790 thirsty_hawking[267364]:            "device_nodes": [
Feb  2 05:09:40 np0005604790 thirsty_hawking[267364]:                "sr0"
Feb  2 05:09:40 np0005604790 thirsty_hawking[267364]:            ],
Feb  2 05:09:40 np0005604790 thirsty_hawking[267364]:            "devname": "sr0",
Feb  2 05:09:40 np0005604790 thirsty_hawking[267364]:            "human_readable_size": "482.00 KB",
Feb  2 05:09:40 np0005604790 thirsty_hawking[267364]:            "id_bus": "ata",
Feb  2 05:09:40 np0005604790 thirsty_hawking[267364]:            "model": "QEMU DVD-ROM",
Feb  2 05:09:40 np0005604790 thirsty_hawking[267364]:            "nr_requests": "2",
Feb  2 05:09:40 np0005604790 thirsty_hawking[267364]:            "parent": "/dev/sr0",
Feb  2 05:09:40 np0005604790 thirsty_hawking[267364]:            "partitions": {},
Feb  2 05:09:40 np0005604790 thirsty_hawking[267364]:            "path": "/dev/sr0",
Feb  2 05:09:40 np0005604790 thirsty_hawking[267364]:            "removable": "1",
Feb  2 05:09:40 np0005604790 thirsty_hawking[267364]:            "rev": "2.5+",
Feb  2 05:09:40 np0005604790 thirsty_hawking[267364]:            "ro": "0",
Feb  2 05:09:40 np0005604790 thirsty_hawking[267364]:            "rotational": "1",
Feb  2 05:09:40 np0005604790 thirsty_hawking[267364]:            "sas_address": "",
Feb  2 05:09:40 np0005604790 thirsty_hawking[267364]:            "sas_device_handle": "",
Feb  2 05:09:40 np0005604790 thirsty_hawking[267364]:            "scheduler_mode": "mq-deadline",
Feb  2 05:09:40 np0005604790 thirsty_hawking[267364]:            "sectors": 0,
Feb  2 05:09:40 np0005604790 thirsty_hawking[267364]:            "sectorsize": "2048",
Feb  2 05:09:40 np0005604790 thirsty_hawking[267364]:            "size": 493568.0,
Feb  2 05:09:40 np0005604790 thirsty_hawking[267364]:            "support_discard": "2048",
Feb  2 05:09:40 np0005604790 thirsty_hawking[267364]:            "type": "disk",
Feb  2 05:09:40 np0005604790 thirsty_hawking[267364]:            "vendor": "QEMU"
Feb  2 05:09:40 np0005604790 thirsty_hawking[267364]:        }
Feb  2 05:09:40 np0005604790 thirsty_hawking[267364]:    }
Feb  2 05:09:40 np0005604790 thirsty_hawking[267364]: ]
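[editor's note] The thirsty_hawking output above has the shape of a `ceph-volume inventory --format json` report, which cephadm gathers via these short-lived probe containers. The only physical device found is /dev/sr0, rejected for being under 5 GB and already carrying a filesystem, which is consistent with the later "All data devices are unavailable" result. A minimal sketch of filtering such a report, assuming the JSON array was captured to a file (the filename is hypothetical):

```python
import json

# Hypothetical capture of the JSON array logged above.
with open("inventory.json") as f:
    devices = json.load(f)

for dev in devices:
    if dev["available"]:
        print(f"usable: {dev['path']}")
    else:
        reasons = ", ".join(dev["rejected_reasons"])
        print(f"rejected: {dev['path']} ({reasons})")
# With the data above:
# rejected: /dev/sr0 (Insufficient space (<5GB), Has a FileSystem)
```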
Feb  2 05:09:40 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:09:40 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2774003dd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:09:40 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:09:40 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:09:40 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:09:40.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:09:40 np0005604790 systemd[1]: libpod-7d20f8baa29d9d442839c1e0cbbbae063ec137800f2f82297be9553c0c9c109f.scope: Deactivated successfully.
Feb  2 05:09:40 np0005604790 podman[268674]: 2026-02-02 10:09:40.370056744 +0000 UTC m=+0.029430490 container died 7d20f8baa29d9d442839c1e0cbbbae063ec137800f2f82297be9553c0c9c109f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_hawking, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Feb  2 05:09:40 np0005604790 systemd[1]: var-lib-containers-storage-overlay-82e8e781310e803c1a22bd196a3b3480ee7a8a48aacd740d3a21882fbea03540-merged.mount: Deactivated successfully.
Feb  2 05:09:40 np0005604790 podman[268674]: 2026-02-02 10:09:40.410965125 +0000 UTC m=+0.070338841 container remove 7d20f8baa29d9d442839c1e0cbbbae063ec137800f2f82297be9553c0c9c109f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 05:09:40 np0005604790 systemd[1]: libpod-conmon-7d20f8baa29d9d442839c1e0cbbbae063ec137800f2f82297be9553c0c9c109f.scope: Deactivated successfully.
Feb  2 05:09:40 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 05:09:40 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:09:40 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 05:09:40 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:09:40 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Feb  2 05:09:40 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:09:40 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Feb  2 05:09:40 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v908: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 90 op/s
Feb  2 05:09:40 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:09:40 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 05:09:40 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 05:09:40 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 05:09:40 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb  2 05:09:40 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 05:09:40 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:09:40 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:09:40 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:09:40 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:09:40.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:09:40 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Feb  2 05:09:40 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:09:40 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 05:09:40 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb  2 05:09:40 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 05:09:40 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb  2 05:09:40 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 05:09:40 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
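[editor's note] The audit entries above show the cephadm mgr persisting its refreshed per-host state through the monitors with `config-key set`, under keys such as mgr/cephadm/host.compute-0.devices.0. To inspect what was cached, one option is reading a key back; a sketch assuming a working `ceph` CLI and an admin keyring on this host:

```python
import subprocess

# Key name copied from the audit entries above; requires a reachable
# cluster and credentials permitted to read config-key storage.
key = "mgr/cephadm/host.compute-0.devices.0"
value = subprocess.run(
    ["ceph", "config-key", "get", key],
    capture_output=True, text=True, check=True,
).stdout
print(value)  # cephadm stores its device cache as a JSON blob under these keys
```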
Feb  2 05:09:40 np0005604790 nova_compute[252672]: 2026-02-02 10:09:40.877 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:09:41 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:09:41 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f279c001320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:09:41 np0005604790 podman[268780]: 2026-02-02 10:09:41.352771841 +0000 UTC m=+0.060782038 container create 1276ffbf8e3be78afe1eb79264f73282470c03ec3dd0fb30d615c786cc98297c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Feb  2 05:09:41 np0005604790 nova_compute[252672]: 2026-02-02 10:09:41.360 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:09:41 np0005604790 systemd[1]: Started libpod-conmon-1276ffbf8e3be78afe1eb79264f73282470c03ec3dd0fb30d615c786cc98297c.scope.
Feb  2 05:09:41 np0005604790 podman[268780]: 2026-02-02 10:09:41.326272331 +0000 UTC m=+0.034282578 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:09:41 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:09:41 np0005604790 podman[268780]: 2026-02-02 10:09:41.45934113 +0000 UTC m=+0.167351367 container init 1276ffbf8e3be78afe1eb79264f73282470c03ec3dd0fb30d615c786cc98297c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_khayyam, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb  2 05:09:41 np0005604790 podman[268780]: 2026-02-02 10:09:41.466255432 +0000 UTC m=+0.174265629 container start 1276ffbf8e3be78afe1eb79264f73282470c03ec3dd0fb30d615c786cc98297c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_khayyam, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 05:09:41 np0005604790 podman[268780]: 2026-02-02 10:09:41.470393882 +0000 UTC m=+0.178404069 container attach 1276ffbf8e3be78afe1eb79264f73282470c03ec3dd0fb30d615c786cc98297c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid)
Feb  2 05:09:41 np0005604790 magical_khayyam[268797]: 167 167
Feb  2 05:09:41 np0005604790 systemd[1]: libpod-1276ffbf8e3be78afe1eb79264f73282470c03ec3dd0fb30d615c786cc98297c.scope: Deactivated successfully.
Feb  2 05:09:41 np0005604790 podman[268780]: 2026-02-02 10:09:41.472233411 +0000 UTC m=+0.180243608 container died 1276ffbf8e3be78afe1eb79264f73282470c03ec3dd0fb30d615c786cc98297c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_khayyam, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 05:09:41 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:09:41 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:09:41 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:09:41 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:09:41 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb  2 05:09:41 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:09:41 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:09:41 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb  2 05:09:41 np0005604790 systemd[1]: var-lib-containers-storage-overlay-bc0eee51a6be7e181d0b7a34ae8a5aeef2cc6c62ef5f32058845069e6ba054fe-merged.mount: Deactivated successfully.
Feb  2 05:09:41 np0005604790 podman[268780]: 2026-02-02 10:09:41.509563798 +0000 UTC m=+0.217573955 container remove 1276ffbf8e3be78afe1eb79264f73282470c03ec3dd0fb30d615c786cc98297c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_khayyam, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 05:09:41 np0005604790 systemd[1]: libpod-conmon-1276ffbf8e3be78afe1eb79264f73282470c03ec3dd0fb30d615c786cc98297c.scope: Deactivated successfully.
Feb  2 05:09:41 np0005604790 podman[268821]: 2026-02-02 10:09:41.676963815 +0000 UTC m=+0.063273035 container create 919ee5ea910f552251a4050548fa730ce88e7b3fe0cb97a425010e51518c4f15 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_chatterjee, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Feb  2 05:09:41 np0005604790 systemd[1]: Started libpod-conmon-919ee5ea910f552251a4050548fa730ce88e7b3fe0cb97a425010e51518c4f15.scope.
Feb  2 05:09:41 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:09:41 np0005604790 podman[268821]: 2026-02-02 10:09:41.649117198 +0000 UTC m=+0.035426498 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:09:41 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3e1fb9e6623bfe390ed6531af0580c74469d7bd637e3f1de297d58a22e3a5d6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 05:09:41 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3e1fb9e6623bfe390ed6531af0580c74469d7bd637e3f1de297d58a22e3a5d6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 05:09:41 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3e1fb9e6623bfe390ed6531af0580c74469d7bd637e3f1de297d58a22e3a5d6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 05:09:41 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3e1fb9e6623bfe390ed6531af0580c74469d7bd637e3f1de297d58a22e3a5d6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 05:09:41 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3e1fb9e6623bfe390ed6531af0580c74469d7bd637e3f1de297d58a22e3a5d6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 05:09:41 np0005604790 podman[268821]: 2026-02-02 10:09:41.766267896 +0000 UTC m=+0.152577196 container init 919ee5ea910f552251a4050548fa730ce88e7b3fe0cb97a425010e51518c4f15 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_chatterjee, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Feb  2 05:09:41 np0005604790 podman[268821]: 2026-02-02 10:09:41.776465866 +0000 UTC m=+0.162775116 container start 919ee5ea910f552251a4050548fa730ce88e7b3fe0cb97a425010e51518c4f15 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_chatterjee, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Feb  2 05:09:41 np0005604790 podman[268821]: 2026-02-02 10:09:41.780216235 +0000 UTC m=+0.166525485 container attach 919ee5ea910f552251a4050548fa730ce88e7b3fe0cb97a425010e51518c4f15 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_chatterjee, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True)
Feb  2 05:09:41 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:09:41 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27840041f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:09:42 np0005604790 admiring_chatterjee[268839]: --> passed data devices: 0 physical, 1 LVM
Feb  2 05:09:42 np0005604790 admiring_chatterjee[268839]: --> All data devices are unavailable
Feb  2 05:09:42 np0005604790 systemd[1]: libpod-919ee5ea910f552251a4050548fa730ce88e7b3fe0cb97a425010e51518c4f15.scope: Deactivated successfully.
Feb  2 05:09:42 np0005604790 podman[268821]: 2026-02-02 10:09:42.093628243 +0000 UTC m=+0.479937483 container died 919ee5ea910f552251a4050548fa730ce88e7b3fe0cb97a425010e51518c4f15 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_chatterjee, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Feb  2 05:09:42 np0005604790 systemd[1]: var-lib-containers-storage-overlay-a3e1fb9e6623bfe390ed6531af0580c74469d7bd637e3f1de297d58a22e3a5d6-merged.mount: Deactivated successfully.
Feb  2 05:09:42 np0005604790 podman[268821]: 2026-02-02 10:09:42.148372181 +0000 UTC m=+0.534681431 container remove 919ee5ea910f552251a4050548fa730ce88e7b3fe0cb97a425010e51518c4f15 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_chatterjee, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Feb  2 05:09:42 np0005604790 systemd[1]: libpod-conmon-919ee5ea910f552251a4050548fa730ce88e7b3fe0cb97a425010e51518c4f15.scope: Deactivated successfully.
Feb  2 05:09:42 np0005604790 ovn_controller[154631]: 2026-02-02T10:09:42Z|00077|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Feb  2 05:09:42 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:09:42 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2778003430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:09:42 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:09:42 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:09:42 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:09:42.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:09:42 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:09:42 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v909: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 90 op/s
Feb  2 05:09:42 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:09:42 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:09:42 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:09:42.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:09:42 np0005604790 podman[268956]: 2026-02-02 10:09:42.799347646 +0000 UTC m=+0.036845375 container create d83733395f4a25241e1b27c5bc78bf2e486ee3c5e1b31cbfc0a876c7be6496b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_dubinsky, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 05:09:42 np0005604790 systemd[1]: Started libpod-conmon-d83733395f4a25241e1b27c5bc78bf2e486ee3c5e1b31cbfc0a876c7be6496b0.scope.
Feb  2 05:09:42 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:09:42 np0005604790 podman[268956]: 2026-02-02 10:09:42.781955006 +0000 UTC m=+0.019452785 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:09:42 np0005604790 podman[268956]: 2026-02-02 10:09:42.885772101 +0000 UTC m=+0.123269880 container init d83733395f4a25241e1b27c5bc78bf2e486ee3c5e1b31cbfc0a876c7be6496b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 05:09:42 np0005604790 podman[268956]: 2026-02-02 10:09:42.894828591 +0000 UTC m=+0.132326350 container start d83733395f4a25241e1b27c5bc78bf2e486ee3c5e1b31cbfc0a876c7be6496b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_dubinsky, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 05:09:42 np0005604790 podman[268956]: 2026-02-02 10:09:42.899900965 +0000 UTC m=+0.137398764 container attach d83733395f4a25241e1b27c5bc78bf2e486ee3c5e1b31cbfc0a876c7be6496b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_dubinsky, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb  2 05:09:42 np0005604790 beautiful_dubinsky[268972]: 167 167
Feb  2 05:09:42 np0005604790 systemd[1]: libpod-d83733395f4a25241e1b27c5bc78bf2e486ee3c5e1b31cbfc0a876c7be6496b0.scope: Deactivated successfully.
Feb  2 05:09:42 np0005604790 conmon[268972]: conmon d83733395f4a25241e1b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d83733395f4a25241e1b27c5bc78bf2e486ee3c5e1b31cbfc0a876c7be6496b0.scope/container/memory.events
Feb  2 05:09:42 np0005604790 podman[268956]: 2026-02-02 10:09:42.903566722 +0000 UTC m=+0.141064461 container died d83733395f4a25241e1b27c5bc78bf2e486ee3c5e1b31cbfc0a876c7be6496b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_dubinsky, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb  2 05:09:42 np0005604790 systemd[1]: var-lib-containers-storage-overlay-40420de7ea50e5e315637bab6df737eaca63032aa07e6c0970ffaad2a3a16267-merged.mount: Deactivated successfully.
Feb  2 05:09:42 np0005604790 podman[268956]: 2026-02-02 10:09:42.942689157 +0000 UTC m=+0.180186896 container remove d83733395f4a25241e1b27c5bc78bf2e486ee3c5e1b31cbfc0a876c7be6496b0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_dubinsky, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2)
Feb  2 05:09:42 np0005604790 systemd[1]: libpod-conmon-d83733395f4a25241e1b27c5bc78bf2e486ee3c5e1b31cbfc0a876c7be6496b0.scope: Deactivated successfully.
Feb  2 05:09:43 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:09:43 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2774003df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:09:43 np0005604790 podman[268998]: 2026-02-02 10:09:43.128809789 +0000 UTC m=+0.058571000 container create 26eeae64346303689a25a6d0ae43eb1e092709b6a497d69452c880df9f9e326e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_clarke, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Feb  2 05:09:43 np0005604790 systemd[1]: Started libpod-conmon-26eeae64346303689a25a6d0ae43eb1e092709b6a497d69452c880df9f9e326e.scope.
Feb  2 05:09:43 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:09:43 np0005604790 podman[268998]: 2026-02-02 10:09:43.106078007 +0000 UTC m=+0.035839238 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:09:43 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f04ca475f87abf21541e9482f679bbe92da6412fc0d37231e535e7ae8fe8494/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 05:09:43 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f04ca475f87abf21541e9482f679bbe92da6412fc0d37231e535e7ae8fe8494/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 05:09:43 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f04ca475f87abf21541e9482f679bbe92da6412fc0d37231e535e7ae8fe8494/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 05:09:43 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f04ca475f87abf21541e9482f679bbe92da6412fc0d37231e535e7ae8fe8494/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 05:09:43 np0005604790 podman[268998]: 2026-02-02 10:09:43.219207689 +0000 UTC m=+0.148968930 container init 26eeae64346303689a25a6d0ae43eb1e092709b6a497d69452c880df9f9e326e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_clarke, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Feb  2 05:09:43 np0005604790 podman[268998]: 2026-02-02 10:09:43.230114618 +0000 UTC m=+0.159875829 container start 26eeae64346303689a25a6d0ae43eb1e092709b6a497d69452c880df9f9e326e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_clarke, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 05:09:43 np0005604790 podman[268998]: 2026-02-02 10:09:43.235069339 +0000 UTC m=+0.164839600 container attach 26eeae64346303689a25a6d0ae43eb1e092709b6a497d69452c880df9f9e326e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_clarke, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 05:09:43 np0005604790 interesting_clarke[269014]: {
Feb  2 05:09:43 np0005604790 interesting_clarke[269014]:    "1": [
Feb  2 05:09:43 np0005604790 interesting_clarke[269014]:        {
Feb  2 05:09:43 np0005604790 interesting_clarke[269014]:            "devices": [
Feb  2 05:09:43 np0005604790 interesting_clarke[269014]:                "/dev/loop3"
Feb  2 05:09:43 np0005604790 interesting_clarke[269014]:            ],
Feb  2 05:09:43 np0005604790 interesting_clarke[269014]:            "lv_name": "ceph_lv0",
Feb  2 05:09:43 np0005604790 interesting_clarke[269014]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 05:09:43 np0005604790 interesting_clarke[269014]:            "lv_size": "21470642176",
Feb  2 05:09:43 np0005604790 interesting_clarke[269014]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d241d473-9fcb-5f74-b163-f1ca4454e7f1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fabfc705-a3af-416c-81a4-3fd4d777fb5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 05:09:43 np0005604790 interesting_clarke[269014]:            "lv_uuid": "lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a",
Feb  2 05:09:43 np0005604790 interesting_clarke[269014]:            "name": "ceph_lv0",
Feb  2 05:09:43 np0005604790 interesting_clarke[269014]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 05:09:43 np0005604790 interesting_clarke[269014]:            "tags": {
Feb  2 05:09:43 np0005604790 interesting_clarke[269014]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 05:09:43 np0005604790 interesting_clarke[269014]:                "ceph.block_uuid": "lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a",
Feb  2 05:09:43 np0005604790 interesting_clarke[269014]:                "ceph.cephx_lockbox_secret": "",
Feb  2 05:09:43 np0005604790 interesting_clarke[269014]:                "ceph.cluster_fsid": "d241d473-9fcb-5f74-b163-f1ca4454e7f1",
Feb  2 05:09:43 np0005604790 interesting_clarke[269014]:                "ceph.cluster_name": "ceph",
Feb  2 05:09:43 np0005604790 interesting_clarke[269014]:                "ceph.crush_device_class": "",
Feb  2 05:09:43 np0005604790 interesting_clarke[269014]:                "ceph.encrypted": "0",
Feb  2 05:09:43 np0005604790 interesting_clarke[269014]:                "ceph.osd_fsid": "fabfc705-a3af-416c-81a4-3fd4d777fb5f",
Feb  2 05:09:43 np0005604790 interesting_clarke[269014]:                "ceph.osd_id": "1",
Feb  2 05:09:43 np0005604790 interesting_clarke[269014]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 05:09:43 np0005604790 interesting_clarke[269014]:                "ceph.type": "block",
Feb  2 05:09:43 np0005604790 interesting_clarke[269014]:                "ceph.vdo": "0",
Feb  2 05:09:43 np0005604790 interesting_clarke[269014]:                "ceph.with_tpm": "0"
Feb  2 05:09:43 np0005604790 interesting_clarke[269014]:            },
Feb  2 05:09:43 np0005604790 interesting_clarke[269014]:            "type": "block",
Feb  2 05:09:43 np0005604790 interesting_clarke[269014]:            "vg_name": "ceph_vg0"
Feb  2 05:09:43 np0005604790 interesting_clarke[269014]:        }
Feb  2 05:09:43 np0005604790 interesting_clarke[269014]:    ]
Feb  2 05:09:43 np0005604790 interesting_clarke[269014]: }
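[editor's note] The interesting_clarke output matches a `ceph-volume lvm list --format json` report: a map from OSD id to its logical volumes, here osd.1 backed by /dev/ceph_vg0/ceph_lv0 on /dev/loop3. A sketch that flattens such a report into an osd-to-device table (the filename is hypothetical):

```python
import json

# Hypothetical capture of the JSON object logged above: {"1": [ {lv record}, ... ]}
with open("lvm_list.json") as f:
    report = json.load(f)

for osd_id, lvs in sorted(report.items(), key=lambda kv: int(kv[0])):
    for lv in lvs:
        print(f"osd.{osd_id}: {lv['lv_path']} "
              f"(type={lv['type']}, devices={','.join(lv['devices'])}, "
              f"fsid={lv['tags']['ceph.osd_fsid']})")
# With the data above:
# osd.1: /dev/ceph_vg0/ceph_lv0 (type=block, devices=/dev/loop3,
#        fsid=fabfc705-a3af-416c-81a4-3fd4d777fb5f)
```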
Feb  2 05:09:43 np0005604790 systemd[1]: libpod-26eeae64346303689a25a6d0ae43eb1e092709b6a497d69452c880df9f9e326e.scope: Deactivated successfully.
Feb  2 05:09:43 np0005604790 podman[269023]: 2026-02-02 10:09:43.604319353 +0000 UTC m=+0.041460367 container died 26eeae64346303689a25a6d0ae43eb1e092709b6a497d69452c880df9f9e326e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_clarke, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 05:09:43 np0005604790 systemd[1]: var-lib-containers-storage-overlay-4f04ca475f87abf21541e9482f679bbe92da6412fc0d37231e535e7ae8fe8494-merged.mount: Deactivated successfully.
Feb  2 05:09:43 np0005604790 podman[269023]: 2026-02-02 10:09:43.639571376 +0000 UTC m=+0.076712390 container remove 26eeae64346303689a25a6d0ae43eb1e092709b6a497d69452c880df9f9e326e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_clarke, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 05:09:43 np0005604790 systemd[1]: libpod-conmon-26eeae64346303689a25a6d0ae43eb1e092709b6a497d69452c880df9f9e326e.scope: Deactivated successfully.
Feb  2 05:09:43 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:09:43 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f279c001320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:09:44 np0005604790 podman[269134]: 2026-02-02 10:09:44.223030854 +0000 UTC m=+0.039618919 container create 46d752e8a93657abf04b11401e436449734d1647916d2a2e193133b69ef2a526 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_proskuriakova, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb  2 05:09:44 np0005604790 systemd[1]: Started libpod-conmon-46d752e8a93657abf04b11401e436449734d1647916d2a2e193133b69ef2a526.scope.
Feb  2 05:09:44 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:09:44 np0005604790 podman[269134]: 2026-02-02 10:09:44.204540155 +0000 UTC m=+0.021128210 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:09:44 np0005604790 podman[269134]: 2026-02-02 10:09:44.307456697 +0000 UTC m=+0.124044822 container init 46d752e8a93657abf04b11401e436449734d1647916d2a2e193133b69ef2a526 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_proskuriakova, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Feb  2 05:09:44 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:09:44 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27840041f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:09:44 np0005604790 podman[269134]: 2026-02-02 10:09:44.314719949 +0000 UTC m=+0.131307994 container start 46d752e8a93657abf04b11401e436449734d1647916d2a2e193133b69ef2a526 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_proskuriakova, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb  2 05:09:44 np0005604790 podman[269134]: 2026-02-02 10:09:44.317933354 +0000 UTC m=+0.134521389 container attach 46d752e8a93657abf04b11401e436449734d1647916d2a2e193133b69ef2a526 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_proskuriakova, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Feb  2 05:09:44 np0005604790 eager_proskuriakova[269151]: 167 167
Feb  2 05:09:44 np0005604790 systemd[1]: libpod-46d752e8a93657abf04b11401e436449734d1647916d2a2e193133b69ef2a526.scope: Deactivated successfully.
Feb  2 05:09:44 np0005604790 podman[269134]: 2026-02-02 10:09:44.322735581 +0000 UTC m=+0.139323626 container died 46d752e8a93657abf04b11401e436449734d1647916d2a2e193133b69ef2a526 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2)
Feb  2 05:09:44 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:09:44 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:09:44 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:09:44.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:09:44 np0005604790 systemd[1]: var-lib-containers-storage-overlay-633b35b6e1335b02a429708cc0cacb33d4d4ac19cec3b55f5f3f5530dbea82dc-merged.mount: Deactivated successfully.
Feb  2 05:09:44 np0005604790 podman[269134]: 2026-02-02 10:09:44.369848707 +0000 UTC m=+0.186436752 container remove 46d752e8a93657abf04b11401e436449734d1647916d2a2e193133b69ef2a526 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 05:09:44 np0005604790 systemd[1]: libpod-conmon-46d752e8a93657abf04b11401e436449734d1647916d2a2e193133b69ef2a526.scope: Deactivated successfully.
Feb  2 05:09:44 np0005604790 podman[269174]: 2026-02-02 10:09:44.548559953 +0000 UTC m=+0.060687236 container create 0fe29e6cca7aff8771c6c2f53a5c6a16ce014c967e29e42605460115736e8b16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_dirac, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Feb  2 05:09:44 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-nfs-cephfs-compute-0-ooxkuo[97796]: [WARNING] 032/100944 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Feb  2 05:09:44 np0005604790 systemd[1]: Started libpod-conmon-0fe29e6cca7aff8771c6c2f53a5c6a16ce014c967e29e42605460115736e8b16.scope.
Feb  2 05:09:44 np0005604790 podman[269174]: 2026-02-02 10:09:44.523847669 +0000 UTC m=+0.035974962 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:09:44 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:09:44 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ecba4cd8d43a73a37f6ab8b66963de7ff13cb3e8ad3e9c94f667829c92ab30a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 05:09:44 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ecba4cd8d43a73a37f6ab8b66963de7ff13cb3e8ad3e9c94f667829c92ab30a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 05:09:44 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ecba4cd8d43a73a37f6ab8b66963de7ff13cb3e8ad3e9c94f667829c92ab30a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 05:09:44 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ecba4cd8d43a73a37f6ab8b66963de7ff13cb3e8ad3e9c94f667829c92ab30a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 05:09:44 np0005604790 podman[269174]: 2026-02-02 10:09:44.64413582 +0000 UTC m=+0.156263153 container init 0fe29e6cca7aff8771c6c2f53a5c6a16ce014c967e29e42605460115736e8b16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_dirac, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb  2 05:09:44 np0005604790 podman[269174]: 2026-02-02 10:09:44.660406341 +0000 UTC m=+0.172533624 container start 0fe29e6cca7aff8771c6c2f53a5c6a16ce014c967e29e42605460115736e8b16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_dirac, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 05:09:44 np0005604790 podman[269174]: 2026-02-02 10:09:44.664344495 +0000 UTC m=+0.176471768 container attach 0fe29e6cca7aff8771c6c2f53a5c6a16ce014c967e29e42605460115736e8b16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_dirac, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 05:09:44 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v910: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 117 op/s
Feb  2 05:09:44 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:09:44 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:09:44 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:09:44.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:09:44 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:09:44] "GET /metrics HTTP/1.1" 200 48471 "" "Prometheus/2.51.0"
Feb  2 05:09:44 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:09:44] "GET /metrics HTTP/1.1" 200 48471 "" "Prometheus/2.51.0"
Feb  2 05:09:45 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:09:45 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2778003430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:09:45 np0005604790 lvm[269264]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 05:09:45 np0005604790 lvm[269264]: VG ceph_vg0 finished
Feb  2 05:09:45 np0005604790 stoic_dirac[269190]: {}
Feb  2 05:09:45 np0005604790 systemd[1]: libpod-0fe29e6cca7aff8771c6c2f53a5c6a16ce014c967e29e42605460115736e8b16.scope: Deactivated successfully.
Feb  2 05:09:45 np0005604790 systemd[1]: libpod-0fe29e6cca7aff8771c6c2f53a5c6a16ce014c967e29e42605460115736e8b16.scope: Consumed 1.025s CPU time.
Feb  2 05:09:45 np0005604790 podman[269174]: 2026-02-02 10:09:45.329398322 +0000 UTC m=+0.841525595 container died 0fe29e6cca7aff8771c6c2f53a5c6a16ce014c967e29e42605460115736e8b16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_dirac, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb  2 05:09:45 np0005604790 systemd[1]: var-lib-containers-storage-overlay-0ecba4cd8d43a73a37f6ab8b66963de7ff13cb3e8ad3e9c94f667829c92ab30a-merged.mount: Deactivated successfully.
Feb  2 05:09:45 np0005604790 podman[269174]: 2026-02-02 10:09:45.380791251 +0000 UTC m=+0.892918524 container remove 0fe29e6cca7aff8771c6c2f53a5c6a16ce014c967e29e42605460115736e8b16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_dirac, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 05:09:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:09:45.381 165364 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 05:09:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:09:45.383 165364 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 05:09:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:09:45.384 165364 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 05:09:45 np0005604790 systemd[1]: libpod-conmon-0fe29e6cca7aff8771c6c2f53a5c6a16ce014c967e29e42605460115736e8b16.scope: Deactivated successfully.
Feb  2 05:09:45 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 05:09:45 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:09:45 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 05:09:45 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:09:45 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:09:45 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:09:45 np0005604790 nova_compute[252672]: 2026-02-02 10:09:45.879 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:09:45 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:09:45 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2774003e10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:09:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:09:46 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f279c001320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:09:46 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:09:46 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:09:46 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:09:46.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:09:46 np0005604790 nova_compute[252672]: 2026-02-02 10:09:46.362 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:09:46 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v911: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb  2 05:09:46 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:09:46 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:09:46 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:09:46.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:09:47 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:09:47 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27840041f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:09:47 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:09:47.146Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb  2 05:09:47 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:09:47.147Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:09:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 05:09:47 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 05:09:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:09:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:09:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:09:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:09:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:09:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:09:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:09:47 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:09:47 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2778003430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:09:48 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:09:48 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2774003e30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:09:48 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:09:48 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:09:48 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:09:48.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:09:48 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v912: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Feb  2 05:09:48 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:09:48 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:09:48 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:09:48.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:09:49 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:09:49 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2774003e30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:09:49 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:09:49 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2774003e30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:09:50 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:09:50 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2774003e30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:09:50 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:09:50 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:09:50 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:09:50.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:09:50 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v913: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb  2 05:09:50 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:09:50 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:09:50 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:09:50.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:09:50 np0005604790 nova_compute[252672]: 2026-02-02 10:09:50.926 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:09:51 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:09:51 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27840041f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:09:51 np0005604790 nova_compute[252672]: 2026-02-02 10:09:51.364 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:09:51 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:09:51 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2774003e30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:09:52 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:09:52 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2794001080 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:09:52 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:09:52 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:09:52 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:09:52.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:09:52 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:09:52 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v914: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb  2 05:09:52 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:09:52 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:09:52 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:09:52.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:09:53 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:09:53 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2794001080 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:09:53 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:09:53 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2780002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:09:54 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:09:54 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f277c0023b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:09:54 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:09:54 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:09:54 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:09:54.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:09:54 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v915: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Feb  2 05:09:54 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:09:54 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:09:54 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:09:54.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:09:54 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:09:54] "GET /metrics HTTP/1.1" 200 48476 "" "Prometheus/2.51.0"
Feb  2 05:09:54 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:09:54] "GET /metrics HTTP/1.1" 200 48476 "" "Prometheus/2.51.0"
Feb  2 05:09:55 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 05:09:55 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3722678984' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Feb  2 05:09:55 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 05:09:55 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3722678984' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Feb  2 05:09:55 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:09:55 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2774003e30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:09:55 np0005604790 nova_compute[252672]: 2026-02-02 10:09:55.067 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:09:55 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:09:55.069 165364 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:4f:4d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '4a:a7:f3:61:65:15'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 05:09:55 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:09:55.070 165364 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb  2 05:09:55 np0005604790 nova_compute[252672]: 2026-02-02 10:09:55.971 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:09:55 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:09:55 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2794001080 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:09:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:09:56 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2780002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:09:56 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:09:56 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:09:56 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:09:56.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:09:56 np0005604790 nova_compute[252672]: 2026-02-02 10:09:56.367 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:09:56 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v916: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Feb  2 05:09:56 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:09:56 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:09:56 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:09:56.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:09:57 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:09:57 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f277c0023b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:09:57 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:09:57.073 165364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=031ca08d-19ea-44b4-b1bd-33ab088eb6a6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 05:09:57 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:09:57.149Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:09:57 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:09:57 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:09:57 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2774003e30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:09:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:09:58 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2794001080 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:09:58 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:09:58 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:09:58 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:09:58.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:09:58 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v917: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Feb  2 05:09:58 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:09:58 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:09:58 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:09:58.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:09:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:09:59 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2780002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:09:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:09:59 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f277c001a70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:10:00 np0005604790 ceph-mon[74489]: log_channel(cluster) log [INF] : overall HEALTH_OK
Feb  2 05:10:00 np0005604790 systemd[1]: virtsecretd.service: Deactivated successfully.
Feb  2 05:10:00 np0005604790 ceph-mon[74489]: overall HEALTH_OK
Feb  2 05:10:00 np0005604790 podman[269325]: 2026-02-02 10:10:00.133264503 +0000 UTC m=+0.109667401 container health_status e39122d9482da8df802204ab6c35fb7c982874580968140e6f06bdfc8eefae36 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Feb  2 05:10:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:10:00 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2774003e30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:10:00 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:10:00 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:10:00 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:10:00.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:10:00 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v918: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Feb  2 05:10:00 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:10:00 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:10:00 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:10:00.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:10:01 np0005604790 nova_compute[252672]: 2026-02-02 10:10:01.009 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:10:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:10:01 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2794003220 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:10:01 np0005604790 nova_compute[252672]: 2026-02-02 10:10:01.368 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:10:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:10:01 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2780002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:10:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 05:10:02 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 05:10:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:10:02 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f277c001a70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:10:02 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:10:02 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:10:02 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:10:02.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:10:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:10:02 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v919: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Feb  2 05:10:02 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:10:02 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:10:02 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:10:02.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:10:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:10:03 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2774003e30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:10:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:10:04 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2794003220 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:10:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:10:04 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2780002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:10:04 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:10:04 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:10:04 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:10:04.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:10:04 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v920: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Feb  2 05:10:04 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:10:04 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:10:04 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:10:04.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:10:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:10:04] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Feb  2 05:10:04 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:10:04] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Feb  2 05:10:05 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:10:05 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f277c003440 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:10:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:10:06 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2774003fd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:10:06 np0005604790 nova_compute[252672]: 2026-02-02 10:10:06.012 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:10:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:10:06 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2794003220 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:10:06 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:10:06 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:10:06 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:10:06.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:10:06 np0005604790 nova_compute[252672]: 2026-02-02 10:10:06.372 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:10:06 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v921: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb  2 05:10:06 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:10:06 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:10:06 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:10:06.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:10:07 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:10:07 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2780002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:10:07 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:10:07.149Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb  2 05:10:07 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:10:07.150Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb  2 05:10:07 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:10:08 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:10:08 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f277c0035c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:10:08 np0005604790 nova_compute[252672]: 2026-02-02 10:10:08.282 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 05:10:08 np0005604790 nova_compute[252672]: 2026-02-02 10:10:08.322 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 05:10:08 np0005604790 nova_compute[252672]: 2026-02-02 10:10:08.323 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 05:10:08 np0005604790 nova_compute[252672]: 2026-02-02 10:10:08.323 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 05:10:08 np0005604790 nova_compute[252672]: 2026-02-02 10:10:08.324 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb  2 05:10:08 np0005604790 nova_compute[252672]: 2026-02-02 10:10:08.324 252676 DEBUG oslo_concurrency.processutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 05:10:08 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:10:08 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2774003ff0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:10:08 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:10:08 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:10:08 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:10:08.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:10:08 np0005604790 podman[269386]: 2026-02-02 10:10:08.353869885 +0000 UTC m=+0.068330858 container health_status 29bea7eb4976451e3dfdb1e9a3aba30b10e64cbce131bf8d09548b4a224f8ee8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, tcib_managed=true)
Feb  2 05:10:08 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v922: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 05:10:08 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:10:08 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:10:08 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:10:08.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:10:08 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 05:10:08 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2215496927' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb  2 05:10:08 np0005604790 nova_compute[252672]: 2026-02-02 10:10:08.804 252676 DEBUG oslo_concurrency.processutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 05:10:08 np0005604790 nova_compute[252672]: 2026-02-02 10:10:08.989 252676 WARNING nova.virt.libvirt.driver [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb  2 05:10:08 np0005604790 nova_compute[252672]: 2026-02-02 10:10:08.991 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4527MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb  2 05:10:08 np0005604790 nova_compute[252672]: 2026-02-02 10:10:08.992 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 05:10:08 np0005604790 nova_compute[252672]: 2026-02-02 10:10:08.992 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 05:10:09 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:10:09 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f277c003ee0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:10:09 np0005604790 nova_compute[252672]: 2026-02-02 10:10:09.337 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb  2 05:10:09 np0005604790 nova_compute[252672]: 2026-02-02 10:10:09.338 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb  2 05:10:09 np0005604790 nova_compute[252672]: 2026-02-02 10:10:09.361 252676 DEBUG oslo_concurrency.processutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb  2 05:10:09 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 05:10:09 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2018837737' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb  2 05:10:09 np0005604790 nova_compute[252672]: 2026-02-02 10:10:09.802 252676 DEBUG oslo_concurrency.processutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 05:10:09 np0005604790 nova_compute[252672]: 2026-02-02 10:10:09.807 252676 DEBUG nova.compute.provider_tree [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Inventory has not changed in ProviderTree for provider: 9e3db6fc-2145-4f13-bc7c-f7ae57d4e004 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb  2 05:10:09 np0005604790 nova_compute[252672]: 2026-02-02 10:10:09.827 252676 DEBUG nova.scheduler.client.report [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Inventory has not changed for provider 9e3db6fc-2145-4f13-bc7c-f7ae57d4e004 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb  2 05:10:09 np0005604790 nova_compute[252672]: 2026-02-02 10:10:09.829 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb  2 05:10:09 np0005604790 nova_compute[252672]: 2026-02-02 10:10:09.829 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.837s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 05:10:10 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:10:10 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2794004320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:10:10 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:10:10 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27840041f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:10:10 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:10:10 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:10:10 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:10:10.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:10:10 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v923: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb  2 05:10:10 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:10:10 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:10:10 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:10:10.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:10:11 np0005604790 nova_compute[252672]: 2026-02-02 10:10:11.059 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:10:11 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:10:11 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2780002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:10:11 np0005604790 nova_compute[252672]: 2026-02-02 10:10:11.374 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:10:12 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:10:12 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f277c003ee0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:10:12 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:10:12 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2794004320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:10:12 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:10:12 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.002000052s ======
Feb  2 05:10:12 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:10:12.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Feb  2 05:10:12 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:10:12 np0005604790 nova_compute[252672]: 2026-02-02 10:10:12.830 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:10:12 np0005604790 nova_compute[252672]: 2026-02-02 10:10:12.830 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb  2 05:10:12 np0005604790 nova_compute[252672]: 2026-02-02 10:10:12.830 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb  2 05:10:13 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:10:13 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27840041f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:10:13 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v924: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb  2 05:10:13 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:10:13 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:10:13 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:10:13.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:10:13 np0005604790 nova_compute[252672]: 2026-02-02 10:10:13.239 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb  2 05:10:13 np0005604790 nova_compute[252672]: 2026-02-02 10:10:13.239 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:10:13 np0005604790 nova_compute[252672]: 2026-02-02 10:10:13.240 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:10:13 np0005604790 nova_compute[252672]: 2026-02-02 10:10:13.281 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:10:13 np0005604790 nova_compute[252672]: 2026-02-02 10:10:13.281 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:10:13 np0005604790 nova_compute[252672]: 2026-02-02 10:10:13.281 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb  2 05:10:14 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:10:14 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2780002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:10:14 np0005604790 nova_compute[252672]: 2026-02-02 10:10:14.277 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:10:14 np0005604790 nova_compute[252672]: 2026-02-02 10:10:14.281 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:10:14 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:10:14 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f277c003ee0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:10:14 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:10:14 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:10:14 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:10:14.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:10:14 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v925: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 05:10:14 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:10:14] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Feb  2 05:10:14 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:10:14] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Feb  2 05:10:15 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:10:15 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2794004320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:10:15 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:10:15 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:10:15 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:10:15.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:10:15 np0005604790 nova_compute[252672]: 2026-02-02 10:10:15.282 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:10:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:10:16 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27840041f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:10:16 np0005604790 nova_compute[252672]: 2026-02-02 10:10:16.062 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:10:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:10:16 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2780002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:10:16 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:10:16 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:10:16 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:10:16.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:10:16 np0005604790 nova_compute[252672]: 2026-02-02 10:10:16.376 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:10:16 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v926: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb  2 05:10:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:10:17 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f277c003ee0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:10:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:10:17.150Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:10:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] Optimize plan auto_2026-02-02_10:10:17
Feb  2 05:10:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 05:10:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] do_upmap
Feb  2 05:10:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] pools ['default.rgw.control', '.rgw.root', '.nfs', 'default.rgw.meta', 'cephfs.cephfs.meta', 'vms', 'volumes', 'backups', 'images', '.mgr', 'default.rgw.log', 'cephfs.cephfs.data']
Feb  2 05:10:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 05:10:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 05:10:17 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 05:10:17 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:10:17 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:10:17 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:10:17.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:10:17 np0005604790 nova_compute[252672]: 2026-02-02 10:10:17.278 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:10:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:10:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:10:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:10:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:10:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:10:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:10:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:10:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 05:10:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 05:10:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 05:10:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 05:10:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 05:10:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 05:10:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:10:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 05:10:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:10:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb  2 05:10:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:10:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:10:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:10:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:10:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:10:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Feb  2 05:10:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:10:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Feb  2 05:10:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:10:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:10:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:10:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 05:10:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:10:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Feb  2 05:10:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:10:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:10:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:10:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 05:10:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:10:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb  2 05:10:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 05:10:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 05:10:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 05:10:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 05:10:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 05:10:18 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:10:18 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2794004320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:10:18 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:10:18 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2794004320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:10:18 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:10:18 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:10:18 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:10:18.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:10:18 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v927: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb  2 05:10:19 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:10:19 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2780002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:10:19 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:10:19 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:10:19 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:10:19.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:10:20 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:10:20 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f277c003ee0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:10:20 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:10:20 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2794004320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:10:20 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:10:20 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:10:20 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:10:20.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:10:20 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v928: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb  2 05:10:21 np0005604790 nova_compute[252672]: 2026-02-02 10:10:21.063 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:10:21 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:10:21 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2794004320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:10:21 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:10:21 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:10:21 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:10:21.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:10:21 np0005604790 nova_compute[252672]: 2026-02-02 10:10:21.377 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:10:22 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:10:22 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2780002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:10:22 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:10:22 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f277c003ee0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:10:22 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:10:22 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:10:22 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:10:22.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:10:22 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:10:22 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v929: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb  2 05:10:23 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:10:23 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f277c003ee0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:10:23 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:10:23 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:10:23 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:10:23.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:10:24 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:10:24 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2778001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:10:24 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:10:24 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2780002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:10:24 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:10:24 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:10:24 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:10:24.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:10:24 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v930: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Feb  2 05:10:24 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:10:24] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Feb  2 05:10:24 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:10:24] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Feb  2 05:10:25 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:10:25 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2794004320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:10:25 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:10:25 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:10:25 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:10:25.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:10:25 np0005604790 radosgw[89254]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Feb  2 05:10:25 np0005604790 radosgw[89254]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Feb  2 05:10:25 np0005604790 radosgw[89254]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Feb  2 05:10:25 np0005604790 radosgw[89254]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
Feb  2 05:10:25 np0005604790 radosgw[89254]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Feb  2 05:10:25 np0005604790 radosgw[89254]: INFO: RGWReshardLock::lock found lock on reshard.0000000011 to be held by another RGW process; skipping for now
Feb  2 05:10:25 np0005604790 radosgw[89254]: INFO: RGWReshardLock::lock found lock on reshard.0000000013 to be held by another RGW process; skipping for now
Feb  2 05:10:25 np0005604790 radosgw[89254]: INFO: RGWReshardLock::lock found lock on reshard.0000000015 to be held by another RGW process; skipping for now
Feb  2 05:10:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:10:26 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f277c003ee0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:10:26 np0005604790 nova_compute[252672]: 2026-02-02 10:10:26.065 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:10:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:10:26 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2778001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:10:26 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:10:26 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:10:26 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:10:26.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:10:26 np0005604790 nova_compute[252672]: 2026-02-02 10:10:26.379 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:10:26 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v931: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb  2 05:10:27 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:10:27 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2780002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:10:27 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:10:27.151Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:10:27 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:10:27 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.002000053s ======
Feb  2 05:10:27 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:10:27.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Feb  2 05:10:27 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:10:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:10:28 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2794004320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:10:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:10:28 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f277c003ee0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:10:28 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:10:28 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:10:28 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:10:28.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:10:28 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v932: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 273 op/s
Feb  2 05:10:29 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:10:29 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2778001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:10:29 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:10:29 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:10:29 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:10:29.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:10:30 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:10:30 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2794004320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:10:30 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:10:30 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2794004320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:10:30 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:10:30 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:10:30 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:10:30.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:10:30 np0005604790 podman[269498]: 2026-02-02 10:10:30.378070746 +0000 UTC m=+0.095610700 container health_status e39122d9482da8df802204ab6c35fb7c982874580968140e6f06bdfc8eefae36 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb  2 05:10:30 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v933: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 12 KiB/s wr, 245 op/s
Feb  2 05:10:31 np0005604790 nova_compute[252672]: 2026-02-02 10:10:31.067 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:10:31 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:10:31 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f277c003ee0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:10:31 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:10:31 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:10:31 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:10:31.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:10:31 np0005604790 nova_compute[252672]: 2026-02-02 10:10:31.381 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:10:32 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:10:32 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2778001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:10:32 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 05:10:32 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 05:10:32 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:10:32 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2794004320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:10:32 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:10:32 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:10:32 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:10:32.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:10:32 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:10:32 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v934: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 12 KiB/s wr, 245 op/s
Feb  2 05:10:33 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:10:33 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2780002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:10:33 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:10:33 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:10:33 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:10:33.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:10:34 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:10:34 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f277c003ee0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:10:34 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:10:34 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2778001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:10:34 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:10:34 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:10:34 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:10:34.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:10:34 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v935: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 308 op/s
Feb  2 05:10:34 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:10:34] "GET /metrics HTTP/1.1" 200 48478 "" "Prometheus/2.51.0"
Feb  2 05:10:34 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:10:34] "GET /metrics HTTP/1.1" 200 48478 "" "Prometheus/2.51.0"
Feb  2 05:10:35 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:10:35 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2778001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:10:35 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:10:35 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:10:35 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:10:35.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:10:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:10:36 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2780002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:10:36 np0005604790 nova_compute[252672]: 2026-02-02 10:10:36.087 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:10:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:10:36 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f277c003ee0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:10:36 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:10:36 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:10:36 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:10:36.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:10:36 np0005604790 nova_compute[252672]: 2026-02-02 10:10:36.383 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:10:36 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v936: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 428 KiB/s rd, 2.1 MiB/s wr, 235 op/s
Feb  2 05:10:37 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:10:37 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2778001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:10:37 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:10:37.153Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:10:37 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:10:37 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:10:37 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:10:37.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:10:37 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:10:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:10:38 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2794004320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:10:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:10:38 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2780004410 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:10:38 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:10:38 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:10:38 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:10:38.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:10:38 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v937: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 428 KiB/s rd, 2.1 MiB/s wr, 235 op/s
Feb  2 05:10:39 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:10:39 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f277c003ee0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:10:39 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:10:39 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:10:39 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:10:39.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:10:39 np0005604790 podman[269534]: 2026-02-02 10:10:39.344224431 +0000 UTC m=+0.061938659 container health_status 29bea7eb4976451e3dfdb1e9a3aba30b10e64cbce131bf8d09548b4a224f8ee8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb  2 05:10:40 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:10:40 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2794004320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:10:40 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:10:40 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2778001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:10:40 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:10:40 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:10:40 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:10:40.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:10:40 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v938: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb  2 05:10:41 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:10:41 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2774001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:10:41 np0005604790 nova_compute[252672]: 2026-02-02 10:10:41.113 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:10:41 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:10:41 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:10:41 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:10:41.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:10:41 np0005604790 nova_compute[252672]: 2026-02-02 10:10:41.385 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:10:42 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:10:42 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f277c003ee0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:10:42 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:10:42 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2794004320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:10:42 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:10:42 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:10:42 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:10:42.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:10:42 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:10:42 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v939: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb  2 05:10:43 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:10:43 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2778001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:10:43 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:10:43 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:10:43 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:10:43.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:10:44 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:10:44 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2774001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:10:44 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:10:44 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f277c003ee0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:10:44 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:10:44 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:10:44 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:10:44.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:10:44 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v940: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Feb  2 05:10:44 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:10:44] "GET /metrics HTTP/1.1" 200 48478 "" "Prometheus/2.51.0"
Feb  2 05:10:44 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:10:44] "GET /metrics HTTP/1.1" 200 48478 "" "Prometheus/2.51.0"
Feb  2 05:10:45 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:10:45 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2794004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:10:45 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:10:45 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:10:45 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:10:45.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:10:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:10:45.382 165364 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 05:10:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:10:45.383 165364 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 05:10:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:10:45.383 165364 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 05:10:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:10:46 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2778004200 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:10:46 np0005604790 nova_compute[252672]: 2026-02-02 10:10:46.151 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:10:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:10:46 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2774001b40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:10:46 np0005604790 nova_compute[252672]: 2026-02-02 10:10:46.388 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:10:46 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:10:46 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:10:46 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:10:46.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:10:46 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v941: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 12 KiB/s wr, 1 op/s
Feb  2 05:10:47 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:10:47 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f277c003ee0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:10:47 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:10:47.154Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:10:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 05:10:47 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 05:10:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:10:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:10:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:10:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:10:47 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:10:47 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:10:47 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:10:47.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:10:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:10:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:10:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:10:48 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:10:48 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2794004360 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:10:48 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Feb  2 05:10:48 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:10:48 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Feb  2 05:10:48 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:10:48 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 05:10:48 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 05:10:48 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 05:10:48 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb  2 05:10:48 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 05:10:48 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:10:48 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:10:48 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2778004200 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:10:48 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Feb  2 05:10:48 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:10:48 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 05:10:48 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb  2 05:10:48 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 05:10:48 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb  2 05:10:48 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 05:10:48 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 05:10:48 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:10:48 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:10:48 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:10:48.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:10:48 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v942: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 19 KiB/s wr, 29 op/s
Feb  2 05:10:48 np0005604790 podman[269763]: 2026-02-02 10:10:48.971812189 +0000 UTC m=+0.044430136 container create eafec6909686f933d2434a3077aab6569f4c550994cf11e8c1064edf2acadaab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_blackwell, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb  2 05:10:49 np0005604790 systemd[1]: Started libpod-conmon-eafec6909686f933d2434a3077aab6569f4c550994cf11e8c1064edf2acadaab.scope.
Feb  2 05:10:49 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:10:49 np0005604790 podman[269763]: 2026-02-02 10:10:48.952820367 +0000 UTC m=+0.025438344 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:10:49 np0005604790 podman[269763]: 2026-02-02 10:10:49.067045817 +0000 UTC m=+0.139663834 container init eafec6909686f933d2434a3077aab6569f4c550994cf11e8c1064edf2acadaab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_blackwell, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Feb  2 05:10:49 np0005604790 podman[269763]: 2026-02-02 10:10:49.072964984 +0000 UTC m=+0.145582951 container start eafec6909686f933d2434a3077aab6569f4c550994cf11e8c1064edf2acadaab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Feb  2 05:10:49 np0005604790 podman[269763]: 2026-02-02 10:10:49.077280068 +0000 UTC m=+0.149898015 container attach eafec6909686f933d2434a3077aab6569f4c550994cf11e8c1064edf2acadaab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_blackwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 05:10:49 np0005604790 intelligent_blackwell[269779]: 167 167
Feb  2 05:10:49 np0005604790 systemd[1]: libpod-eafec6909686f933d2434a3077aab6569f4c550994cf11e8c1064edf2acadaab.scope: Deactivated successfully.
Feb  2 05:10:49 np0005604790 conmon[269779]: conmon eafec6909686f933d243 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-eafec6909686f933d2434a3077aab6569f4c550994cf11e8c1064edf2acadaab.scope/container/memory.events
Feb  2 05:10:49 np0005604790 podman[269763]: 2026-02-02 10:10:49.081418827 +0000 UTC m=+0.154036804 container died eafec6909686f933d2434a3077aab6569f4c550994cf11e8c1064edf2acadaab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_blackwell, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 05:10:49 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:10:49 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2774001cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:10:49 np0005604790 systemd[1]: var-lib-containers-storage-overlay-cc510f7f88dd138d7b4123fb61ac87c614561f5ffc66ba0b9fcb2b413bad4fe8-merged.mount: Deactivated successfully.
Feb  2 05:10:49 np0005604790 podman[269763]: 2026-02-02 10:10:49.154559041 +0000 UTC m=+0.227177008 container remove eafec6909686f933d2434a3077aab6569f4c550994cf11e8c1064edf2acadaab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_blackwell, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb  2 05:10:49 np0005604790 systemd[1]: libpod-conmon-eafec6909686f933d2434a3077aab6569f4c550994cf11e8c1064edf2acadaab.scope: Deactivated successfully.
Feb  2 05:10:49 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:10:49 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:10:49 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:10:49.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:10:49 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:10:49 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:10:49 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb  2 05:10:49 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:10:49 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:10:49 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb  2 05:10:49 np0005604790 podman[269805]: 2026-02-02 10:10:49.348812238 +0000 UTC m=+0.075796095 container create 401da80446c1751d98be78b69e32441d63488a2f6331d422ddddf6d99cffbf8c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_gates, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb  2 05:10:49 np0005604790 systemd[1]: Started libpod-conmon-401da80446c1751d98be78b69e32441d63488a2f6331d422ddddf6d99cffbf8c.scope.
Feb  2 05:10:49 np0005604790 podman[269805]: 2026-02-02 10:10:49.317825019 +0000 UTC m=+0.044808936 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:10:49 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:10:49 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bd11c5df8d4b255eb8f4bbeeb96313e59e591cf122503e7a07bd86dd260580e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 05:10:49 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bd11c5df8d4b255eb8f4bbeeb96313e59e591cf122503e7a07bd86dd260580e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 05:10:49 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bd11c5df8d4b255eb8f4bbeeb96313e59e591cf122503e7a07bd86dd260580e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 05:10:49 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bd11c5df8d4b255eb8f4bbeeb96313e59e591cf122503e7a07bd86dd260580e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 05:10:49 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bd11c5df8d4b255eb8f4bbeeb96313e59e591cf122503e7a07bd86dd260580e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 05:10:49 np0005604790 podman[269805]: 2026-02-02 10:10:49.447971401 +0000 UTC m=+0.174955308 container init 401da80446c1751d98be78b69e32441d63488a2f6331d422ddddf6d99cffbf8c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_gates, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 05:10:49 np0005604790 podman[269805]: 2026-02-02 10:10:49.463151092 +0000 UTC m=+0.190134959 container start 401da80446c1751d98be78b69e32441d63488a2f6331d422ddddf6d99cffbf8c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_gates, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 05:10:49 np0005604790 podman[269805]: 2026-02-02 10:10:49.467971929 +0000 UTC m=+0.194955786 container attach 401da80446c1751d98be78b69e32441d63488a2f6331d422ddddf6d99cffbf8c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_gates, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Feb  2 05:10:49 np0005604790 infallible_gates[269822]: --> passed data devices: 0 physical, 1 LVM
Feb  2 05:10:49 np0005604790 infallible_gates[269822]: --> All data devices are unavailable
Feb  2 05:10:49 np0005604790 systemd[1]: libpod-401da80446c1751d98be78b69e32441d63488a2f6331d422ddddf6d99cffbf8c.scope: Deactivated successfully.
Feb  2 05:10:49 np0005604790 podman[269805]: 2026-02-02 10:10:49.796182769 +0000 UTC m=+0.523166636 container died 401da80446c1751d98be78b69e32441d63488a2f6331d422ddddf6d99cffbf8c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_gates, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 05:10:49 np0005604790 systemd[1]: var-lib-containers-storage-overlay-9bd11c5df8d4b255eb8f4bbeeb96313e59e591cf122503e7a07bd86dd260580e-merged.mount: Deactivated successfully.
Feb  2 05:10:49 np0005604790 podman[269805]: 2026-02-02 10:10:49.856311069 +0000 UTC m=+0.583294936 container remove 401da80446c1751d98be78b69e32441d63488a2f6331d422ddddf6d99cffbf8c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_gates, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb  2 05:10:49 np0005604790 systemd[1]: libpod-conmon-401da80446c1751d98be78b69e32441d63488a2f6331d422ddddf6d99cffbf8c.scope: Deactivated successfully.
Feb  2 05:10:50 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:10:50 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2778004200 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:10:50 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:10:50 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f277c003ee0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:10:50 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:10:50 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:10:50 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:10:50.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:10:50 np0005604790 podman[269941]: 2026-02-02 10:10:50.466855645 +0000 UTC m=+0.071410810 container create 07a4f99878b182e919e86ac0bbe99a05fdd924a93f5fc95ff9ab194821e247ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_shtern, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb  2 05:10:50 np0005604790 systemd[1]: Started libpod-conmon-07a4f99878b182e919e86ac0bbe99a05fdd924a93f5fc95ff9ab194821e247ec.scope.
Feb  2 05:10:50 np0005604790 podman[269941]: 2026-02-02 10:10:50.433372589 +0000 UTC m=+0.037927814 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:10:50 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:10:50 np0005604790 podman[269941]: 2026-02-02 10:10:50.562479984 +0000 UTC m=+0.167035209 container init 07a4f99878b182e919e86ac0bbe99a05fdd924a93f5fc95ff9ab194821e247ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_shtern, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 05:10:50 np0005604790 podman[269941]: 2026-02-02 10:10:50.569278163 +0000 UTC m=+0.173833338 container start 07a4f99878b182e919e86ac0bbe99a05fdd924a93f5fc95ff9ab194821e247ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_shtern, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True)
Feb  2 05:10:50 np0005604790 podman[269941]: 2026-02-02 10:10:50.573644279 +0000 UTC m=+0.178199424 container attach 07a4f99878b182e919e86ac0bbe99a05fdd924a93f5fc95ff9ab194821e247ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_shtern, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Feb  2 05:10:50 np0005604790 clever_shtern[269957]: 167 167
Feb  2 05:10:50 np0005604790 systemd[1]: libpod-07a4f99878b182e919e86ac0bbe99a05fdd924a93f5fc95ff9ab194821e247ec.scope: Deactivated successfully.
Feb  2 05:10:50 np0005604790 conmon[269957]: conmon 07a4f99878b182e919e8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-07a4f99878b182e919e86ac0bbe99a05fdd924a93f5fc95ff9ab194821e247ec.scope/container/memory.events
Feb  2 05:10:50 np0005604790 podman[269941]: 2026-02-02 10:10:50.57557414 +0000 UTC m=+0.180129335 container died 07a4f99878b182e919e86ac0bbe99a05fdd924a93f5fc95ff9ab194821e247ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_shtern, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb  2 05:10:50 np0005604790 systemd[1]: var-lib-containers-storage-overlay-4448c29f90a76eb399d73e3d43b4830cf4c6cafdd4e5e8aea8d898a41bf7aec9-merged.mount: Deactivated successfully.
Feb  2 05:10:50 np0005604790 podman[269941]: 2026-02-02 10:10:50.61945373 +0000 UTC m=+0.224008875 container remove 07a4f99878b182e919e86ac0bbe99a05fdd924a93f5fc95ff9ab194821e247ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_shtern, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 05:10:50 np0005604790 systemd[1]: libpod-conmon-07a4f99878b182e919e86ac0bbe99a05fdd924a93f5fc95ff9ab194821e247ec.scope: Deactivated successfully.
Feb  2 05:10:50 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v943: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 6.5 KiB/s wr, 28 op/s
Feb  2 05:10:50 np0005604790 podman[269981]: 2026-02-02 10:10:50.814381115 +0000 UTC m=+0.069389356 container create 9e59e3bfa1325af6e57ab29cfbe031cc5c6ad8ba691ac7647de8f2819fb7d0c0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_keller, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Feb  2 05:10:50 np0005604790 systemd[1]: Started libpod-conmon-9e59e3bfa1325af6e57ab29cfbe031cc5c6ad8ba691ac7647de8f2819fb7d0c0.scope.
Feb  2 05:10:50 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:10:50 np0005604790 podman[269981]: 2026-02-02 10:10:50.788631554 +0000 UTC m=+0.043639845 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:10:50 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b1bfe6bd83d5ee133de77ed2839631514833457adac42cb7813f73ad9a09970/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 05:10:50 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b1bfe6bd83d5ee133de77ed2839631514833457adac42cb7813f73ad9a09970/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 05:10:50 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b1bfe6bd83d5ee133de77ed2839631514833457adac42cb7813f73ad9a09970/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 05:10:50 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b1bfe6bd83d5ee133de77ed2839631514833457adac42cb7813f73ad9a09970/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 05:10:50 np0005604790 podman[269981]: 2026-02-02 10:10:50.913939258 +0000 UTC m=+0.168947509 container init 9e59e3bfa1325af6e57ab29cfbe031cc5c6ad8ba691ac7647de8f2819fb7d0c0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_keller, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 05:10:50 np0005604790 podman[269981]: 2026-02-02 10:10:50.924497327 +0000 UTC m=+0.179505558 container start 9e59e3bfa1325af6e57ab29cfbe031cc5c6ad8ba691ac7647de8f2819fb7d0c0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_keller, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 05:10:50 np0005604790 podman[269981]: 2026-02-02 10:10:50.928455622 +0000 UTC m=+0.183463933 container attach 9e59e3bfa1325af6e57ab29cfbe031cc5c6ad8ba691ac7647de8f2819fb7d0c0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_keller, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 05:10:51 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:10:51 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2774001cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:10:51 np0005604790 nova_compute[252672]: 2026-02-02 10:10:51.204 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:10:51 np0005604790 quirky_keller[269997]: {
Feb  2 05:10:51 np0005604790 quirky_keller[269997]:    "1": [
Feb  2 05:10:51 np0005604790 quirky_keller[269997]:        {
Feb  2 05:10:51 np0005604790 quirky_keller[269997]:            "devices": [
Feb  2 05:10:51 np0005604790 quirky_keller[269997]:                "/dev/loop3"
Feb  2 05:10:51 np0005604790 quirky_keller[269997]:            ],
Feb  2 05:10:51 np0005604790 quirky_keller[269997]:            "lv_name": "ceph_lv0",
Feb  2 05:10:51 np0005604790 quirky_keller[269997]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 05:10:51 np0005604790 quirky_keller[269997]:            "lv_size": "21470642176",
Feb  2 05:10:51 np0005604790 quirky_keller[269997]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d241d473-9fcb-5f74-b163-f1ca4454e7f1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fabfc705-a3af-416c-81a4-3fd4d777fb5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 05:10:51 np0005604790 quirky_keller[269997]:            "lv_uuid": "lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a",
Feb  2 05:10:51 np0005604790 quirky_keller[269997]:            "name": "ceph_lv0",
Feb  2 05:10:51 np0005604790 quirky_keller[269997]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 05:10:51 np0005604790 quirky_keller[269997]:            "tags": {
Feb  2 05:10:51 np0005604790 quirky_keller[269997]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 05:10:51 np0005604790 quirky_keller[269997]:                "ceph.block_uuid": "lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a",
Feb  2 05:10:51 np0005604790 quirky_keller[269997]:                "ceph.cephx_lockbox_secret": "",
Feb  2 05:10:51 np0005604790 quirky_keller[269997]:                "ceph.cluster_fsid": "d241d473-9fcb-5f74-b163-f1ca4454e7f1",
Feb  2 05:10:51 np0005604790 quirky_keller[269997]:                "ceph.cluster_name": "ceph",
Feb  2 05:10:51 np0005604790 quirky_keller[269997]:                "ceph.crush_device_class": "",
Feb  2 05:10:51 np0005604790 quirky_keller[269997]:                "ceph.encrypted": "0",
Feb  2 05:10:51 np0005604790 quirky_keller[269997]:                "ceph.osd_fsid": "fabfc705-a3af-416c-81a4-3fd4d777fb5f",
Feb  2 05:10:51 np0005604790 quirky_keller[269997]:                "ceph.osd_id": "1",
Feb  2 05:10:51 np0005604790 quirky_keller[269997]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 05:10:51 np0005604790 quirky_keller[269997]:                "ceph.type": "block",
Feb  2 05:10:51 np0005604790 quirky_keller[269997]:                "ceph.vdo": "0",
Feb  2 05:10:51 np0005604790 quirky_keller[269997]:                "ceph.with_tpm": "0"
Feb  2 05:10:51 np0005604790 quirky_keller[269997]:            },
Feb  2 05:10:51 np0005604790 quirky_keller[269997]:            "type": "block",
Feb  2 05:10:51 np0005604790 quirky_keller[269997]:            "vg_name": "ceph_vg0"
Feb  2 05:10:51 np0005604790 quirky_keller[269997]:        }
Feb  2 05:10:51 np0005604790 quirky_keller[269997]:    ]
Feb  2 05:10:51 np0005604790 quirky_keller[269997]: }
Feb  2 05:10:51 np0005604790 systemd[1]: libpod-9e59e3bfa1325af6e57ab29cfbe031cc5c6ad8ba691ac7647de8f2819fb7d0c0.scope: Deactivated successfully.
Feb  2 05:10:51 np0005604790 podman[269981]: 2026-02-02 10:10:51.28875413 +0000 UTC m=+0.543762361 container died 9e59e3bfa1325af6e57ab29cfbe031cc5c6ad8ba691ac7647de8f2819fb7d0c0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_keller, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb  2 05:10:51 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:10:51 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:10:51 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:10:51.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:10:51 np0005604790 systemd[1]: var-lib-containers-storage-overlay-4b1bfe6bd83d5ee133de77ed2839631514833457adac42cb7813f73ad9a09970-merged.mount: Deactivated successfully.
Feb  2 05:10:51 np0005604790 podman[269981]: 2026-02-02 10:10:51.333689658 +0000 UTC m=+0.588697859 container remove 9e59e3bfa1325af6e57ab29cfbe031cc5c6ad8ba691ac7647de8f2819fb7d0c0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_keller, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb  2 05:10:51 np0005604790 systemd[1]: libpod-conmon-9e59e3bfa1325af6e57ab29cfbe031cc5c6ad8ba691ac7647de8f2819fb7d0c0.scope: Deactivated successfully.
Feb  2 05:10:51 np0005604790 nova_compute[252672]: 2026-02-02 10:10:51.389 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:10:51 np0005604790 podman[270114]: 2026-02-02 10:10:51.972669605 +0000 UTC m=+0.051810731 container create 07d05bf98ed9cd9c15129c9c2190ebaf3328cac5dd23bf2763c0b14b71bb821d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_curie, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 05:10:52 np0005604790 systemd[1]: Started libpod-conmon-07d05bf98ed9cd9c15129c9c2190ebaf3328cac5dd23bf2763c0b14b71bb821d.scope.
Feb  2 05:10:52 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:10:52 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2774003690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:10:52 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:10:52 np0005604790 podman[270114]: 2026-02-02 10:10:51.952119081 +0000 UTC m=+0.031260227 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:10:52 np0005604790 podman[270114]: 2026-02-02 10:10:52.053749069 +0000 UTC m=+0.132890235 container init 07d05bf98ed9cd9c15129c9c2190ebaf3328cac5dd23bf2763c0b14b71bb821d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 05:10:52 np0005604790 podman[270114]: 2026-02-02 10:10:52.06020913 +0000 UTC m=+0.139350256 container start 07d05bf98ed9cd9c15129c9c2190ebaf3328cac5dd23bf2763c0b14b71bb821d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_curie, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb  2 05:10:52 np0005604790 podman[270114]: 2026-02-02 10:10:52.063761664 +0000 UTC m=+0.142902850 container attach 07d05bf98ed9cd9c15129c9c2190ebaf3328cac5dd23bf2763c0b14b71bb821d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb  2 05:10:52 np0005604790 eager_curie[270130]: 167 167
Feb  2 05:10:52 np0005604790 systemd[1]: libpod-07d05bf98ed9cd9c15129c9c2190ebaf3328cac5dd23bf2763c0b14b71bb821d.scope: Deactivated successfully.
Feb  2 05:10:52 np0005604790 podman[270114]: 2026-02-02 10:10:52.065410757 +0000 UTC m=+0.144551903 container died 07d05bf98ed9cd9c15129c9c2190ebaf3328cac5dd23bf2763c0b14b71bb821d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_curie, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Feb  2 05:10:52 np0005604790 systemd[1]: var-lib-containers-storage-overlay-c2cfcbf2d7241cf894f21406f82211cf9feb549eee47eb951be173462ec4c7f9-merged.mount: Deactivated successfully.
Feb  2 05:10:52 np0005604790 podman[270114]: 2026-02-02 10:10:52.1086208 +0000 UTC m=+0.187761906 container remove 07d05bf98ed9cd9c15129c9c2190ebaf3328cac5dd23bf2763c0b14b71bb821d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_curie, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Feb  2 05:10:52 np0005604790 systemd[1]: libpod-conmon-07d05bf98ed9cd9c15129c9c2190ebaf3328cac5dd23bf2763c0b14b71bb821d.scope: Deactivated successfully.
Feb  2 05:10:52 np0005604790 podman[270154]: 2026-02-02 10:10:52.253097501 +0000 UTC m=+0.058189970 container create 09c137b901e760b5105c032a3f274a62c0f5f72f63543e3e0e44aa23204b915a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_jones, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb  2 05:10:52 np0005604790 systemd[1]: Started libpod-conmon-09c137b901e760b5105c032a3f274a62c0f5f72f63543e3e0e44aa23204b915a.scope.
Feb  2 05:10:52 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:10:52 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ab4e10f8c0c7c2584156a5781366fb87d1d26bf0905f89a98b591422d103040/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 05:10:52 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ab4e10f8c0c7c2584156a5781366fb87d1d26bf0905f89a98b591422d103040/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 05:10:52 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ab4e10f8c0c7c2584156a5781366fb87d1d26bf0905f89a98b591422d103040/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 05:10:52 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ab4e10f8c0c7c2584156a5781366fb87d1d26bf0905f89a98b591422d103040/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 05:10:52 np0005604790 podman[270154]: 2026-02-02 10:10:52.315933692 +0000 UTC m=+0.121026191 container init 09c137b901e760b5105c032a3f274a62c0f5f72f63543e3e0e44aa23204b915a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb  2 05:10:52 np0005604790 podman[270154]: 2026-02-02 10:10:52.22396541 +0000 UTC m=+0.029057989 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:10:52 np0005604790 podman[270154]: 2026-02-02 10:10:52.32076807 +0000 UTC m=+0.125860549 container start 09c137b901e760b5105c032a3f274a62c0f5f72f63543e3e0e44aa23204b915a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_jones, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb  2 05:10:52 np0005604790 podman[270154]: 2026-02-02 10:10:52.323900223 +0000 UTC m=+0.128992732 container attach 09c137b901e760b5105c032a3f274a62c0f5f72f63543e3e0e44aa23204b915a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_jones, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb  2 05:10:52 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:10:52 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2778004200 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:10:52 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:10:52 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:10:52 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:10:52.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:10:52 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:10:52 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v944: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 6.5 KiB/s wr, 28 op/s
Feb  2 05:10:52 np0005604790 lvm[270245]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 05:10:52 np0005604790 lvm[270245]: VG ceph_vg0 finished
Feb  2 05:10:52 np0005604790 thirsty_jones[270171]: {}
Feb  2 05:10:53 np0005604790 systemd[1]: libpod-09c137b901e760b5105c032a3f274a62c0f5f72f63543e3e0e44aa23204b915a.scope: Deactivated successfully.
Feb  2 05:10:53 np0005604790 systemd[1]: libpod-09c137b901e760b5105c032a3f274a62c0f5f72f63543e3e0e44aa23204b915a.scope: Consumed 1.001s CPU time.
Feb  2 05:10:53 np0005604790 podman[270154]: 2026-02-02 10:10:53.004657096 +0000 UTC m=+0.809749645 container died 09c137b901e760b5105c032a3f274a62c0f5f72f63543e3e0e44aa23204b915a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325)
Feb  2 05:10:53 np0005604790 systemd[1]: var-lib-containers-storage-overlay-4ab4e10f8c0c7c2584156a5781366fb87d1d26bf0905f89a98b591422d103040-merged.mount: Deactivated successfully.
Feb  2 05:10:53 np0005604790 podman[270154]: 2026-02-02 10:10:53.051468324 +0000 UTC m=+0.856560813 container remove 09c137b901e760b5105c032a3f274a62c0f5f72f63543e3e0e44aa23204b915a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_jones, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Feb  2 05:10:53 np0005604790 systemd[1]: libpod-conmon-09c137b901e760b5105c032a3f274a62c0f5f72f63543e3e0e44aa23204b915a.scope: Deactivated successfully.
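
quirky_keller, eager_curie and thirsty_jones all run the same create -> init -> start -> attach -> died -> remove arc in about a second, the pattern of cephadm probing the host through throwaway containers. A sketch that pairs the podman create/remove events by container ID to measure those lifetimes; the input path is an assumption and timestamps are truncated to microsecond precision:

    import re
    from datetime import datetime

    # Pair podman "container create"/"container remove" events by 64-hex ID.
    ev = re.compile(r'podman\[\d+\]: (\S+ \S+) \+0000 UTC \S+ '
                    r'container (create|remove) ([0-9a-f]{64})')
    born = {}
    with open("/var/log/messages") as f:  # assumed journal text location
        for line in f:
            m = ev.search(line)
            if not m:
                continue
            stamp, kind, cid = m.groups()
            t = datetime.strptime(stamp[:26], "%Y-%m-%d %H:%M:%S.%f")
            if kind == "create":
                born[cid] = t
            elif cid in born:
                print(cid[:12], f"lived {(t - born.pop(cid)).total_seconds():.3f}s")
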
Feb  2 05:10:53 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:10:53 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f277c003ee0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:10:53 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 05:10:53 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:10:53 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 05:10:53 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:10:53 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:10:53 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:10:53 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:10:53.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:10:54 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:10:54 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2774003690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:10:54 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:10:54 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:10:54 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:10:54 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2794004700 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:10:54 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:10:54 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:10:54 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:10:54.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:10:54 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v945: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 6.5 KiB/s wr, 28 op/s
Feb  2 05:10:54 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:10:54] "GET /metrics HTTP/1.1" 200 48477 "" "Prometheus/2.51.0"
Feb  2 05:10:54 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:10:54] "GET /metrics HTTP/1.1" 200 48477 "" "Prometheus/2.51.0"
Feb  2 05:10:55 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 05:10:55 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1470274605' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Feb  2 05:10:55 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 05:10:55 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1470274605' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Feb  2 05:10:55 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:10:55 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27840021f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:10:55 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:10:55 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.002000053s ======
Feb  2 05:10:55 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:10:55.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Feb  2 05:10:55 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:10:55.520 165364 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:4f:4d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '4a:a7:f3:61:65:15'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb  2 05:10:55 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:10:55.521 165364 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb  2 05:10:55 np0005604790 nova_compute[252672]: 2026-02-02 10:10:55.523 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:10:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:10:56 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2778004200 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:10:56 np0005604790 nova_compute[252672]: 2026-02-02 10:10:56.230 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:10:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:10:56 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2774003690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:10:56 np0005604790 nova_compute[252672]: 2026-02-02 10:10:56.392 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:10:56 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:10:56 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:10:56 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:10:56.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:10:56 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v946: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 6.5 KiB/s wr, 28 op/s
Feb  2 05:10:57 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:10:57 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2794004700 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:10:57 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:10:57.154Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb  2 05:10:57 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:10:57.154Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb  2 05:10:57 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:10:57.155Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
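
Alertmanager's ceph-dashboard receiver is failing both webhook targets: first with "i/o timeout" against 192.168.122.101/.102 port 8443, and (at 10:11:07 below) with "context deadline exceeded", so the dashboard receiver on compute-1/compute-2 looks unreachable from this node rather than merely slow. A sketch probing the same endpoints by hand; the URLs are taken from the log (plain http on port 8443, exactly as logged), while the payload and timeout are assumptions:

    import json
    import urllib.request

    # Probe the webhook receivers Alertmanager reports as unreachable above.
    for host in ("compute-1.ctlplane.example.com",
                 "compute-2.ctlplane.example.com"):
        url = f"http://{host}:8443/api/prometheus_receiver"
        req = urllib.request.Request(
            url,
            data=json.dumps({"alerts": []}).encode(),  # assumed placeholder body
            headers={"Content-Type": "application/json"},
        )
        try:
            with urllib.request.urlopen(req, timeout=5) as resp:
                print(url, resp.status)
        except OSError as exc:
            print(url, "unreachable:", exc)
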
Feb  2 05:10:57 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:10:57 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:10:57 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:10:57.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:10:57 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:10:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:10:58 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27840021f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:10:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:10:58 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2778004200 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:10:58 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:10:58 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:10:58 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:10:58.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:10:58 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v947: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 6.5 KiB/s wr, 28 op/s
Feb  2 05:10:59 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:10:59 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2774003ff0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:10:59 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:10:59 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:10:59 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:10:59.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:11:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:11:00 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2794004720 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:11:00 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:11:00 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27840021f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:11:00 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:11:00 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:11:00 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:11:00.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:11:00 np0005604790 podman[270319]: 2026-02-02 10:11:00.537772366 +0000 UTC m=+0.093728489 container health_status e39122d9482da8df802204ab6c35fb7c982874580968140e6f06bdfc8eefae36 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb  2 05:11:00 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v948: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb  2 05:11:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:11:01 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2778004200 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:11:01 np0005604790 nova_compute[252672]: 2026-02-02 10:11:01.276 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:11:01 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:11:01 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:11:01 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:11:01.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:11:01 np0005604790 nova_compute[252672]: 2026-02-02 10:11:01.394 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:11:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:11:02 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2774003ff0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:11:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 05:11:02 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 05:11:02 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:11:02 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2794004740 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:11:02 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:11:02 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:11:02 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:11:02.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:11:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:11:02 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:11:02.523 165364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=031ca08d-19ea-44b4-b1bd-33ab088eb6a6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb  2 05:11:02 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v949: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb  2 05:11:03 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:11:03 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27840021f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:11:03 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:11:03 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:11:03 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:11:03.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:11:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:11:04 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2778004200 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:11:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:11:04 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2774003ff0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:11:04 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:11:04 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:11:04 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:11:04.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:11:04 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v950: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Feb  2 05:11:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:11:04] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Feb  2 05:11:04 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:11:04] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Feb  2 05:11:05 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:11:05 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2794004760 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:11:05 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:11:05 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:11:05 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:11:05.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:11:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:11:06 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27840021f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:11:06 np0005604790 nova_compute[252672]: 2026-02-02 10:11:06.310 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:11:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:11:06 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2778004200 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:11:06 np0005604790 nova_compute[252672]: 2026-02-02 10:11:06.397 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:11:06 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:11:06 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:11:06 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:11:06.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:11:06 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v951: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Feb  2 05:11:07 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:11:07 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2774003ff0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:11:07 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:11:07.155Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:11:07 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:11:07 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:11:07 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:11:07.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:11:07 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:11:07 np0005604790 ceph-mon[74489]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Feb  2 05:11:07 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:11:07.441084) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb  2 05:11:07 np0005604790 ceph-mon[74489]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Feb  2 05:11:07 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770027067441207, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 1312, "num_deletes": 255, "total_data_size": 2352310, "memory_usage": 2383024, "flush_reason": "Manual Compaction"}
Feb  2 05:11:07 np0005604790 ceph-mon[74489]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Feb  2 05:11:07 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770027067467407, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 2262069, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 26888, "largest_seqno": 28198, "table_properties": {"data_size": 2256038, "index_size": 3230, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 12928, "raw_average_key_size": 19, "raw_value_size": 2243786, "raw_average_value_size": 3358, "num_data_blocks": 143, "num_entries": 668, "num_filter_entries": 668, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770026962, "oldest_key_time": 1770026962, "file_creation_time": 1770027067, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "07840aea-639a-4cd3-a598-1774a042b57b", "db_session_id": "W2PO4QU95YGVZQBG6TZ2", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Feb  2 05:11:07 np0005604790 ceph-mon[74489]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 26455 microseconds, and 12793 cpu microseconds.
Feb  2 05:11:07 np0005604790 ceph-mon[74489]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 05:11:07 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:11:07.467549) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 2262069 bytes OK
Feb  2 05:11:07 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:11:07.467589) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Feb  2 05:11:07 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:11:07.469380) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Feb  2 05:11:07 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:11:07.469401) EVENT_LOG_v1 {"time_micros": 1770027067469393, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb  2 05:11:07 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:11:07.469434) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb  2 05:11:07 np0005604790 ceph-mon[74489]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 2346543, prev total WAL file size 2346543, number of live WAL files 2.
Feb  2 05:11:07 np0005604790 ceph-mon[74489]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 05:11:07 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:11:07.470728) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353031' seq:72057594037927935, type:22 .. '6C6F676D00373532' seq:0, type:0; will stop at (end)
Feb  2 05:11:07 np0005604790 ceph-mon[74489]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb  2 05:11:07 np0005604790 ceph-mon[74489]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(2209KB)], [59(13MB)]
Feb  2 05:11:07 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770027067470814, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 16260558, "oldest_snapshot_seqno": -1}
Feb  2 05:11:07 np0005604790 ceph-mon[74489]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 5986 keys, 16112356 bytes, temperature: kUnknown
Feb  2 05:11:07 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770027067570956, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 16112356, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16069905, "index_size": 26405, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14981, "raw_key_size": 152677, "raw_average_key_size": 25, "raw_value_size": 15959657, "raw_average_value_size": 2666, "num_data_blocks": 1081, "num_entries": 5986, "num_filter_entries": 5986, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770025063, "oldest_key_time": 0, "file_creation_time": 1770027067, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "07840aea-639a-4cd3-a598-1774a042b57b", "db_session_id": "W2PO4QU95YGVZQBG6TZ2", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Feb  2 05:11:07 np0005604790 ceph-mon[74489]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 05:11:07 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:11:07.571275) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 16112356 bytes
Feb  2 05:11:07 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:11:07.589819) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 162.2 rd, 160.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 13.3 +0.0 blob) out(15.4 +0.0 blob), read-write-amplify(14.3) write-amplify(7.1) OK, records in: 6514, records dropped: 528 output_compression: NoCompression
Feb  2 05:11:07 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:11:07.589856) EVENT_LOG_v1 {"time_micros": 1770027067589840, "job": 32, "event": "compaction_finished", "compaction_time_micros": 100244, "compaction_time_cpu_micros": 40834, "output_level": 6, "num_output_files": 1, "total_output_size": 16112356, "num_input_records": 6514, "num_output_records": 5986, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb  2 05:11:07 np0005604790 ceph-mon[74489]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 05:11:07 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770027067590457, "job": 32, "event": "table_file_deletion", "file_number": 61}
Feb  2 05:11:07 np0005604790 ceph-mon[74489]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 05:11:07 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770027067593033, "job": 32, "event": "table_file_deletion", "file_number": 59}
Feb  2 05:11:07 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:11:07.470541) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 05:11:07 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:11:07.593238) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 05:11:07 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:11:07.593350) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 05:11:07 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:11:07.593354) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 05:11:07 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:11:07.593356) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 05:11:07 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:11:07.593359) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
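
The mon's rocksdb EVENT_LOG_v1 payloads are single-line JSON, so the flush/compaction figures above can be recomputed directly: job 32 wrote 16112356 bytes in 100244 microseconds, and bytes-per-microsecond equals MB/s, which reproduces the 160.7 wr figure in the summary line; write-amplify(7.1) is likewise the 16112356 output bytes over the 2262069-byte level-0 input. A sketch extracting those records; the journal path is an assumption:

    import json
    import re

    # Pull EVENT_LOG_v1 JSON out of ceph-mon journal lines and recompute
    # compaction throughput (bytes/microsecond == MB/s).
    evt = re.compile(r'rocksdb: .*?EVENT_LOG_v1 (\{.*\})')
    with open("/var/log/messages") as f:
        for line in f:
            m = evt.search(line)
            if not m:
                continue
            e = json.loads(m.group(1))
            if e.get("event") == "compaction_finished":
                rate = e["total_output_size"] / e["compaction_time_micros"]
                print(f"job {e['job']}: {e['num_output_files']} output file(s), "
                      f"{rate:.1f} MB/s write")
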
Feb  2 05:11:08 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:11:08 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2794004780 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:11:08 np0005604790 nova_compute[252672]: 2026-02-02 10:11:08.282 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:11:08 np0005604790 nova_compute[252672]: 2026-02-02 10:11:08.307 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 05:11:08 np0005604790 nova_compute[252672]: 2026-02-02 10:11:08.307 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 05:11:08 np0005604790 nova_compute[252672]: 2026-02-02 10:11:08.308 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 05:11:08 np0005604790 nova_compute[252672]: 2026-02-02 10:11:08.308 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb  2 05:11:08 np0005604790 nova_compute[252672]: 2026-02-02 10:11:08.309 252676 DEBUG oslo_concurrency.processutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb  2 05:11:08 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:11:08 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2794004780 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:11:08 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:11:08 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:11:08 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:11:08.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:11:08 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 05:11:08 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1237912221' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb  2 05:11:08 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v952: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb  2 05:11:08 np0005604790 nova_compute[252672]: 2026-02-02 10:11:08.760 252676 DEBUG oslo_concurrency.processutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 05:11:08 np0005604790 nova_compute[252672]: 2026-02-02 10:11:08.946 252676 WARNING nova.virt.libvirt.driver [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb  2 05:11:08 np0005604790 nova_compute[252672]: 2026-02-02 10:11:08.948 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4533MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb  2 05:11:08 np0005604790 nova_compute[252672]: 2026-02-02 10:11:08.948 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 05:11:08 np0005604790 nova_compute[252672]: 2026-02-02 10:11:08.949 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
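
The two lockutils lines above show the resource tracker serializing its inventory refresh behind a named semaphore ("waited 0.001s" is acquisition time; the matching "released ... held 0.595s" line appears below). A minimal sketch of the oslo.concurrency pattern involved (the decorator is the published lockutils API; the function body is illustrative):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def _update_available_resource():
        # Critical section: the hypervisor view is read and reported to
        # placement atomically while the semaphore is held.
        pass
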
Feb  2 05:11:09 np0005604790 nova_compute[252672]: 2026-02-02 10:11:09.038 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb  2 05:11:09 np0005604790 nova_compute[252672]: 2026-02-02 10:11:09.039 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb  2 05:11:09 np0005604790 nova_compute[252672]: 2026-02-02 10:11:09.058 252676 DEBUG oslo_concurrency.processutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb  2 05:11:09 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:11:09 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2794004780 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:11:09 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:11:09 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:11:09 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:11:09.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:11:09 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 05:11:09 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/846757801' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb  2 05:11:09 np0005604790 nova_compute[252672]: 2026-02-02 10:11:09.520 252676 DEBUG oslo_concurrency.processutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 05:11:09 np0005604790 nova_compute[252672]: 2026-02-02 10:11:09.527 252676 DEBUG nova.compute.provider_tree [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Inventory has not changed in ProviderTree for provider: 9e3db6fc-2145-4f13-bc7c-f7ae57d4e004 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb  2 05:11:09 np0005604790 nova_compute[252672]: 2026-02-02 10:11:09.541 252676 DEBUG nova.scheduler.client.report [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Inventory has not changed for provider 9e3db6fc-2145-4f13-bc7c-f7ae57d4e004 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb  2 05:11:09 np0005604790 nova_compute[252672]: 2026-02-02 10:11:09.543 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb  2 05:11:09 np0005604790 nova_compute[252672]: 2026-02-02 10:11:09.544 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.595s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
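
The 0.45-0.46 s `ceph df --format=json` runs bracketing this update are how the RBD-backed free_disk figure in the hypervisor view is derived. A hedged sketch of that derivation (the function name and the 'vms' pool choice are assumptions; 'vms' is this deployment's Nova pool per the pg_autoscaler output further down):

    import json
    import subprocess

    def rbd_pool_usage(pool='vms', user='openstack', conf='/etc/ceph/ceph.conf'):
        # The exact command the processutils lines above show Nova running.
        out = subprocess.check_output(
            ['ceph', 'df', '--format=json', '--id', user, '--conf', conf])
        stats = next(p['stats'] for p in json.loads(out)['pools']
                     if p['name'] == pool)
        # bytes_used and max_avail are byte counts in `ceph df` JSON output.
        return stats['bytes_used'], stats['max_avail']
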
Feb  2 05:11:10 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:11:10 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f279c001320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:11:10 np0005604790 podman[270403]: 2026-02-02 10:11:10.342326683 +0000 UTC m=+0.064909858 container health_status 29bea7eb4976451e3dfdb1e9a3aba30b10e64cbce131bf8d09548b4a224f8ee8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
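
podman emits one health_status event per healthcheck run; this container's check is the mounted '/openstack/healthcheck' script named in its config_data, and health_failing_streak=0 means the check has not been failing. The same state can be read back on demand:

    import json
    import subprocess

    # `podman inspect` returns a JSON array; State.Health exists only for
    # containers that define a healthcheck, as ovn_metadata_agent does here.
    insp = json.loads(subprocess.check_output(
        ['podman', 'inspect', 'ovn_metadata_agent']))[0]
    health = insp['State']['Health']
    print(health['Status'], health['FailingStreak'])  # e.g. healthy 0
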
Feb  2 05:11:10 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:11:10 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2778004200 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:11:10 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:11:10 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:11:10 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:11:10.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:11:10 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v953: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb  2 05:11:11 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:11:11 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2774003ff0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:11:11 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:11:11 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:11:11 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:11:11.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:11:11 np0005604790 nova_compute[252672]: 2026-02-02 10:11:11.341 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:11:11 np0005604790 nova_compute[252672]: 2026-02-02 10:11:11.398 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
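
The recurring "[POLLIN] on fd 24" debug lines come from the ovs poller module named in the path: the OVSDB IDL blocks in poll() on its JSON-RPC socket and vlog reports each wakeup when the server sends an update. A self-contained sketch of that mechanism with the same python-ovs poller (a socketpair stands in for the ovsdb-server connection):

    import select
    import socket

    from ovs import poller

    a, b = socket.socketpair()
    p = poller.Poller()
    p.fd_wait(a.fileno(), select.POLLIN)  # register read interest, as the IDL does
    b.send(b'update')                     # a server update makes the fd readable
    p.block()                             # returns on POLLIN; ovs.vlog logs the wakeup
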
Feb  2 05:11:12 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:11:12 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27940047a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:11:12 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:11:12 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f279c001320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:11:12 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:11:12 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:11:12 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:11:12.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:11:12 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
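
The mon's _set_new_cache_sizes line divides a ~973 MiB cache_size into MiB-aligned allocations for incremental osdmaps, full osdmaps, and the RocksDB cache; the arithmetic checks out, with the shortfall being alignment slack:

    inc, full, kv = 348127232, 348127232, 318767104
    print(inc // 2**20, full // 2**20, kv // 2**20)  # 332 332 304 (MiB)
    print(inc + full + kv)                           # 1015021568
    print(1020054731 - (inc + full + kv))            # 5033163 bytes of slack
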
Feb  2 05:11:12 np0005604790 nova_compute[252672]: 2026-02-02 10:11:12.545 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:11:12 np0005604790 nova_compute[252672]: 2026-02-02 10:11:12.545 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:11:12 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v954: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb  2 05:11:13 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:11:13 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2778004200 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:11:13 np0005604790 nova_compute[252672]: 2026-02-02 10:11:13.282 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:11:13 np0005604790 nova_compute[252672]: 2026-02-02 10:11:13.282 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb  2 05:11:13 np0005604790 nova_compute[252672]: 2026-02-02 10:11:13.282 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb  2 05:11:13 np0005604790 nova_compute[252672]: 2026-02-02 10:11:13.317 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb  2 05:11:13 np0005604790 nova_compute[252672]: 2026-02-02 10:11:13.317 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:11:13 np0005604790 nova_compute[252672]: 2026-02-02 10:11:13.317 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
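
This burst of "Running periodic task ComputeManager._*" lines is oslo.service walking the compute manager's registered tasks; _reclaim_queued_deletes returns immediately because CONF.reclaim_instance_interval <= 0. A minimal sketch of the registration pattern (the decorator is the published oslo_service API; the spacing and task body are illustrative):

    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)  # illustrative interval
        def _reclaim_queued_deletes(self, context):
            reclaim_interval = 0  # stand-in for CONF.reclaim_instance_interval
            if reclaim_interval <= 0:
                return  # matches the "skipping..." line above
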
Feb  2 05:11:13 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:11:13 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:11:13 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:11:13.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:11:14 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:11:14 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2774003ff0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:11:14 np0005604790 nova_compute[252672]: 2026-02-02 10:11:14.282 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:11:14 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:11:14 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27940047a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:11:14 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:11:14 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:11:14 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:11:14.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:11:14 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v955: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Feb  2 05:11:14 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:11:14] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Feb  2 05:11:14 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:11:14] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
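
The paired mgr access-log lines show Prometheus 2.51.0 scraping the mgr prometheus module on a 10-second cadence (10:11:14, :24, :34 in this section). A manual fetch of the same endpoint (9283 is the module's default port; this log never names the port, so treat it as an assumption):

    import urllib.request

    url = 'http://192.168.122.100:9283/metrics'  # default mgr/prometheus port (assumed)
    body = urllib.request.urlopen(url, timeout=5).read().decode()
    print(body.splitlines()[:3])  # Prometheus exposition-format metric lines
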
Feb  2 05:11:15 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:11:15 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f279c001320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:11:15 np0005604790 nova_compute[252672]: 2026-02-02 10:11:15.282 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:11:15 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:11:15 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:11:15 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:11:15.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:11:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:11:16 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2778004200 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:11:16 np0005604790 nova_compute[252672]: 2026-02-02 10:11:16.277 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:11:16 np0005604790 nova_compute[252672]: 2026-02-02 10:11:16.344 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:11:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:11:16 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2774003ff0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:11:16 np0005604790 nova_compute[252672]: 2026-02-02 10:11:16.399 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:11:16 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:11:16 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:11:16 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:11:16.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:11:16 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v956: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb  2 05:11:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:11:17 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27940047c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:11:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] Optimize plan auto_2026-02-02_10:11:17
Feb  2 05:11:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 05:11:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] do_upmap
Feb  2 05:11:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.data', '.nfs', 'cephfs.cephfs.meta', 'default.rgw.control', 'images', '.mgr', '.rgw.root', 'default.rgw.meta', 'vms', 'volumes', 'backups']
Feb  2 05:11:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 05:11:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:11:17.156Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb  2 05:11:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:11:17.156Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb  2 05:11:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:11:17.156Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
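
Alertmanager cannot reach the ceph-dashboard webhook receivers on compute-1/compute-2 port 8443 (i/o timeout, then "notify retry canceled after 2 attempts"), so dashboard alert delivery keeps failing even though the cluster itself is healthy. For reference, a stub of the kind of endpoint being posted to (/api/prometheus_receiver is the path from the log; this handler is illustrative, not the dashboard's implementation):

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Receiver(BaseHTTPRequestHandler):
        def do_POST(self):
            if self.path != '/api/prometheus_receiver':
                self.send_response(404)
                self.end_headers()
                return
            n = int(self.headers.get('Content-Length', 0))
            payload = json.loads(self.rfile.read(n))  # Alertmanager webhook JSON
            print(len(payload.get('alerts', [])), 'alert(s) received')
            self.send_response(200)
            self.end_headers()

    HTTPServer(('', 8443), Receiver).serve_forever()
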
Feb  2 05:11:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 05:11:17 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 05:11:17 np0005604790 nova_compute[252672]: 2026-02-02 10:11:17.281 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:11:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:11:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:11:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:11:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:11:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:11:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:11:17 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:11:17 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:11:17 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:11:17.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:11:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:11:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 05:11:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 05:11:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 05:11:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 05:11:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 05:11:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 05:11:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:11:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 05:11:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:11:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00034841348814872695 of space, bias 1.0, pg target 0.10452404644461809 quantized to 32 (current 32)
Feb  2 05:11:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:11:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:11:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:11:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:11:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:11:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Feb  2 05:11:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:11:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Feb  2 05:11:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:11:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:11:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:11:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 05:11:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:11:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Feb  2 05:11:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:11:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:11:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:11:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 05:11:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:11:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
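
Each pg_autoscaler pair above reduces to pg_target = capacity_ratio x bias x (num_OSDs x mon_target_pg_per_osd); with this cluster's 3 OSDs and the default of 100 PGs per OSD, the logged targets reproduce exactly. The "quantized" figure is then a power of two, held at the current pg_num unless the change clears the autoscaler's adjustment threshold:

    osds, target_per_osd = 3, 100  # 60 GiB over three hosts; default assumed
    for pool, ratio, bias in [
            ('images', 0.000665858301588852, 1.0),
            ('cephfs.cephfs.meta', 5.087256625643029e-07, 4.0)]:
        print(pool, ratio * bias * osds * target_per_osd)
    # images             0.19975749047665559   (matches "pg target" above)
    # cephfs.cephfs.meta 0.0006104707950771635 (matches "pg target" above)
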
Feb  2 05:11:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 05:11:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 05:11:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 05:11:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 05:11:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 05:11:18 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:11:18 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f279c001320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:11:18 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:11:18 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2778004200 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:11:18 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:11:18 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:11:18 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:11:18.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:11:18 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v957: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Feb  2 05:11:19 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:11:19 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2774003ff0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:11:19 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:11:19 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:11:19 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:11:19.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:11:20 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:11:20 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f27940047e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:11:20 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:11:20 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f279c001320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:11:20 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:11:20 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:11:20 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:11:20.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:11:20 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v958: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Feb  2 05:11:21 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:11:21 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2778004200 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:11:21 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:11:21 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:11:21 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:11:21.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:11:21 np0005604790 nova_compute[252672]: 2026-02-02 10:11:21.347 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:11:21 np0005604790 nova_compute[252672]: 2026-02-02 10:11:21.401 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:11:22 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:11:22 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2774003ff0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:11:22 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:11:22 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2794004800 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:11:22 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:11:22 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:11:22 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:11:22.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:11:22 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:11:22 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v959: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Feb  2 05:11:23 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:11:23 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f279c001320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:11:23 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:11:23 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:11:23 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:11:23.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:11:24 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:11:24 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2778004200 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:11:24 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:11:24 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2794004800 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:11:24 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:11:24 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:11:24 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:11:24.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:11:24 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v960: 353 pgs: 353 active+clean; 167 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 164 op/s
Feb  2 05:11:24 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:11:24] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Feb  2 05:11:24 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:11:24] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Feb  2 05:11:25 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:11:25 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2794004800 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:11:25 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:11:25 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:11:25 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:11:25.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:11:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:11:26 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f279c001320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:11:26 np0005604790 nova_compute[252672]: 2026-02-02 10:11:26.348 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:11:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:11:26 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2778004200 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:11:26 np0005604790 nova_compute[252672]: 2026-02-02 10:11:26.402 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:11:26 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:11:26 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.002000053s ======
Feb  2 05:11:26 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:11:26.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Feb  2 05:11:26 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v961: 353 pgs: 353 active+clean; 167 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 342 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Feb  2 05:11:27 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:11:27 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2778004200 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:11:27 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:11:27.157Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:11:27 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:11:27 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:11:27 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:11:27.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:11:27 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:11:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:11:28 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2774003ff0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:11:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:11:28 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f279c001320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:11:28 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:11:28 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:11:28 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:11:28.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:11:28 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v962: 353 pgs: 353 active+clean; 167 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 342 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Feb  2 05:11:29 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:11:29 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f279c001320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:11:29 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:11:29 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:11:29 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:11:29.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:11:30 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:11:30 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f279c001320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:11:30 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:11:30 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f279c001320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:11:30 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:11:30 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:11:30 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:11:30.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:11:30 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v963: 353 pgs: 353 active+clean; 167 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 342 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Feb  2 05:11:31 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:11:31 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2784003cf0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:11:31 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-nfs-cephfs-compute-0-ooxkuo[97796]: [WARNING] 032/101131 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
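
The haproxy warning ties the recurring ganesha svc_vc_recv events together: this appears to be a cephadm ingress fronting nfs.cephfs, whose layer-4 health probes carry no PROXY-protocol preamble, so ntirpc rejects each probe connection ("proxy header rest len failed ... will set dead"; the bare '%' looks like an unexpanded format placeholder in ntirpc's own message). When a backend stops answering entirely, haproxy marks it DOWN, as here for backend/nfs.cephfs.1. A sketch of the PROXY v1 preamble such an endpoint expects first on the wire (host, port, and addresses below are illustrative):

    import socket

    s = socket.create_connection(('compute-0.ctlplane.example.com', 2049))
    # PROXY protocol v1 line, sent before any RPC bytes; probes that omit it
    # are set dead by the server, matching the svc_vc_recv events above.
    s.sendall(b'PROXY TCP4 192.168.122.100 192.168.122.100 40000 2049\r\n')
    s.close()
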
Feb  2 05:11:31 np0005604790 nova_compute[252672]: 2026-02-02 10:11:31.349 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:11:31 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:11:31 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:11:31 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:11:31.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:11:31 np0005604790 podman[270469]: 2026-02-02 10:11:31.367123777 +0000 UTC m=+0.081924607 container health_status e39122d9482da8df802204ab6c35fb7c982874580968140e6f06bdfc8eefae36 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Feb  2 05:11:31 np0005604790 nova_compute[252672]: 2026-02-02 10:11:31.403 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:11:32 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:11:32 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2780001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:11:32 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 05:11:32 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 05:11:32 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:11:32 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f279c001320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:11:32 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:11:32 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:11:32 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:11:32 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:11:32.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:11:32 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v964: 353 pgs: 353 active+clean; 167 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 342 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Feb  2 05:11:33 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:11:33 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2784003cf0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:11:33 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:11:33 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:11:33 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:11:33.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:11:34 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:11:34 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2778004200 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:11:34 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:11:34 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2780001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:11:34 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:11:34 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:11:34 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:11:34.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:11:34 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v965: 353 pgs: 353 active+clean; 167 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 165 op/s
Feb  2 05:11:34 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:11:34] "GET /metrics HTTP/1.1" 200 48481 "" "Prometheus/2.51.0"
Feb  2 05:11:34 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:11:34] "GET /metrics HTTP/1.1" 200 48481 "" "Prometheus/2.51.0"
Feb  2 05:11:35 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:11:35 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2780001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:11:35 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:11:35 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:11:35 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:11:35.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:11:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:11:36 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2784003cf0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:11:36 np0005604790 nova_compute[252672]: 2026-02-02 10:11:36.351 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:11:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:11:36 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2778004200 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:11:36 np0005604790 nova_compute[252672]: 2026-02-02 10:11:36.405 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:11:36 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:11:36 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:11:36 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:11:36.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:11:36 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v966: 353 pgs: 353 active+clean; 167 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 25 KiB/s wr, 74 op/s
Feb  2 05:11:37 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:11:37 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f279c00ac90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:11:37 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:11:37.158Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb  2 05:11:37 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:11:37 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:11:37 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:11:37.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:11:37 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:11:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:11:38 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2780001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:11:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:11:38 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2784003cf0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:11:38 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:11:38 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:11:38 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:11:38.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:11:38 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v967: 353 pgs: 353 active+clean; 167 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 25 KiB/s wr, 74 op/s
Feb  2 05:11:39 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:11:39 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2778004200 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:11:39 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:11:39 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:11:39 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:11:39.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:11:40 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:11:40 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f279c00ac90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:11:40 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:11:40 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2780001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:11:40 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:11:40 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:11:40 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:11:40.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:11:40 np0005604790 podman[270531]: 2026-02-02 10:11:40.656607714 +0000 UTC m=+0.056289990 container health_status 29bea7eb4976451e3dfdb1e9a3aba30b10e64cbce131bf8d09548b4a224f8ee8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb  2 05:11:40 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v968: 353 pgs: 353 active+clean; 167 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Feb  2 05:11:41 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:11:41 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2784003cf0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:11:41 np0005604790 nova_compute[252672]: 2026-02-02 10:11:41.353 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:11:41 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:11:41 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:11:41 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:11:41.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:11:41 np0005604790 nova_compute[252672]: 2026-02-02 10:11:41.407 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:11:42 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:11:42 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2778004200 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:11:42 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:11:42 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f279c00ac90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Feb  2 05:11:42 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:11:42 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:11:42 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:11:42 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:11:42.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:11:42 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v969: 353 pgs: 353 active+clean; 167 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Feb  2 05:11:43 np0005604790 kernel: ganesha.nfsd[270357]: segfault at 50 ip 00007f282686b32e sp 00007f27b27fb210 error 4 in libntirpc.so.5.8[7f2826850000+2c000] likely on CPU 0 (core 0, socket 0)
Feb  2 05:11:43 np0005604790 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Feb  2 05:11:43 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[265212]: 02/02/2026 10:11:43 : epoch 69807775 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f279c00ac90 fd 39 proxy ignored for local
Feb  2 05:11:43 np0005604790 systemd[1]: Started Process Core Dump (PID 270554/UID 0).
Feb  2 05:11:43 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:11:43 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:11:43 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:11:43.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:11:44 np0005604790 systemd-coredump[270555]: Process 265216 (ganesha.nfsd) of user 0 dumped core.
    Stack trace of thread 67:
    #0  0x00007f282686b32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
    ELF object binary architecture: AMD x86-64
Feb  2 05:11:44 np0005604790 systemd[1]: systemd-coredump@12-270554-0.service: Deactivated successfully.
Feb  2 05:11:44 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:11:44 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:11:44 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:11:44.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:11:44 np0005604790 podman[270562]: 2026-02-02 10:11:44.49920134 +0000 UTC m=+0.036242849 container died df91742568983fda5d905f1e359292746051f2a6477f239a7ffe11c6d09f1ed4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Feb  2 05:11:44 np0005604790 systemd[1]: var-lib-containers-storage-overlay-ace16943a8d021739a247b6e9fe641916fd292dde1f29b27396041c5e7bc57a5-merged.mount: Deactivated successfully.
Feb  2 05:11:44 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v970: 353 pgs: 353 active+clean; 198 MiB data, 391 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 132 op/s
Feb  2 05:11:44 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:11:44] "GET /metrics HTTP/1.1" 200 48481 "" "Prometheus/2.51.0"
Feb  2 05:11:44 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:11:44] "GET /metrics HTTP/1.1" 200 48481 "" "Prometheus/2.51.0"
Feb  2 05:11:44 np0005604790 podman[270562]: 2026-02-02 10:11:44.963236481 +0000 UTC m=+0.500277990 container remove df91742568983fda5d905f1e359292746051f2a6477f239a7ffe11c6d09f1ed4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 05:11:44 np0005604790 systemd[1]: ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1@nfs.cephfs.2.0.compute-0.fdwwab.service: Main process exited, code=exited, status=139/n/a
Feb  2 05:11:45 np0005604790 systemd[1]: ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1@nfs.cephfs.2.0.compute-0.fdwwab.service: Failed with result 'exit-code'.
Feb  2 05:11:45 np0005604790 systemd[1]: ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1@nfs.cephfs.2.0.compute-0.fdwwab.service: Consumed 1.697s CPU time.
Feb  2 05:11:45 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:11:45 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:11:45 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:11:45.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:11:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:11:45.383 165364 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 05:11:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:11:45.384 165364 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 05:11:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:11:45.384 165364 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 05:11:46 np0005604790 nova_compute[252672]: 2026-02-02 10:11:46.355 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:11:46 np0005604790 nova_compute[252672]: 2026-02-02 10:11:46.409 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:11:46 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:11:46 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:11:46 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:11:46.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:11:46 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v971: 353 pgs: 353 active+clean; 198 MiB data, 391 MiB used, 60 GiB / 60 GiB avail; 288 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Feb  2 05:11:47 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:11:47.159Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:11:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 05:11:47 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 05:11:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:11:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:11:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:11:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:11:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:11:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:11:47 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:11:47 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:11:47 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:11:47.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:11:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:11:48 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:11:48 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:11:48 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:11:48.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:11:48 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v972: 353 pgs: 353 active+clean; 200 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 318 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb  2 05:11:49 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-nfs-cephfs-compute-0-ooxkuo[97796]: [WARNING] 032/101149 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Feb  2 05:11:49 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-nfs-cephfs-compute-0-ooxkuo[97796]: [ALERT] 032/101149 (4) : backend 'backend' has no server available!
Feb  2 05:11:49 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:11:49 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:11:49 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:11:49.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:11:50 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:11:50 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:11:50 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:11:50.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:11:50 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v973: 353 pgs: 353 active+clean; 200 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 318 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb  2 05:11:51 np0005604790 nova_compute[252672]: 2026-02-02 10:11:51.357 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:11:51 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:11:51 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:11:51 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:11:51.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:11:51 np0005604790 nova_compute[252672]: 2026-02-02 10:11:51.410 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:11:52 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:11:52 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:11:52 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:11:52 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:11:52.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:11:52 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v974: 353 pgs: 353 active+clean; 200 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 318 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb  2 05:11:53 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:11:53 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:11:53 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:11:53.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:11:54 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 05:11:54 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 05:11:54 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 05:11:54 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb  2 05:11:54 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 05:11:54 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:11:54 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Feb  2 05:11:54 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb  2 05:11:54 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:11:54 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 05:11:54 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb  2 05:11:54 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 05:11:54 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb  2 05:11:54 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 05:11:54 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 05:11:54 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:11:54 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:11:54 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:11:54.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:11:54 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v975: 353 pgs: 353 active+clean; 200 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 318 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb  2 05:11:54 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:11:54] "GET /metrics HTTP/1.1" 200 48481 "" "Prometheus/2.51.0"
Feb  2 05:11:54 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:11:54] "GET /metrics HTTP/1.1" 200 48481 "" "Prometheus/2.51.0"
Feb  2 05:11:54 np0005604790 podman[270786]: 2026-02-02 10:11:54.956099028 +0000 UTC m=+0.062596896 container create dbf5f8a348892c155ae8736dba43fdd12d329e021db8813b16abdb6577aa5339 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_buck, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid)
Feb  2 05:11:55 np0005604790 podman[270786]: 2026-02-02 10:11:54.915780712 +0000 UTC m=+0.022278630 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:11:55 np0005604790 systemd[1]: Started libpod-conmon-dbf5f8a348892c155ae8736dba43fdd12d329e021db8813b16abdb6577aa5339.scope.
Feb  2 05:11:55 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:11:55 np0005604790 podman[270786]: 2026-02-02 10:11:55.06393344 +0000 UTC m=+0.170431348 container init dbf5f8a348892c155ae8736dba43fdd12d329e021db8813b16abdb6577aa5339 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 05:11:55 np0005604790 podman[270786]: 2026-02-02 10:11:55.071214922 +0000 UTC m=+0.177712800 container start dbf5f8a348892c155ae8736dba43fdd12d329e021db8813b16abdb6577aa5339 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_buck, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 05:11:55 np0005604790 festive_buck[270803]: 167 167
Feb  2 05:11:55 np0005604790 systemd[1]: libpod-dbf5f8a348892c155ae8736dba43fdd12d329e021db8813b16abdb6577aa5339.scope: Deactivated successfully.
Feb  2 05:11:55 np0005604790 podman[270786]: 2026-02-02 10:11:55.085364327 +0000 UTC m=+0.191862215 container attach dbf5f8a348892c155ae8736dba43fdd12d329e021db8813b16abdb6577aa5339 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 05:11:55 np0005604790 podman[270786]: 2026-02-02 10:11:55.085916751 +0000 UTC m=+0.192414639 container died dbf5f8a348892c155ae8736dba43fdd12d329e021db8813b16abdb6577aa5339 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_buck, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Feb  2 05:11:55 np0005604790 systemd[1]: ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1@nfs.cephfs.2.0.compute-0.fdwwab.service: Scheduled restart job, restart counter is at 13.
Feb  2 05:11:55 np0005604790 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.fdwwab for d241d473-9fcb-5f74-b163-f1ca4454e7f1.
Feb  2 05:11:55 np0005604790 systemd[1]: ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1@nfs.cephfs.2.0.compute-0.fdwwab.service: Consumed 1.697s CPU time.
Feb  2 05:11:55 np0005604790 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.fdwwab for d241d473-9fcb-5f74-b163-f1ca4454e7f1...
Feb  2 05:11:55 np0005604790 systemd[1]: var-lib-containers-storage-overlay-4970d609a1ff9fc864446465f40227e6ea9507998c9269ff2e10b588108ec4d1-merged.mount: Deactivated successfully.
Feb  2 05:11:55 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:11:55 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:11:55 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb  2 05:11:55 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:11:55 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:11:55 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:11:55.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:11:55 np0005604790 podman[270786]: 2026-02-02 10:11:55.389474109 +0000 UTC m=+0.495972007 container remove dbf5f8a348892c155ae8736dba43fdd12d329e021db8813b16abdb6577aa5339 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_buck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 05:11:55 np0005604790 systemd[1]: libpod-conmon-dbf5f8a348892c155ae8736dba43fdd12d329e021db8813b16abdb6577aa5339.scope: Deactivated successfully.
Feb  2 05:11:55 np0005604790 podman[270872]: 2026-02-02 10:11:55.565045612 +0000 UTC m=+0.043271786 container create c328d38f8f7d2765ab4053ae6d1a0ab2b3b0e9295d9884679aaf3d1cfa138ab8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_napier, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 05:11:55 np0005604790 podman[270884]: 2026-02-02 10:11:55.585105522 +0000 UTC m=+0.044414285 container create 47ffd521b52a4e817e05a876d9da3b01cdfdfeae11aa3098649e39241d4ffff9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb  2 05:11:55 np0005604790 systemd[1]: Started libpod-conmon-c328d38f8f7d2765ab4053ae6d1a0ab2b3b0e9295d9884679aaf3d1cfa138ab8.scope.
Feb  2 05:11:55 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3917ba9112d8423396aa79ef74dbbfbaff77004804458c96c92321e41f0dd94a/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Feb  2 05:11:55 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3917ba9112d8423396aa79ef74dbbfbaff77004804458c96c92321e41f0dd94a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 05:11:55 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3917ba9112d8423396aa79ef74dbbfbaff77004804458c96c92321e41f0dd94a/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 05:11:55 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3917ba9112d8423396aa79ef74dbbfbaff77004804458c96c92321e41f0dd94a/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.fdwwab-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 05:11:55 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:11:55 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59d09f8e700af98a090374dc9ccc04e785ca36b2e239c029b5fd140ba91276b5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 05:11:55 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59d09f8e700af98a090374dc9ccc04e785ca36b2e239c029b5fd140ba91276b5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 05:11:55 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59d09f8e700af98a090374dc9ccc04e785ca36b2e239c029b5fd140ba91276b5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 05:11:55 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59d09f8e700af98a090374dc9ccc04e785ca36b2e239c029b5fd140ba91276b5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 05:11:55 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59d09f8e700af98a090374dc9ccc04e785ca36b2e239c029b5fd140ba91276b5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 05:11:55 np0005604790 podman[270884]: 2026-02-02 10:11:55.634380785 +0000 UTC m=+0.093689558 container init 47ffd521b52a4e817e05a876d9da3b01cdfdfeae11aa3098649e39241d4ffff9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Feb  2 05:11:55 np0005604790 podman[270884]: 2026-02-02 10:11:55.644218865 +0000 UTC m=+0.103527638 container start 47ffd521b52a4e817e05a876d9da3b01cdfdfeae11aa3098649e39241d4ffff9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 05:11:55 np0005604790 podman[270872]: 2026-02-02 10:11:55.551112713 +0000 UTC m=+0.029338917 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:11:55 np0005604790 podman[270872]: 2026-02-02 10:11:55.64895476 +0000 UTC m=+0.127180954 container init c328d38f8f7d2765ab4053ae6d1a0ab2b3b0e9295d9884679aaf3d1cfa138ab8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_napier, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Feb  2 05:11:55 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:11:55 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Feb  2 05:11:55 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:11:55 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Feb  2 05:11:55 np0005604790 bash[270884]: 47ffd521b52a4e817e05a876d9da3b01cdfdfeae11aa3098649e39241d4ffff9
Feb  2 05:11:55 np0005604790 podman[270884]: 2026-02-02 10:11:55.563791478 +0000 UTC m=+0.023100251 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:11:55 np0005604790 podman[270872]: 2026-02-02 10:11:55.659569801 +0000 UTC m=+0.137795995 container start c328d38f8f7d2765ab4053ae6d1a0ab2b3b0e9295d9884679aaf3d1cfa138ab8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_napier, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb  2 05:11:55 np0005604790 podman[270872]: 2026-02-02 10:11:55.662406466 +0000 UTC m=+0.140632640 container attach c328d38f8f7d2765ab4053ae6d1a0ab2b3b0e9295d9884679aaf3d1cfa138ab8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_napier, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 05:11:55 np0005604790 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.fdwwab for d241d473-9fcb-5f74-b163-f1ca4454e7f1.
Feb  2 05:11:55 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:11:55 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Feb  2 05:11:55 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:11:55 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Feb  2 05:11:55 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:11:55 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Feb  2 05:11:55 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:11:55 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Feb  2 05:11:55 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:11:55 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Feb  2 05:11:55 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:11:55 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:11:55 np0005604790 sharp_napier[270905]: --> passed data devices: 0 physical, 1 LVM
Feb  2 05:11:55 np0005604790 sharp_napier[270905]: --> All data devices are unavailable
Feb  2 05:11:55 np0005604790 systemd[1]: libpod-c328d38f8f7d2765ab4053ae6d1a0ab2b3b0e9295d9884679aaf3d1cfa138ab8.scope: Deactivated successfully.
Feb  2 05:11:55 np0005604790 podman[270872]: 2026-02-02 10:11:55.965102871 +0000 UTC m=+0.443329045 container died c328d38f8f7d2765ab4053ae6d1a0ab2b3b0e9295d9884679aaf3d1cfa138ab8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_napier, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Feb  2 05:11:56 np0005604790 systemd[1]: var-lib-containers-storage-overlay-59d09f8e700af98a090374dc9ccc04e785ca36b2e239c029b5fd140ba91276b5-merged.mount: Deactivated successfully.
Feb  2 05:11:56 np0005604790 podman[270872]: 2026-02-02 10:11:56.014279572 +0000 UTC m=+0.492505756 container remove c328d38f8f7d2765ab4053ae6d1a0ab2b3b0e9295d9884679aaf3d1cfa138ab8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_napier, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Feb  2 05:11:56 np0005604790 systemd[1]: libpod-conmon-c328d38f8f7d2765ab4053ae6d1a0ab2b3b0e9295d9884679aaf3d1cfa138ab8.scope: Deactivated successfully.
Feb  2 05:11:56 np0005604790 nova_compute[252672]: 2026-02-02 10:11:56.359 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:11:56 np0005604790 nova_compute[252672]: 2026-02-02 10:11:56.411 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:11:56 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:11:56 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:11:56 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:11:56.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:11:56 np0005604790 podman[271069]: 2026-02-02 10:11:56.619685561 +0000 UTC m=+0.056668210 container create 437a5154f3e46f20970a57d82d22415b868b170f86da117180800bd60787065b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_vaughan, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Feb  2 05:11:56 np0005604790 systemd[1]: Started libpod-conmon-437a5154f3e46f20970a57d82d22415b868b170f86da117180800bd60787065b.scope.
Feb  2 05:11:56 np0005604790 podman[271069]: 2026-02-02 10:11:56.590382446 +0000 UTC m=+0.027365185 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:11:56 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:11:56 np0005604790 podman[271069]: 2026-02-02 10:11:56.702776298 +0000 UTC m=+0.139758987 container init 437a5154f3e46f20970a57d82d22415b868b170f86da117180800bd60787065b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_vaughan, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid)
Feb  2 05:11:56 np0005604790 podman[271069]: 2026-02-02 10:11:56.711380435 +0000 UTC m=+0.148363104 container start 437a5154f3e46f20970a57d82d22415b868b170f86da117180800bd60787065b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_vaughan, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Feb  2 05:11:56 np0005604790 trusting_vaughan[271086]: 167 167
Feb  2 05:11:56 np0005604790 systemd[1]: libpod-437a5154f3e46f20970a57d82d22415b868b170f86da117180800bd60787065b.scope: Deactivated successfully.
Feb  2 05:11:56 np0005604790 podman[271069]: 2026-02-02 10:11:56.719097989 +0000 UTC m=+0.156080818 container attach 437a5154f3e46f20970a57d82d22415b868b170f86da117180800bd60787065b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_vaughan, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb  2 05:11:56 np0005604790 podman[271069]: 2026-02-02 10:11:56.719678845 +0000 UTC m=+0.156661504 container died 437a5154f3e46f20970a57d82d22415b868b170f86da117180800bd60787065b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_vaughan, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb  2 05:11:56 np0005604790 systemd[1]: var-lib-containers-storage-overlay-2a0552901feead01bb3ee8641d4bc06526f58326d6843040667068f66121fc30-merged.mount: Deactivated successfully.
Feb  2 05:11:56 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v976: 353 pgs: 353 active+clean; 200 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 18 KiB/s wr, 5 op/s
Feb  2 05:11:56 np0005604790 podman[271069]: 2026-02-02 10:11:56.847039663 +0000 UTC m=+0.284022352 container remove 437a5154f3e46f20970a57d82d22415b868b170f86da117180800bd60787065b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_vaughan, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Feb  2 05:11:56 np0005604790 systemd[1]: libpod-conmon-437a5154f3e46f20970a57d82d22415b868b170f86da117180800bd60787065b.scope: Deactivated successfully.
Feb  2 05:11:57 np0005604790 podman[271110]: 2026-02-02 10:11:57.061574736 +0000 UTC m=+0.068760919 container create 4c4567d8b27a28fddc2c1f7479c2236ff8b315431376107aa9e0d383f292ec49 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_solomon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 05:11:57 np0005604790 systemd[1]: Started libpod-conmon-4c4567d8b27a28fddc2c1f7479c2236ff8b315431376107aa9e0d383f292ec49.scope.
Feb  2 05:11:57 np0005604790 podman[271110]: 2026-02-02 10:11:57.026229402 +0000 UTC m=+0.033415555 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:11:57 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:11:57 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/252d8380ad1b1aadc35742cb1d206a62e6d16b49ebaa17296ddf2c19e4396cae/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 05:11:57 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/252d8380ad1b1aadc35742cb1d206a62e6d16b49ebaa17296ddf2c19e4396cae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 05:11:57 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/252d8380ad1b1aadc35742cb1d206a62e6d16b49ebaa17296ddf2c19e4396cae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 05:11:57 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/252d8380ad1b1aadc35742cb1d206a62e6d16b49ebaa17296ddf2c19e4396cae/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 05:11:57 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:11:57.166Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:11:57 np0005604790 podman[271110]: 2026-02-02 10:11:57.227071063 +0000 UTC m=+0.234257206 container init 4c4567d8b27a28fddc2c1f7479c2236ff8b315431376107aa9e0d383f292ec49 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_solomon, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 05:11:57 np0005604790 podman[271110]: 2026-02-02 10:11:57.237725235 +0000 UTC m=+0.244911378 container start 4c4567d8b27a28fddc2c1f7479c2236ff8b315431376107aa9e0d383f292ec49 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_solomon, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 05:11:57 np0005604790 podman[271110]: 2026-02-02 10:11:57.316354204 +0000 UTC m=+0.323540347 container attach 4c4567d8b27a28fddc2c1f7479c2236ff8b315431376107aa9e0d383f292ec49 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_solomon, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Feb  2 05:11:57 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:11:57 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:11:57 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:11:57.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:11:57 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:11:57 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:11:57.480 165364 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:4f:4d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '4a:a7:f3:61:65:15'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb  2 05:11:57 np0005604790 nova_compute[252672]: 2026-02-02 10:11:57.481 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:11:57 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:11:57.482 165364 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb  2 05:11:57 np0005604790 hardcore_solomon[271126]: {
Feb  2 05:11:57 np0005604790 hardcore_solomon[271126]:    "1": [
Feb  2 05:11:57 np0005604790 hardcore_solomon[271126]:        {
Feb  2 05:11:57 np0005604790 hardcore_solomon[271126]:            "devices": [
Feb  2 05:11:57 np0005604790 hardcore_solomon[271126]:                "/dev/loop3"
Feb  2 05:11:57 np0005604790 hardcore_solomon[271126]:            ],
Feb  2 05:11:57 np0005604790 hardcore_solomon[271126]:            "lv_name": "ceph_lv0",
Feb  2 05:11:57 np0005604790 hardcore_solomon[271126]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 05:11:57 np0005604790 hardcore_solomon[271126]:            "lv_size": "21470642176",
Feb  2 05:11:57 np0005604790 hardcore_solomon[271126]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d241d473-9fcb-5f74-b163-f1ca4454e7f1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fabfc705-a3af-416c-81a4-3fd4d777fb5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 05:11:57 np0005604790 hardcore_solomon[271126]:            "lv_uuid": "lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a",
Feb  2 05:11:57 np0005604790 hardcore_solomon[271126]:            "name": "ceph_lv0",
Feb  2 05:11:57 np0005604790 hardcore_solomon[271126]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 05:11:57 np0005604790 hardcore_solomon[271126]:            "tags": {
Feb  2 05:11:57 np0005604790 hardcore_solomon[271126]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 05:11:57 np0005604790 hardcore_solomon[271126]:                "ceph.block_uuid": "lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a",
Feb  2 05:11:57 np0005604790 hardcore_solomon[271126]:                "ceph.cephx_lockbox_secret": "",
Feb  2 05:11:57 np0005604790 hardcore_solomon[271126]:                "ceph.cluster_fsid": "d241d473-9fcb-5f74-b163-f1ca4454e7f1",
Feb  2 05:11:57 np0005604790 hardcore_solomon[271126]:                "ceph.cluster_name": "ceph",
Feb  2 05:11:57 np0005604790 hardcore_solomon[271126]:                "ceph.crush_device_class": "",
Feb  2 05:11:57 np0005604790 hardcore_solomon[271126]:                "ceph.encrypted": "0",
Feb  2 05:11:57 np0005604790 hardcore_solomon[271126]:                "ceph.osd_fsid": "fabfc705-a3af-416c-81a4-3fd4d777fb5f",
Feb  2 05:11:57 np0005604790 hardcore_solomon[271126]:                "ceph.osd_id": "1",
Feb  2 05:11:57 np0005604790 hardcore_solomon[271126]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 05:11:57 np0005604790 hardcore_solomon[271126]:                "ceph.type": "block",
Feb  2 05:11:57 np0005604790 hardcore_solomon[271126]:                "ceph.vdo": "0",
Feb  2 05:11:57 np0005604790 hardcore_solomon[271126]:                "ceph.with_tpm": "0"
Feb  2 05:11:57 np0005604790 hardcore_solomon[271126]:            },
Feb  2 05:11:57 np0005604790 hardcore_solomon[271126]:            "type": "block",
Feb  2 05:11:57 np0005604790 hardcore_solomon[271126]:            "vg_name": "ceph_vg0"
Feb  2 05:11:57 np0005604790 hardcore_solomon[271126]:        }
Feb  2 05:11:57 np0005604790 hardcore_solomon[271126]:    ]
Feb  2 05:11:57 np0005604790 hardcore_solomon[271126]: }
Feb  2 05:11:57 np0005604790 systemd[1]: libpod-4c4567d8b27a28fddc2c1f7479c2236ff8b315431376107aa9e0d383f292ec49.scope: Deactivated successfully.
Feb  2 05:11:57 np0005604790 podman[271110]: 2026-02-02 10:11:57.544403455 +0000 UTC m=+0.551589608 container died 4c4567d8b27a28fddc2c1f7479c2236ff8b315431376107aa9e0d383f292ec49 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_solomon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb  2 05:11:57 np0005604790 systemd[1]: var-lib-containers-storage-overlay-252d8380ad1b1aadc35742cb1d206a62e6d16b49ebaa17296ddf2c19e4396cae-merged.mount: Deactivated successfully.
Feb  2 05:11:58 np0005604790 podman[271110]: 2026-02-02 10:11:58.199578341 +0000 UTC m=+1.206764524 container remove 4c4567d8b27a28fddc2c1f7479c2236ff8b315431376107aa9e0d383f292ec49 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_solomon, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 05:11:58 np0005604790 systemd[1]: libpod-conmon-4c4567d8b27a28fddc2c1f7479c2236ff8b315431376107aa9e0d383f292ec49.scope: Deactivated successfully.
Feb  2 05:11:58 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:11:58 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:11:58 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:11:58.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:11:58 np0005604790 podman[271241]: 2026-02-02 10:11:58.756255492 +0000 UTC m=+0.105237654 container create a2d073540a1989f6fc8b1f25c21572f863a24edafed8fba1c0bc82d304c6ac1e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_germain, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 05:11:58 np0005604790 podman[271241]: 2026-02-02 10:11:58.669639722 +0000 UTC m=+0.018621884 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:11:58 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v977: 353 pgs: 353 active+clean; 200 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 19 KiB/s wr, 6 op/s
Feb  2 05:11:58 np0005604790 systemd[1]: Started libpod-conmon-a2d073540a1989f6fc8b1f25c21572f863a24edafed8fba1c0bc82d304c6ac1e.scope.
Feb  2 05:11:58 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:11:58 np0005604790 podman[271241]: 2026-02-02 10:11:58.975238733 +0000 UTC m=+0.324220985 container init a2d073540a1989f6fc8b1f25c21572f863a24edafed8fba1c0bc82d304c6ac1e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_germain, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Feb  2 05:11:58 np0005604790 podman[271241]: 2026-02-02 10:11:58.98192329 +0000 UTC m=+0.330905462 container start a2d073540a1989f6fc8b1f25c21572f863a24edafed8fba1c0bc82d304c6ac1e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid)
Feb  2 05:11:58 np0005604790 quizzical_germain[271257]: 167 167
Feb  2 05:11:58 np0005604790 systemd[1]: libpod-a2d073540a1989f6fc8b1f25c21572f863a24edafed8fba1c0bc82d304c6ac1e.scope: Deactivated successfully.
Feb  2 05:11:59 np0005604790 podman[271241]: 2026-02-02 10:11:59.022126833 +0000 UTC m=+0.371109025 container attach a2d073540a1989f6fc8b1f25c21572f863a24edafed8fba1c0bc82d304c6ac1e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_germain, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Feb  2 05:11:59 np0005604790 podman[271241]: 2026-02-02 10:11:59.024031644 +0000 UTC m=+0.373013816 container died a2d073540a1989f6fc8b1f25c21572f863a24edafed8fba1c0bc82d304c6ac1e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_germain, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb  2 05:11:59 np0005604790 systemd[1]: var-lib-containers-storage-overlay-d4a5f8cf37e290bf2019770ae7467a586ac74e93715aecb8b8781c2efd9e000c-merged.mount: Deactivated successfully.
Feb  2 05:11:59 np0005604790 podman[271241]: 2026-02-02 10:11:59.281683887 +0000 UTC m=+0.630666069 container remove a2d073540a1989f6fc8b1f25c21572f863a24edafed8fba1c0bc82d304c6ac1e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 05:11:59 np0005604790 systemd[1]: libpod-conmon-a2d073540a1989f6fc8b1f25c21572f863a24edafed8fba1c0bc82d304c6ac1e.scope: Deactivated successfully.
Feb  2 05:11:59 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:11:59 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:11:59 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:11:59.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:11:59 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:11:59.485 165364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=031ca08d-19ea-44b4-b1bd-33ab088eb6a6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb  2 05:11:59 np0005604790 podman[271283]: 2026-02-02 10:11:59.493153819 +0000 UTC m=+0.099177453 container create e3c7be192968908f34a453cbc4e51d12571e155c7cda2dedf4f568127848ff5e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_ramanujan, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 05:11:59 np0005604790 podman[271283]: 2026-02-02 10:11:59.417394116 +0000 UTC m=+0.023417770 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:11:59 np0005604790 systemd[1]: Started libpod-conmon-e3c7be192968908f34a453cbc4e51d12571e155c7cda2dedf4f568127848ff5e.scope.
Feb  2 05:11:59 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:11:59 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80e480c6bdfecae79fdd7056c12874b1cd66dbc798745a6bc3d020fa88bc481f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 05:11:59 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80e480c6bdfecae79fdd7056c12874b1cd66dbc798745a6bc3d020fa88bc481f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 05:11:59 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80e480c6bdfecae79fdd7056c12874b1cd66dbc798745a6bc3d020fa88bc481f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 05:11:59 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80e480c6bdfecae79fdd7056c12874b1cd66dbc798745a6bc3d020fa88bc481f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 05:11:59 np0005604790 podman[271283]: 2026-02-02 10:11:59.670283494 +0000 UTC m=+0.276307198 container init e3c7be192968908f34a453cbc4e51d12571e155c7cda2dedf4f568127848ff5e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_ramanujan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 05:11:59 np0005604790 podman[271283]: 2026-02-02 10:11:59.680514224 +0000 UTC m=+0.286537868 container start e3c7be192968908f34a453cbc4e51d12571e155c7cda2dedf4f568127848ff5e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_ramanujan, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 05:11:59 np0005604790 podman[271283]: 2026-02-02 10:11:59.70687005 +0000 UTC m=+0.312893694 container attach e3c7be192968908f34a453cbc4e51d12571e155c7cda2dedf4f568127848ff5e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_ramanujan, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Feb  2 05:12:00 np0005604790 lvm[271377]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 05:12:00 np0005604790 lvm[271377]: VG ceph_vg0 finished
Feb  2 05:12:00 np0005604790 amazing_ramanujan[271300]: {}
Feb  2 05:12:00 np0005604790 systemd[1]: libpod-e3c7be192968908f34a453cbc4e51d12571e155c7cda2dedf4f568127848ff5e.scope: Deactivated successfully.
Feb  2 05:12:00 np0005604790 podman[271283]: 2026-02-02 10:12:00.319758378 +0000 UTC m=+0.925782052 container died e3c7be192968908f34a453cbc4e51d12571e155c7cda2dedf4f568127848ff5e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_ramanujan, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 05:12:00 np0005604790 systemd[1]: var-lib-containers-storage-overlay-80e480c6bdfecae79fdd7056c12874b1cd66dbc798745a6bc3d020fa88bc481f-merged.mount: Deactivated successfully.
Feb  2 05:12:00 np0005604790 podman[271283]: 2026-02-02 10:12:00.442027691 +0000 UTC m=+1.048051325 container remove e3c7be192968908f34a453cbc4e51d12571e155c7cda2dedf4f568127848ff5e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_ramanujan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Feb  2 05:12:00 np0005604790 systemd[1]: libpod-conmon-e3c7be192968908f34a453cbc4e51d12571e155c7cda2dedf4f568127848ff5e.scope: Deactivated successfully.
Feb  2 05:12:00 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:12:00 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:12:00 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:12:00.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:12:00 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 05:12:00 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:12:00 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 05:12:00 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:12:00 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v978: 353 pgs: 353 active+clean; 200 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 6.6 KiB/s rd, 15 KiB/s wr, 1 op/s
Feb  2 05:12:01 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:12:01 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:12:01 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:12:01.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:12:01 np0005604790 nova_compute[252672]: 2026-02-02 10:12:01.398 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:12:01 np0005604790 nova_compute[252672]: 2026-02-02 10:12:01.412 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:12:01 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:12:01 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:12:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:12:01 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:12:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:12:01 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:12:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:12:01 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:12:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 05:12:02 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 05:12:02 np0005604790 podman[271447]: 2026-02-02 10:12:02.383650428 +0000 UTC m=+0.089099548 container health_status e39122d9482da8df802204ab6c35fb7c982874580968140e6f06bdfc8eefae36 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Feb  2 05:12:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:12:02 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:12:02 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:12:02 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:12:02.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:12:02 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v979: 353 pgs: 353 active+clean; 200 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 6.6 KiB/s rd, 15 KiB/s wr, 1 op/s
Feb  2 05:12:03 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:12:03 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:12:03 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:12:03.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:12:04 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:12:04 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:12:04 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:12:04.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:12:04 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v980: 353 pgs: 353 active+clean; 121 MiB data, 347 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 20 KiB/s wr, 30 op/s
Feb  2 05:12:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:12:04] "GET /metrics HTTP/1.1" 200 48482 "" "Prometheus/2.51.0"
Feb  2 05:12:04 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:12:04] "GET /metrics HTTP/1.1" 200 48482 "" "Prometheus/2.51.0"
Feb  2 05:12:05 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:12:05 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:12:05 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:12:05.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:12:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:12:05 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:12:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:12:05 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:12:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:12:05 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:12:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:12:06 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:12:06 np0005604790 nova_compute[252672]: 2026-02-02 10:12:06.399 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:12:06 np0005604790 nova_compute[252672]: 2026-02-02 10:12:06.413 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:12:06 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:12:06 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:12:06 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:12:06.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:12:06 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v981: 353 pgs: 353 active+clean; 121 MiB data, 347 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 6.7 KiB/s wr, 29 op/s
Feb  2 05:12:07 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:12:07.169Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb  2 05:12:07 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:12:07.169Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb  2 05:12:07 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:12:07 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:12:07 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:12:07.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:12:07 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:12:08 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:12:08 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:12:08 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:12:08.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:12:08 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v982: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 7.9 KiB/s wr, 57 op/s
Feb  2 05:12:09 np0005604790 nova_compute[252672]: 2026-02-02 10:12:09.281 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:12:09 np0005604790 nova_compute[252672]: 2026-02-02 10:12:09.304 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 05:12:09 np0005604790 nova_compute[252672]: 2026-02-02 10:12:09.304 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 05:12:09 np0005604790 nova_compute[252672]: 2026-02-02 10:12:09.305 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 05:12:09 np0005604790 nova_compute[252672]: 2026-02-02 10:12:09.305 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb  2 05:12:09 np0005604790 nova_compute[252672]: 2026-02-02 10:12:09.305 252676 DEBUG oslo_concurrency.processutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb  2 05:12:09 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:12:09 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:12:09 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:12:09.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:12:09 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 05:12:09 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4219362887' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb  2 05:12:09 np0005604790 nova_compute[252672]: 2026-02-02 10:12:09.783 252676 DEBUG oslo_concurrency.processutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 05:12:09 np0005604790 nova_compute[252672]: 2026-02-02 10:12:09.924 252676 WARNING nova.virt.libvirt.driver [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb  2 05:12:09 np0005604790 nova_compute[252672]: 2026-02-02 10:12:09.925 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4517MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb  2 05:12:09 np0005604790 nova_compute[252672]: 2026-02-02 10:12:09.926 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 05:12:09 np0005604790 nova_compute[252672]: 2026-02-02 10:12:09.926 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 05:12:10 np0005604790 nova_compute[252672]: 2026-02-02 10:12:10.001 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb  2 05:12:10 np0005604790 nova_compute[252672]: 2026-02-02 10:12:10.002 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb  2 05:12:10 np0005604790 nova_compute[252672]: 2026-02-02 10:12:10.027 252676 DEBUG oslo_concurrency.processutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb  2 05:12:10 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 05:12:10 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2263364014' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb  2 05:12:10 np0005604790 nova_compute[252672]: 2026-02-02 10:12:10.457 252676 DEBUG oslo_concurrency.processutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 05:12:10 np0005604790 nova_compute[252672]: 2026-02-02 10:12:10.463 252676 DEBUG nova.compute.provider_tree [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Inventory has not changed in ProviderTree for provider: 9e3db6fc-2145-4f13-bc7c-f7ae57d4e004 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb  2 05:12:10 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:12:10 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:12:10 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:12:10.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:12:10 np0005604790 nova_compute[252672]: 2026-02-02 10:12:10.532 252676 DEBUG nova.scheduler.client.report [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Inventory has not changed for provider 9e3db6fc-2145-4f13-bc7c-f7ae57d4e004 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb  2 05:12:10 np0005604790 nova_compute[252672]: 2026-02-02 10:12:10.534 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb  2 05:12:10 np0005604790 nova_compute[252672]: 2026-02-02 10:12:10.534 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.608s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 05:12:10 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v983: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 6.8 KiB/s wr, 56 op/s
Feb  2 05:12:11 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:12:10 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:12:11 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:12:10 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:12:11 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:12:10 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:12:11 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:12:11 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:12:11 np0005604790 podman[271525]: 2026-02-02 10:12:11.374444885 +0000 UTC m=+0.090296669 container health_status 29bea7eb4976451e3dfdb1e9a3aba30b10e64cbce131bf8d09548b4a224f8ee8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb  2 05:12:11 np0005604790 nova_compute[252672]: 2026-02-02 10:12:11.402 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:12:11 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:12:11 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:12:11 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:12:11.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:12:11 np0005604790 nova_compute[252672]: 2026-02-02 10:12:11.414 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:12:12 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:12:12 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:12:12 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:12:12 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:12:12.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:12:12 np0005604790 nova_compute[252672]: 2026-02-02 10:12:12.535 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:12:12 np0005604790 nova_compute[252672]: 2026-02-02 10:12:12.536 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:12:12 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v984: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 6.8 KiB/s wr, 56 op/s
Feb  2 05:12:13 np0005604790 nova_compute[252672]: 2026-02-02 10:12:13.282 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 05:12:13 np0005604790 nova_compute[252672]: 2026-02-02 10:12:13.283 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb  2 05:12:13 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:12:13 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:12:13 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:12:13.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:12:14 np0005604790 nova_compute[252672]: 2026-02-02 10:12:14.283 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 05:12:14 np0005604790 nova_compute[252672]: 2026-02-02 10:12:14.283 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb  2 05:12:14 np0005604790 nova_compute[252672]: 2026-02-02 10:12:14.283 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb  2 05:12:14 np0005604790 nova_compute[252672]: 2026-02-02 10:12:14.300 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb  2 05:12:14 np0005604790 nova_compute[252672]: 2026-02-02 10:12:14.300 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 05:12:14 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:12:14 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:12:14 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:12:14.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:12:15 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v985: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 6.8 KiB/s wr, 56 op/s
Feb  2 05:12:15 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:12:14] "GET /metrics HTTP/1.1" 200 48482 "" "Prometheus/2.51.0"
Feb  2 05:12:15 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:12:14] "GET /metrics HTTP/1.1" 200 48482 "" "Prometheus/2.51.0"
Feb  2 05:12:15 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:12:15 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:12:15 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:12:15.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:12:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:12:15 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:12:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:12:15 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:12:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:12:15 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:12:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:12:16 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:12:16 np0005604790 nova_compute[252672]: 2026-02-02 10:12:16.283 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 05:12:16 np0005604790 nova_compute[252672]: 2026-02-02 10:12:16.403 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:12:16 np0005604790 nova_compute[252672]: 2026-02-02 10:12:16.416 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:12:16 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:12:16 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:12:16 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:12:16.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:12:16 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v986: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Feb  2 05:12:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] Optimize plan auto_2026-02-02_10:12:17
Feb  2 05:12:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 05:12:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] do_upmap
Feb  2 05:12:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] pools ['default.rgw.meta', 'backups', 'images', 'vms', '.nfs', '.mgr', '.rgw.root', 'volumes', 'default.rgw.log', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.control']
Feb  2 05:12:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 05:12:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:12:17.170Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:12:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 05:12:17 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 05:12:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:12:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:12:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:12:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:12:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:12:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:12:17 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:12:17 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:12:17 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:12:17.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:12:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:12:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 05:12:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 05:12:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 05:12:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 05:12:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 05:12:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 05:12:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:12:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 05:12:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:12:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb  2 05:12:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:12:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:12:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:12:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:12:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:12:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Feb  2 05:12:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:12:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Feb  2 05:12:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:12:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:12:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:12:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 05:12:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:12:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Feb  2 05:12:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:12:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:12:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:12:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 05:12:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:12:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb  2 05:12:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 05:12:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 05:12:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 05:12:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 05:12:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 05:12:18 np0005604790 nova_compute[252672]: 2026-02-02 10:12:18.278 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 05:12:18 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:12:18 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:12:18 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:12:18.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:12:18 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v987: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Feb  2 05:12:19 np0005604790 ceph-mgr[74785]: [devicehealth INFO root] Check health
Feb  2 05:12:19 np0005604790 nova_compute[252672]: 2026-02-02 10:12:19.282 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 05:12:19 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:12:19 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:12:19 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:12:19.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:12:20 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:12:20 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:12:20 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:12:20.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:12:20 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v988: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:12:21 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:12:20 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:12:21 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:12:20 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:12:21 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:12:20 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:12:21 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:12:21 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:12:21 np0005604790 nova_compute[252672]: 2026-02-02 10:12:21.407 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:12:21 np0005604790 nova_compute[252672]: 2026-02-02 10:12:21.418 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:12:21 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:12:21 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:12:21 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:12:21.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:12:22 np0005604790 nova_compute[252672]: 2026-02-02 10:12:22.278 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 05:12:22 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:12:22 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:12:22 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:12:22 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:12:22.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:12:22 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v989: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:12:23 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:12:23 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:12:23 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:12:23.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:12:24 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:12:24 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:12:24 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:12:24.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:12:24 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v990: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:12:24 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:12:24] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Feb  2 05:12:24 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:12:24] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Feb  2 05:12:25 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:12:25 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:12:25 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:12:25.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:12:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:12:25 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:12:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:12:25 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:12:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:12:25 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:12:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:12:26 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:12:26 np0005604790 nova_compute[252672]: 2026-02-02 10:12:26.420 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 05:12:26 np0005604790 nova_compute[252672]: 2026-02-02 10:12:26.422 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 05:12:26 np0005604790 nova_compute[252672]: 2026-02-02 10:12:26.422 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Feb  2 05:12:26 np0005604790 nova_compute[252672]: 2026-02-02 10:12:26.422 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb  2 05:12:26 np0005604790 nova_compute[252672]: 2026-02-02 10:12:26.449 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:12:26 np0005604790 nova_compute[252672]: 2026-02-02 10:12:26.450 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb  2 05:12:26 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:12:26 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:12:26 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:12:26.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:12:26 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v991: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:12:27 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:12:27.170Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:12:27 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:12:27 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:12:27 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:12:27.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:12:27 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:12:28 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:12:28 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:12:28 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:12:28.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:12:28 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v992: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:12:29 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:12:29 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:12:29 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:12:29.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:12:30 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:12:30 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:12:30 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:12:30.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:12:30 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v993: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:12:31 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:12:30 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:12:31 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:12:30 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:12:31 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:12:30 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:12:31 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:12:31 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:12:31 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:12:31 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:12:31 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:12:31.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:12:31 np0005604790 nova_compute[252672]: 2026-02-02 10:12:31.450 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:12:31 np0005604790 nova_compute[252672]: 2026-02-02 10:12:31.451 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:12:32 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 05:12:32 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 05:12:32 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:12:32 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:12:32 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:12:32 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:12:32.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:12:32 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v994: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:12:33 np0005604790 podman[271591]: 2026-02-02 10:12:33.352262178 +0000 UTC m=+0.072716964 container health_status e39122d9482da8df802204ab6c35fb7c982874580968140e6f06bdfc8eefae36 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Feb  2 05:12:33 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:12:33 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:12:33 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:12:33.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:12:34 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:12:34 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:12:34 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:12:34.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:12:34 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v995: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:12:34 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:12:34] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Feb  2 05:12:34 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:12:34] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Feb  2 05:12:34 np0005604790 nova_compute[252672]: 2026-02-02 10:12:34.892 252676 DEBUG oslo_concurrency.lockutils [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Acquiring lock "51640fdb-9bb5-4927-8293-08caaa532942" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 05:12:34 np0005604790 nova_compute[252672]: 2026-02-02 10:12:34.893 252676 DEBUG oslo_concurrency.lockutils [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Lock "51640fdb-9bb5-4927-8293-08caaa532942" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 05:12:34 np0005604790 nova_compute[252672]: 2026-02-02 10:12:34.920 252676 DEBUG nova.compute.manager [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 51640fdb-9bb5-4927-8293-08caaa532942] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb  2 05:12:35 np0005604790 nova_compute[252672]: 2026-02-02 10:12:35.058 252676 DEBUG oslo_concurrency.lockutils [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 05:12:35 np0005604790 nova_compute[252672]: 2026-02-02 10:12:35.058 252676 DEBUG oslo_concurrency.lockutils [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 05:12:35 np0005604790 nova_compute[252672]: 2026-02-02 10:12:35.065 252676 DEBUG nova.virt.hardware [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb  2 05:12:35 np0005604790 nova_compute[252672]: 2026-02-02 10:12:35.065 252676 INFO nova.compute.claims [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 51640fdb-9bb5-4927-8293-08caaa532942] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb  2 05:12:35 np0005604790 nova_compute[252672]: 2026-02-02 10:12:35.191 252676 DEBUG oslo_concurrency.processutils [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 05:12:35 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:12:35 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:12:35 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:12:35.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:12:35 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 05:12:35 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3176901229' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb  2 05:12:35 np0005604790 nova_compute[252672]: 2026-02-02 10:12:35.645 252676 DEBUG oslo_concurrency.processutils [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 05:12:35 np0005604790 nova_compute[252672]: 2026-02-02 10:12:35.652 252676 DEBUG nova.compute.provider_tree [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Inventory has not changed in ProviderTree for provider: 9e3db6fc-2145-4f13-bc7c-f7ae57d4e004 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 05:12:35 np0005604790 nova_compute[252672]: 2026-02-02 10:12:35.674 252676 DEBUG nova.scheduler.client.report [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Inventory has not changed for provider 9e3db6fc-2145-4f13-bc7c-f7ae57d4e004 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 05:12:35 np0005604790 nova_compute[252672]: 2026-02-02 10:12:35.718 252676 DEBUG oslo_concurrency.lockutils [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.660s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 05:12:35 np0005604790 nova_compute[252672]: 2026-02-02 10:12:35.719 252676 DEBUG nova.compute.manager [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 51640fdb-9bb5-4927-8293-08caaa532942] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb  2 05:12:35 np0005604790 nova_compute[252672]: 2026-02-02 10:12:35.786 252676 DEBUG nova.compute.manager [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 51640fdb-9bb5-4927-8293-08caaa532942] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb  2 05:12:35 np0005604790 nova_compute[252672]: 2026-02-02 10:12:35.786 252676 DEBUG nova.network.neutron [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 51640fdb-9bb5-4927-8293-08caaa532942] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb  2 05:12:35 np0005604790 nova_compute[252672]: 2026-02-02 10:12:35.819 252676 INFO nova.virt.libvirt.driver [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 51640fdb-9bb5-4927-8293-08caaa532942] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb  2 05:12:35 np0005604790 nova_compute[252672]: 2026-02-02 10:12:35.852 252676 DEBUG nova.compute.manager [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 51640fdb-9bb5-4927-8293-08caaa532942] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb  2 05:12:35 np0005604790 nova_compute[252672]: 2026-02-02 10:12:35.979 252676 DEBUG nova.compute.manager [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 51640fdb-9bb5-4927-8293-08caaa532942] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb  2 05:12:35 np0005604790 nova_compute[252672]: 2026-02-02 10:12:35.980 252676 DEBUG nova.virt.libvirt.driver [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 51640fdb-9bb5-4927-8293-08caaa532942] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb  2 05:12:35 np0005604790 nova_compute[252672]: 2026-02-02 10:12:35.981 252676 INFO nova.virt.libvirt.driver [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 51640fdb-9bb5-4927-8293-08caaa532942] Creating image(s)#033[00m
Feb  2 05:12:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:12:35 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:12:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:12:35 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:12:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:12:35 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:12:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:12:36 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:12:36 np0005604790 nova_compute[252672]: 2026-02-02 10:12:36.010 252676 DEBUG nova.storage.rbd_utils [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] rbd image 51640fdb-9bb5-4927-8293-08caaa532942_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 05:12:36 np0005604790 nova_compute[252672]: 2026-02-02 10:12:36.042 252676 DEBUG nova.storage.rbd_utils [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] rbd image 51640fdb-9bb5-4927-8293-08caaa532942_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 05:12:36 np0005604790 nova_compute[252672]: 2026-02-02 10:12:36.076 252676 DEBUG nova.storage.rbd_utils [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] rbd image 51640fdb-9bb5-4927-8293-08caaa532942_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 05:12:36 np0005604790 nova_compute[252672]: 2026-02-02 10:12:36.082 252676 DEBUG oslo_concurrency.processutils [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b48fe8b86a7168723be684d0fce89ef3f0abcc61 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 05:12:36 np0005604790 nova_compute[252672]: 2026-02-02 10:12:36.137 252676 DEBUG oslo_concurrency.processutils [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b48fe8b86a7168723be684d0fce89ef3f0abcc61 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 05:12:36 np0005604790 nova_compute[252672]: 2026-02-02 10:12:36.138 252676 DEBUG oslo_concurrency.lockutils [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Acquiring lock "b48fe8b86a7168723be684d0fce89ef3f0abcc61" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 05:12:36 np0005604790 nova_compute[252672]: 2026-02-02 10:12:36.139 252676 DEBUG oslo_concurrency.lockutils [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Lock "b48fe8b86a7168723be684d0fce89ef3f0abcc61" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 05:12:36 np0005604790 nova_compute[252672]: 2026-02-02 10:12:36.140 252676 DEBUG oslo_concurrency.lockutils [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Lock "b48fe8b86a7168723be684d0fce89ef3f0abcc61" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 05:12:36 np0005604790 nova_compute[252672]: 2026-02-02 10:12:36.175 252676 DEBUG nova.storage.rbd_utils [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] rbd image 51640fdb-9bb5-4927-8293-08caaa532942_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 05:12:36 np0005604790 nova_compute[252672]: 2026-02-02 10:12:36.180 252676 DEBUG oslo_concurrency.processutils [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b48fe8b86a7168723be684d0fce89ef3f0abcc61 51640fdb-9bb5-4927-8293-08caaa532942_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 05:12:36 np0005604790 nova_compute[252672]: 2026-02-02 10:12:36.449 252676 DEBUG oslo_concurrency.processutils [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b48fe8b86a7168723be684d0fce89ef3f0abcc61 51640fdb-9bb5-4927-8293-08caaa532942_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.269s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 05:12:36 np0005604790 nova_compute[252672]: 2026-02-02 10:12:36.487 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:12:36 np0005604790 nova_compute[252672]: 2026-02-02 10:12:36.489 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 05:12:36 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:12:36 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:12:36 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:12:36.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:12:36 np0005604790 nova_compute[252672]: 2026-02-02 10:12:36.539 252676 DEBUG nova.storage.rbd_utils [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] resizing rbd image 51640fdb-9bb5-4927-8293-08caaa532942_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb  2 05:12:36 np0005604790 nova_compute[252672]: 2026-02-02 10:12:36.651 252676 DEBUG nova.objects.instance [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Lazy-loading 'migration_context' on Instance uuid 51640fdb-9bb5-4927-8293-08caaa532942 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 05:12:36 np0005604790 nova_compute[252672]: 2026-02-02 10:12:36.668 252676 DEBUG nova.virt.libvirt.driver [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 51640fdb-9bb5-4927-8293-08caaa532942] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb  2 05:12:36 np0005604790 nova_compute[252672]: 2026-02-02 10:12:36.668 252676 DEBUG nova.virt.libvirt.driver [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 51640fdb-9bb5-4927-8293-08caaa532942] Ensure instance console log exists: /var/lib/nova/instances/51640fdb-9bb5-4927-8293-08caaa532942/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb  2 05:12:36 np0005604790 nova_compute[252672]: 2026-02-02 10:12:36.669 252676 DEBUG oslo_concurrency.lockutils [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 05:12:36 np0005604790 nova_compute[252672]: 2026-02-02 10:12:36.669 252676 DEBUG oslo_concurrency.lockutils [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 05:12:36 np0005604790 nova_compute[252672]: 2026-02-02 10:12:36.669 252676 DEBUG oslo_concurrency.lockutils [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 05:12:36 np0005604790 nova_compute[252672]: 2026-02-02 10:12:36.679 252676 DEBUG nova.policy [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1b1695a2a70d4aa0aa350ba17d8f6d5e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'efbfe697ca674d72b47da5adf3e42c0c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb  2 05:12:36 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v996: 353 pgs: 353 active+clean; 41 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:12:36 np0005604790 ceph-mon[74489]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #63. Immutable memtables: 0.
Feb  2 05:12:36 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:12:36.883997) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb  2 05:12:36 np0005604790 ceph-mon[74489]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 63
Feb  2 05:12:36 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770027156884046, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 1302, "num_deletes": 501, "total_data_size": 1688727, "memory_usage": 1716576, "flush_reason": "Manual Compaction"}
Feb  2 05:12:36 np0005604790 ceph-mon[74489]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #64: started
Feb  2 05:12:36 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770027156901436, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 64, "file_size": 1165947, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28199, "largest_seqno": 29500, "table_properties": {"data_size": 1161140, "index_size": 1755, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 15357, "raw_average_key_size": 19, "raw_value_size": 1148938, "raw_average_value_size": 1467, "num_data_blocks": 77, "num_entries": 783, "num_filter_entries": 783, "num_deletions": 501, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770027068, "oldest_key_time": 1770027068, "file_creation_time": 1770027156, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "07840aea-639a-4cd3-a598-1774a042b57b", "db_session_id": "W2PO4QU95YGVZQBG6TZ2", "orig_file_number": 64, "seqno_to_time_mapping": "N/A"}}
Feb  2 05:12:36 np0005604790 ceph-mon[74489]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 17552 microseconds, and 4637 cpu microseconds.
Feb  2 05:12:36 np0005604790 ceph-mon[74489]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 05:12:36 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:12:36.901548) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #64: 1165947 bytes OK
Feb  2 05:12:36 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:12:36.901574) [db/memtable_list.cc:519] [default] Level-0 commit table #64 started
Feb  2 05:12:36 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:12:36.905610) [db/memtable_list.cc:722] [default] Level-0 commit table #64: memtable #1 done
Feb  2 05:12:36 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:12:36.905648) EVENT_LOG_v1 {"time_micros": 1770027156905638, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb  2 05:12:36 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:12:36.905676) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb  2 05:12:36 np0005604790 ceph-mon[74489]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 1681932, prev total WAL file size 1681932, number of live WAL files 2.
Feb  2 05:12:36 np0005604790 ceph-mon[74489]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000060.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 05:12:36 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:12:36.906788) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Feb  2 05:12:36 np0005604790 ceph-mon[74489]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb  2 05:12:36 np0005604790 ceph-mon[74489]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [64(1138KB)], [62(15MB)]
Feb  2 05:12:36 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770027156906906, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [64], "files_L6": [62], "score": -1, "input_data_size": 17278303, "oldest_snapshot_seqno": -1}
Feb  2 05:12:37 np0005604790 ceph-mon[74489]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #65: 5783 keys, 11645075 bytes, temperature: kUnknown
Feb  2 05:12:37 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770027157042611, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 65, "file_size": 11645075, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11609113, "index_size": 20428, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14469, "raw_key_size": 149579, "raw_average_key_size": 25, "raw_value_size": 11507476, "raw_average_value_size": 1989, "num_data_blocks": 818, "num_entries": 5783, "num_filter_entries": 5783, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770025063, "oldest_key_time": 0, "file_creation_time": 1770027156, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "07840aea-639a-4cd3-a598-1774a042b57b", "db_session_id": "W2PO4QU95YGVZQBG6TZ2", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Feb  2 05:12:37 np0005604790 ceph-mon[74489]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 05:12:37 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:12:37.043004) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 11645075 bytes
Feb  2 05:12:37 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:12:37.048907) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 127.3 rd, 85.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 15.4 +0.0 blob) out(11.1 +0.0 blob), read-write-amplify(24.8) write-amplify(10.0) OK, records in: 6769, records dropped: 986 output_compression: NoCompression
Feb  2 05:12:37 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:12:37.048942) EVENT_LOG_v1 {"time_micros": 1770027157048926, "job": 34, "event": "compaction_finished", "compaction_time_micros": 135781, "compaction_time_cpu_micros": 28150, "output_level": 6, "num_output_files": 1, "total_output_size": 11645075, "num_input_records": 6769, "num_output_records": 5783, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb  2 05:12:37 np0005604790 ceph-mon[74489]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000064.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 05:12:37 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770027157049447, "job": 34, "event": "table_file_deletion", "file_number": 64}
Feb  2 05:12:37 np0005604790 ceph-mon[74489]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 05:12:37 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770027157052482, "job": 34, "event": "table_file_deletion", "file_number": 62}
Feb  2 05:12:37 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:12:36.906575) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 05:12:37 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:12:37.052560) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 05:12:37 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:12:37.052567) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 05:12:37 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:12:37.052745) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 05:12:37 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:12:37.052753) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 05:12:37 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:12:37.052757) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 05:12:37 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:12:37.171Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:12:37 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:12:37 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:12:37 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:12:37.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:12:37 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:12:38 np0005604790 nova_compute[252672]: 2026-02-02 10:12:38.241 252676 DEBUG nova.network.neutron [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 51640fdb-9bb5-4927-8293-08caaa532942] Successfully created port: 792f51ec-051b-472a-bfc0-65b93275a823 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb  2 05:12:38 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:12:38 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:12:38 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:12:38.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:12:38 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v997: 353 pgs: 353 active+clean; 88 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb  2 05:12:39 np0005604790 nova_compute[252672]: 2026-02-02 10:12:39.407 252676 DEBUG nova.network.neutron [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 51640fdb-9bb5-4927-8293-08caaa532942] Successfully updated port: 792f51ec-051b-472a-bfc0-65b93275a823 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb  2 05:12:39 np0005604790 nova_compute[252672]: 2026-02-02 10:12:39.428 252676 DEBUG oslo_concurrency.lockutils [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Acquiring lock "refresh_cache-51640fdb-9bb5-4927-8293-08caaa532942" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 05:12:39 np0005604790 nova_compute[252672]: 2026-02-02 10:12:39.429 252676 DEBUG oslo_concurrency.lockutils [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Acquired lock "refresh_cache-51640fdb-9bb5-4927-8293-08caaa532942" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 05:12:39 np0005604790 nova_compute[252672]: 2026-02-02 10:12:39.429 252676 DEBUG nova.network.neutron [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 51640fdb-9bb5-4927-8293-08caaa532942] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb  2 05:12:39 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:12:39 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:12:39 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:12:39.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:12:39 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[104673]: logger=cleanup t=2026-02-02T10:12:39.491304851Z level=info msg="Completed cleanup jobs" duration=23.875611ms
Feb  2 05:12:39 np0005604790 nova_compute[252672]: 2026-02-02 10:12:39.581 252676 DEBUG nova.compute.manager [req-195b4100-3c2e-44c1-be32-5a25e0d0f9a5 req-d6191ec0-24f2-4222-89ae-9978a4fba110 b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 51640fdb-9bb5-4927-8293-08caaa532942] Received event network-changed-792f51ec-051b-472a-bfc0-65b93275a823 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 05:12:39 np0005604790 nova_compute[252672]: 2026-02-02 10:12:39.582 252676 DEBUG nova.compute.manager [req-195b4100-3c2e-44c1-be32-5a25e0d0f9a5 req-d6191ec0-24f2-4222-89ae-9978a4fba110 b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 51640fdb-9bb5-4927-8293-08caaa532942] Refreshing instance network info cache due to event network-changed-792f51ec-051b-472a-bfc0-65b93275a823. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 05:12:39 np0005604790 nova_compute[252672]: 2026-02-02 10:12:39.582 252676 DEBUG oslo_concurrency.lockutils [req-195b4100-3c2e-44c1-be32-5a25e0d0f9a5 req-d6191ec0-24f2-4222-89ae-9978a4fba110 b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Acquiring lock "refresh_cache-51640fdb-9bb5-4927-8293-08caaa532942" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 05:12:39 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[104673]: logger=plugins.update.checker t=2026-02-02T10:12:39.593651488Z level=info msg="Update check succeeded" duration=48.991286ms
Feb  2 05:12:39 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[104673]: logger=grafana.update.checker t=2026-02-02T10:12:39.61301736Z level=info msg="Update check succeeded" duration=70.944316ms
Feb  2 05:12:39 np0005604790 nova_compute[252672]: 2026-02-02 10:12:39.645 252676 DEBUG nova.network.neutron [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 51640fdb-9bb5-4927-8293-08caaa532942] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb  2 05:12:40 np0005604790 nova_compute[252672]: 2026-02-02 10:12:40.499 252676 DEBUG nova.network.neutron [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 51640fdb-9bb5-4927-8293-08caaa532942] Updating instance_info_cache with network_info: [{"id": "792f51ec-051b-472a-bfc0-65b93275a823", "address": "fa:16:3e:b3:52:4f", "network": {"id": "31e2c386-2e8c-4f03-82cf-3176ce6f5a71", "bridge": "br-int", "label": "tempest-network-smoke--1434795689", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "efbfe697ca674d72b47da5adf3e42c0c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap792f51ec-05", "ovs_interfaceid": "792f51ec-051b-472a-bfc0-65b93275a823", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 05:12:40 np0005604790 nova_compute[252672]: 2026-02-02 10:12:40.516 252676 DEBUG oslo_concurrency.lockutils [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Releasing lock "refresh_cache-51640fdb-9bb5-4927-8293-08caaa532942" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 05:12:40 np0005604790 nova_compute[252672]: 2026-02-02 10:12:40.516 252676 DEBUG nova.compute.manager [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 51640fdb-9bb5-4927-8293-08caaa532942] Instance network_info: |[{"id": "792f51ec-051b-472a-bfc0-65b93275a823", "address": "fa:16:3e:b3:52:4f", "network": {"id": "31e2c386-2e8c-4f03-82cf-3176ce6f5a71", "bridge": "br-int", "label": "tempest-network-smoke--1434795689", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "efbfe697ca674d72b47da5adf3e42c0c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap792f51ec-05", "ovs_interfaceid": "792f51ec-051b-472a-bfc0-65b93275a823", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb  2 05:12:40 np0005604790 nova_compute[252672]: 2026-02-02 10:12:40.516 252676 DEBUG oslo_concurrency.lockutils [req-195b4100-3c2e-44c1-be32-5a25e0d0f9a5 req-d6191ec0-24f2-4222-89ae-9978a4fba110 b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Acquired lock "refresh_cache-51640fdb-9bb5-4927-8293-08caaa532942" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 05:12:40 np0005604790 nova_compute[252672]: 2026-02-02 10:12:40.517 252676 DEBUG nova.network.neutron [req-195b4100-3c2e-44c1-be32-5a25e0d0f9a5 req-d6191ec0-24f2-4222-89ae-9978a4fba110 b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 51640fdb-9bb5-4927-8293-08caaa532942] Refreshing network info cache for port 792f51ec-051b-472a-bfc0-65b93275a823 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 05:12:40 np0005604790 nova_compute[252672]: 2026-02-02 10:12:40.519 252676 DEBUG nova.virt.libvirt.driver [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 51640fdb-9bb5-4927-8293-08caaa532942] Start _get_guest_xml network_info=[{"id": "792f51ec-051b-472a-bfc0-65b93275a823", "address": "fa:16:3e:b3:52:4f", "network": {"id": "31e2c386-2e8c-4f03-82cf-3176ce6f5a71", "bridge": "br-int", "label": "tempest-network-smoke--1434795689", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "efbfe697ca674d72b47da5adf3e42c0c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap792f51ec-05", "ovs_interfaceid": "792f51ec-051b-472a-bfc0-65b93275a823", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T10:01:42Z,direct_url=<?>,disk_format='qcow2',id=d5e062d7-95ef-409c-9ad0-60f7cf6f44ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='823d3e7e313a44e9a50531e3fef22a1b',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T10:01:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'encrypted': False, 'encryption_options': None, 'device_type': 'disk', 'size': 0, 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'd5e062d7-95ef-409c-9ad0-60f7cf6f44ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb  2 05:12:40 np0005604790 nova_compute[252672]: 2026-02-02 10:12:40.524 252676 WARNING nova.virt.libvirt.driver [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 05:12:40 np0005604790 nova_compute[252672]: 2026-02-02 10:12:40.528 252676 DEBUG nova.virt.libvirt.host [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb  2 05:12:40 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:12:40 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:12:40 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:12:40.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:12:40 np0005604790 nova_compute[252672]: 2026-02-02 10:12:40.529 252676 DEBUG nova.virt.libvirt.host [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb  2 05:12:40 np0005604790 nova_compute[252672]: 2026-02-02 10:12:40.535 252676 DEBUG nova.virt.libvirt.host [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb  2 05:12:40 np0005604790 nova_compute[252672]: 2026-02-02 10:12:40.535 252676 DEBUG nova.virt.libvirt.host [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb  2 05:12:40 np0005604790 nova_compute[252672]: 2026-02-02 10:12:40.536 252676 DEBUG nova.virt.libvirt.driver [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb  2 05:12:40 np0005604790 nova_compute[252672]: 2026-02-02 10:12:40.536 252676 DEBUG nova.virt.hardware [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-02T10:01:40Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1194feb9-e285-414e-825a-1e77171d092f',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-02T10:01:42Z,direct_url=<?>,disk_format='qcow2',id=d5e062d7-95ef-409c-9ad0-60f7cf6f44ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='823d3e7e313a44e9a50531e3fef22a1b',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-02T10:01:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb  2 05:12:40 np0005604790 nova_compute[252672]: 2026-02-02 10:12:40.537 252676 DEBUG nova.virt.hardware [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb  2 05:12:40 np0005604790 nova_compute[252672]: 2026-02-02 10:12:40.537 252676 DEBUG nova.virt.hardware [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb  2 05:12:40 np0005604790 nova_compute[252672]: 2026-02-02 10:12:40.537 252676 DEBUG nova.virt.hardware [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb  2 05:12:40 np0005604790 nova_compute[252672]: 2026-02-02 10:12:40.537 252676 DEBUG nova.virt.hardware [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb  2 05:12:40 np0005604790 nova_compute[252672]: 2026-02-02 10:12:40.537 252676 DEBUG nova.virt.hardware [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb  2 05:12:40 np0005604790 nova_compute[252672]: 2026-02-02 10:12:40.538 252676 DEBUG nova.virt.hardware [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb  2 05:12:40 np0005604790 nova_compute[252672]: 2026-02-02 10:12:40.538 252676 DEBUG nova.virt.hardware [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb  2 05:12:40 np0005604790 nova_compute[252672]: 2026-02-02 10:12:40.538 252676 DEBUG nova.virt.hardware [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb  2 05:12:40 np0005604790 nova_compute[252672]: 2026-02-02 10:12:40.538 252676 DEBUG nova.virt.hardware [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb  2 05:12:40 np0005604790 nova_compute[252672]: 2026-02-02 10:12:40.538 252676 DEBUG nova.virt.hardware [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb  2 05:12:40 np0005604790 nova_compute[252672]: 2026-02-02 10:12:40.541 252676 DEBUG oslo_concurrency.processutils [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 05:12:40 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v998: 353 pgs: 353 active+clean; 88 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb  2 05:12:41 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:12:40 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:12:41 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:12:40 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:12:41 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:12:40 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:12:41 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:12:41 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:12:41 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 05:12:41 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1433165583' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Feb  2 05:12:41 np0005604790 nova_compute[252672]: 2026-02-02 10:12:41.044 252676 DEBUG oslo_concurrency.processutils [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 05:12:41 np0005604790 nova_compute[252672]: 2026-02-02 10:12:41.078 252676 DEBUG nova.storage.rbd_utils [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] rbd image 51640fdb-9bb5-4927-8293-08caaa532942_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 05:12:41 np0005604790 nova_compute[252672]: 2026-02-02 10:12:41.084 252676 DEBUG oslo_concurrency.processutils [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 05:12:41 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:12:41 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:12:41 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:12:41.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:12:41 np0005604790 nova_compute[252672]: 2026-02-02 10:12:41.490 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 05:12:41 np0005604790 nova_compute[252672]: 2026-02-02 10:12:41.492 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 05:12:41 np0005604790 nova_compute[252672]: 2026-02-02 10:12:41.492 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Feb  2 05:12:41 np0005604790 nova_compute[252672]: 2026-02-02 10:12:41.492 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb  2 05:12:41 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb  2 05:12:41 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3208385706' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Feb  2 05:12:41 np0005604790 nova_compute[252672]: 2026-02-02 10:12:41.499 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:12:41 np0005604790 nova_compute[252672]: 2026-02-02 10:12:41.501 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb  2 05:12:41 np0005604790 nova_compute[252672]: 2026-02-02 10:12:41.516 252676 DEBUG oslo_concurrency.processutils [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 05:12:41 np0005604790 nova_compute[252672]: 2026-02-02 10:12:41.519 252676 DEBUG nova.virt.libvirt.vif [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T10:12:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-682360606',display_name='tempest-TestNetworkBasicOps-server-682360606',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-682360606',id=13,image_ref='d5e062d7-95ef-409c-9ad0-60f7cf6f44ce',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBILK3wOwGXM1goDImpFgEf77tzBq6YVZpuogyNt7BE+Zy6UdsArmDPsiLvX2YZMZ50Eg3ODUS0PcWsAtlmHfFptp//Krplct4XxXAHZauV/PWIHH81rXtn5nOQYjLrfoqA==',key_name='tempest-TestNetworkBasicOps-1013861554',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='efbfe697ca674d72b47da5adf3e42c0c',ramdisk_id='',reservation_id='r-g9n2n1hh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='d5e062d7-95ef-409c-9ad0-60f7cf6f44ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-793549693',owner_user_name='tempest-TestNetworkBasicOps-793549693-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T10:12:35Z,user_data=None,user_id='1b1695a2a70d4aa0aa350ba17d8f6d5e',uuid=51640fdb-9bb5-4927-8293-08caaa532942,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "792f51ec-051b-472a-bfc0-65b93275a823", "address": "fa:16:3e:b3:52:4f", "network": {"id": "31e2c386-2e8c-4f03-82cf-3176ce6f5a71", "bridge": "br-int", "label": "tempest-network-smoke--1434795689", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "efbfe697ca674d72b47da5adf3e42c0c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap792f51ec-05", "ovs_interfaceid": "792f51ec-051b-472a-bfc0-65b93275a823", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb  2 05:12:41 np0005604790 nova_compute[252672]: 2026-02-02 10:12:41.520 252676 DEBUG nova.network.os_vif_util [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Converting VIF {"id": "792f51ec-051b-472a-bfc0-65b93275a823", "address": "fa:16:3e:b3:52:4f", "network": {"id": "31e2c386-2e8c-4f03-82cf-3176ce6f5a71", "bridge": "br-int", "label": "tempest-network-smoke--1434795689", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "efbfe697ca674d72b47da5adf3e42c0c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap792f51ec-05", "ovs_interfaceid": "792f51ec-051b-472a-bfc0-65b93275a823", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 05:12:41 np0005604790 nova_compute[252672]: 2026-02-02 10:12:41.522 252676 DEBUG nova.network.os_vif_util [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b3:52:4f,bridge_name='br-int',has_traffic_filtering=True,id=792f51ec-051b-472a-bfc0-65b93275a823,network=Network(31e2c386-2e8c-4f03-82cf-3176ce6f5a71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap792f51ec-05') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 05:12:41 np0005604790 nova_compute[252672]: 2026-02-02 10:12:41.524 252676 DEBUG nova.objects.instance [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Lazy-loading 'pci_devices' on Instance uuid 51640fdb-9bb5-4927-8293-08caaa532942 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 05:12:41 np0005604790 nova_compute[252672]: 2026-02-02 10:12:41.547 252676 DEBUG nova.virt.libvirt.driver [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 51640fdb-9bb5-4927-8293-08caaa532942] End _get_guest_xml xml=<domain type="kvm">
Feb  2 05:12:41 np0005604790 nova_compute[252672]:  <uuid>51640fdb-9bb5-4927-8293-08caaa532942</uuid>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:  <name>instance-0000000d</name>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:  <memory>131072</memory>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:  <vcpu>1</vcpu>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:  <metadata>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb  2 05:12:41 np0005604790 nova_compute[252672]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:      <nova:name>tempest-TestNetworkBasicOps-server-682360606</nova:name>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:      <nova:creationTime>2026-02-02 10:12:40</nova:creationTime>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:      <nova:flavor name="m1.nano">
Feb  2 05:12:41 np0005604790 nova_compute[252672]:        <nova:memory>128</nova:memory>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:        <nova:disk>1</nova:disk>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:        <nova:swap>0</nova:swap>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:        <nova:ephemeral>0</nova:ephemeral>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:        <nova:vcpus>1</nova:vcpus>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:      </nova:flavor>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:      <nova:owner>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:        <nova:user uuid="1b1695a2a70d4aa0aa350ba17d8f6d5e">tempest-TestNetworkBasicOps-793549693-project-member</nova:user>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:        <nova:project uuid="efbfe697ca674d72b47da5adf3e42c0c">tempest-TestNetworkBasicOps-793549693</nova:project>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:      </nova:owner>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:      <nova:root type="image" uuid="d5e062d7-95ef-409c-9ad0-60f7cf6f44ce"/>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:      <nova:ports>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:        <nova:port uuid="792f51ec-051b-472a-bfc0-65b93275a823">
Feb  2 05:12:41 np0005604790 nova_compute[252672]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:        </nova:port>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:      </nova:ports>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:    </nova:instance>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:  </metadata>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:  <sysinfo type="smbios">
Feb  2 05:12:41 np0005604790 nova_compute[252672]:    <system>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:      <entry name="manufacturer">RDO</entry>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:      <entry name="product">OpenStack Compute</entry>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:      <entry name="serial">51640fdb-9bb5-4927-8293-08caaa532942</entry>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:      <entry name="uuid">51640fdb-9bb5-4927-8293-08caaa532942</entry>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:      <entry name="family">Virtual Machine</entry>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:    </system>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:  </sysinfo>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:  <os>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:    <type arch="x86_64" machine="q35">hvm</type>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:    <boot dev="hd"/>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:    <smbios mode="sysinfo"/>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:  </os>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:  <features>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:    <acpi/>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:    <apic/>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:    <vmcoreinfo/>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:  </features>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:  <clock offset="utc">
Feb  2 05:12:41 np0005604790 nova_compute[252672]:    <timer name="pit" tickpolicy="delay"/>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:    <timer name="rtc" tickpolicy="catchup"/>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:    <timer name="hpet" present="no"/>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:  </clock>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:  <cpu mode="host-model" match="exact">
Feb  2 05:12:41 np0005604790 nova_compute[252672]:    <topology sockets="1" cores="1" threads="1"/>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:  </cpu>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:  <devices>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:    <disk type="network" device="disk">
Feb  2 05:12:41 np0005604790 nova_compute[252672]:      <driver type="raw" cache="none"/>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:      <source protocol="rbd" name="vms/51640fdb-9bb5-4927-8293-08caaa532942_disk">
Feb  2 05:12:41 np0005604790 nova_compute[252672]:        <host name="192.168.122.100" port="6789"/>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:        <host name="192.168.122.102" port="6789"/>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:        <host name="192.168.122.101" port="6789"/>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:      </source>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:      <auth username="openstack">
Feb  2 05:12:41 np0005604790 nova_compute[252672]:        <secret type="ceph" uuid="d241d473-9fcb-5f74-b163-f1ca4454e7f1"/>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:      </auth>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:      <target dev="vda" bus="virtio"/>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:    </disk>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:    <disk type="network" device="cdrom">
Feb  2 05:12:41 np0005604790 nova_compute[252672]:      <driver type="raw" cache="none"/>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:      <source protocol="rbd" name="vms/51640fdb-9bb5-4927-8293-08caaa532942_disk.config">
Feb  2 05:12:41 np0005604790 nova_compute[252672]:        <host name="192.168.122.100" port="6789"/>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:        <host name="192.168.122.102" port="6789"/>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:        <host name="192.168.122.101" port="6789"/>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:      </source>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:      <auth username="openstack">
Feb  2 05:12:41 np0005604790 nova_compute[252672]:        <secret type="ceph" uuid="d241d473-9fcb-5f74-b163-f1ca4454e7f1"/>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:      </auth>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:      <target dev="sda" bus="sata"/>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:    </disk>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:    <interface type="ethernet">
Feb  2 05:12:41 np0005604790 nova_compute[252672]:      <mac address="fa:16:3e:b3:52:4f"/>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:      <model type="virtio"/>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:      <driver name="vhost" rx_queue_size="512"/>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:      <mtu size="1442"/>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:      <target dev="tap792f51ec-05"/>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:    </interface>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:    <serial type="pty">
Feb  2 05:12:41 np0005604790 nova_compute[252672]:      <log file="/var/lib/nova/instances/51640fdb-9bb5-4927-8293-08caaa532942/console.log" append="off"/>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:    </serial>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:    <video>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:      <model type="virtio"/>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:    </video>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:    <input type="tablet" bus="usb"/>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:    <rng model="virtio">
Feb  2 05:12:41 np0005604790 nova_compute[252672]:      <backend model="random">/dev/urandom</backend>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:    </rng>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root"/>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:    <controller type="pci" model="pcie-root-port"/>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:    <controller type="usb" index="0"/>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:    <memballoon model="virtio">
Feb  2 05:12:41 np0005604790 nova_compute[252672]:      <stats period="10"/>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:    </memballoon>
Feb  2 05:12:41 np0005604790 nova_compute[252672]:  </devices>
Feb  2 05:12:41 np0005604790 nova_compute[252672]: </domain>
Feb  2 05:12:41 np0005604790 nova_compute[252672]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb  2 05:12:41 np0005604790 nova_compute[252672]: 2026-02-02 10:12:41.548 252676 DEBUG nova.compute.manager [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 51640fdb-9bb5-4927-8293-08caaa532942] Preparing to wait for external event network-vif-plugged-792f51ec-051b-472a-bfc0-65b93275a823 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb  2 05:12:41 np0005604790 nova_compute[252672]: 2026-02-02 10:12:41.548 252676 DEBUG oslo_concurrency.lockutils [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Acquiring lock "51640fdb-9bb5-4927-8293-08caaa532942-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 05:12:41 np0005604790 nova_compute[252672]: 2026-02-02 10:12:41.549 252676 DEBUG oslo_concurrency.lockutils [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Lock "51640fdb-9bb5-4927-8293-08caaa532942-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 05:12:41 np0005604790 nova_compute[252672]: 2026-02-02 10:12:41.549 252676 DEBUG oslo_concurrency.lockutils [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Lock "51640fdb-9bb5-4927-8293-08caaa532942-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 05:12:41 np0005604790 nova_compute[252672]: 2026-02-02 10:12:41.551 252676 DEBUG nova.virt.libvirt.vif [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-02T10:12:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-682360606',display_name='tempest-TestNetworkBasicOps-server-682360606',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-682360606',id=13,image_ref='d5e062d7-95ef-409c-9ad0-60f7cf6f44ce',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBILK3wOwGXM1goDImpFgEf77tzBq6YVZpuogyNt7BE+Zy6UdsArmDPsiLvX2YZMZ50Eg3ODUS0PcWsAtlmHfFptp//Krplct4XxXAHZauV/PWIHH81rXtn5nOQYjLrfoqA==',key_name='tempest-TestNetworkBasicOps-1013861554',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='efbfe697ca674d72b47da5adf3e42c0c',ramdisk_id='',reservation_id='r-g9n2n1hh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='d5e062d7-95ef-409c-9ad0-60f7cf6f44ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-793549693',owner_user_name='tempest-TestNetworkBasicOps-793549693-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-02T10:12:35Z,user_data=None,user_id='1b1695a2a70d4aa0aa350ba17d8f6d5e',uuid=51640fdb-9bb5-4927-8293-08caaa532942,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "792f51ec-051b-472a-bfc0-65b93275a823", "address": "fa:16:3e:b3:52:4f", "network": {"id": "31e2c386-2e8c-4f03-82cf-3176ce6f5a71", "bridge": "br-int", "label": "tempest-network-smoke--1434795689", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "efbfe697ca674d72b47da5adf3e42c0c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap792f51ec-05", "ovs_interfaceid": "792f51ec-051b-472a-bfc0-65b93275a823", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb  2 05:12:41 np0005604790 nova_compute[252672]: 2026-02-02 10:12:41.551 252676 DEBUG nova.network.os_vif_util [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Converting VIF {"id": "792f51ec-051b-472a-bfc0-65b93275a823", "address": "fa:16:3e:b3:52:4f", "network": {"id": "31e2c386-2e8c-4f03-82cf-3176ce6f5a71", "bridge": "br-int", "label": "tempest-network-smoke--1434795689", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "efbfe697ca674d72b47da5adf3e42c0c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap792f51ec-05", "ovs_interfaceid": "792f51ec-051b-472a-bfc0-65b93275a823", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 05:12:41 np0005604790 nova_compute[252672]: 2026-02-02 10:12:41.552 252676 DEBUG nova.network.os_vif_util [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b3:52:4f,bridge_name='br-int',has_traffic_filtering=True,id=792f51ec-051b-472a-bfc0-65b93275a823,network=Network(31e2c386-2e8c-4f03-82cf-3176ce6f5a71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap792f51ec-05') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 05:12:41 np0005604790 nova_compute[252672]: 2026-02-02 10:12:41.553 252676 DEBUG os_vif [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:52:4f,bridge_name='br-int',has_traffic_filtering=True,id=792f51ec-051b-472a-bfc0-65b93275a823,network=Network(31e2c386-2e8c-4f03-82cf-3176ce6f5a71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap792f51ec-05') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb  2 05:12:41 np0005604790 nova_compute[252672]: 2026-02-02 10:12:41.554 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:12:41 np0005604790 nova_compute[252672]: 2026-02-02 10:12:41.554 252676 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 05:12:41 np0005604790 nova_compute[252672]: 2026-02-02 10:12:41.555 252676 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 05:12:41 np0005604790 nova_compute[252672]: 2026-02-02 10:12:41.560 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:12:41 np0005604790 nova_compute[252672]: 2026-02-02 10:12:41.561 252676 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap792f51ec-05, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 05:12:41 np0005604790 nova_compute[252672]: 2026-02-02 10:12:41.562 252676 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap792f51ec-05, col_values=(('external_ids', {'iface-id': '792f51ec-051b-472a-bfc0-65b93275a823', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b3:52:4f', 'vm-uuid': '51640fdb-9bb5-4927-8293-08caaa532942'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 05:12:41 np0005604790 nova_compute[252672]: 2026-02-02 10:12:41.565 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:12:41 np0005604790 NetworkManager[49024]: <info>  [1770027161.5669] manager: (tap792f51ec-05): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/51)
Feb  2 05:12:41 np0005604790 nova_compute[252672]: 2026-02-02 10:12:41.568 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 05:12:41 np0005604790 nova_compute[252672]: 2026-02-02 10:12:41.576 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:12:41 np0005604790 nova_compute[252672]: 2026-02-02 10:12:41.580 252676 INFO os_vif [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:52:4f,bridge_name='br-int',has_traffic_filtering=True,id=792f51ec-051b-472a-bfc0-65b93275a823,network=Network(31e2c386-2e8c-4f03-82cf-3176ce6f5a71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap792f51ec-05')#033[00m
Feb  2 05:12:41 np0005604790 nova_compute[252672]: 2026-02-02 10:12:41.654 252676 DEBUG nova.virt.libvirt.driver [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 05:12:41 np0005604790 nova_compute[252672]: 2026-02-02 10:12:41.655 252676 DEBUG nova.virt.libvirt.driver [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb  2 05:12:41 np0005604790 nova_compute[252672]: 2026-02-02 10:12:41.655 252676 DEBUG nova.virt.libvirt.driver [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] No VIF found with MAC fa:16:3e:b3:52:4f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb  2 05:12:41 np0005604790 nova_compute[252672]: 2026-02-02 10:12:41.656 252676 INFO nova.virt.libvirt.driver [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 51640fdb-9bb5-4927-8293-08caaa532942] Using config drive#033[00m
Feb  2 05:12:41 np0005604790 nova_compute[252672]: 2026-02-02 10:12:41.690 252676 DEBUG nova.storage.rbd_utils [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] rbd image 51640fdb-9bb5-4927-8293-08caaa532942_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 05:12:42 np0005604790 podman[271922]: 2026-02-02 10:12:42.351130208 +0000 UTC m=+0.065714058 container health_status 29bea7eb4976451e3dfdb1e9a3aba30b10e64cbce131bf8d09548b4a224f8ee8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb  2 05:12:42 np0005604790 nova_compute[252672]: 2026-02-02 10:12:42.413 252676 INFO nova.virt.libvirt.driver [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 51640fdb-9bb5-4927-8293-08caaa532942] Creating config drive at /var/lib/nova/instances/51640fdb-9bb5-4927-8293-08caaa532942/disk.config#033[00m
Feb  2 05:12:42 np0005604790 nova_compute[252672]: 2026-02-02 10:12:42.416 252676 DEBUG oslo_concurrency.processutils [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/51640fdb-9bb5-4927-8293-08caaa532942/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpwlxciubn execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 05:12:42 np0005604790 nova_compute[252672]: 2026-02-02 10:12:42.438 252676 DEBUG nova.network.neutron [req-195b4100-3c2e-44c1-be32-5a25e0d0f9a5 req-d6191ec0-24f2-4222-89ae-9978a4fba110 b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 51640fdb-9bb5-4927-8293-08caaa532942] Updated VIF entry in instance network info cache for port 792f51ec-051b-472a-bfc0-65b93275a823. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 05:12:42 np0005604790 nova_compute[252672]: 2026-02-02 10:12:42.438 252676 DEBUG nova.network.neutron [req-195b4100-3c2e-44c1-be32-5a25e0d0f9a5 req-d6191ec0-24f2-4222-89ae-9978a4fba110 b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 51640fdb-9bb5-4927-8293-08caaa532942] Updating instance_info_cache with network_info: [{"id": "792f51ec-051b-472a-bfc0-65b93275a823", "address": "fa:16:3e:b3:52:4f", "network": {"id": "31e2c386-2e8c-4f03-82cf-3176ce6f5a71", "bridge": "br-int", "label": "tempest-network-smoke--1434795689", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "efbfe697ca674d72b47da5adf3e42c0c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap792f51ec-05", "ovs_interfaceid": "792f51ec-051b-472a-bfc0-65b93275a823", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 05:12:42 np0005604790 nova_compute[252672]: 2026-02-02 10:12:42.459 252676 DEBUG oslo_concurrency.lockutils [req-195b4100-3c2e-44c1-be32-5a25e0d0f9a5 req-d6191ec0-24f2-4222-89ae-9978a4fba110 b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Releasing lock "refresh_cache-51640fdb-9bb5-4927-8293-08caaa532942" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 05:12:42 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:12:42 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:12:42 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:12:42 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:12:42.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:12:42 np0005604790 nova_compute[252672]: 2026-02-02 10:12:42.547 252676 DEBUG oslo_concurrency.processutils [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/51640fdb-9bb5-4927-8293-08caaa532942/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpwlxciubn" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 05:12:42 np0005604790 nova_compute[252672]: 2026-02-02 10:12:42.589 252676 DEBUG nova.storage.rbd_utils [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] rbd image 51640fdb-9bb5-4927-8293-08caaa532942_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb  2 05:12:42 np0005604790 nova_compute[252672]: 2026-02-02 10:12:42.595 252676 DEBUG oslo_concurrency.processutils [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/51640fdb-9bb5-4927-8293-08caaa532942/disk.config 51640fdb-9bb5-4927-8293-08caaa532942_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 05:12:42 np0005604790 nova_compute[252672]: 2026-02-02 10:12:42.776 252676 DEBUG oslo_concurrency.processutils [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/51640fdb-9bb5-4927-8293-08caaa532942/disk.config 51640fdb-9bb5-4927-8293-08caaa532942_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.181s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 05:12:42 np0005604790 nova_compute[252672]: 2026-02-02 10:12:42.778 252676 INFO nova.virt.libvirt.driver [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 51640fdb-9bb5-4927-8293-08caaa532942] Deleting local config drive /var/lib/nova/instances/51640fdb-9bb5-4927-8293-08caaa532942/disk.config because it was imported into RBD.#033[00m
Feb  2 05:12:42 np0005604790 systemd[1]: Starting libvirt secret daemon...
Feb  2 05:12:42 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v999: 353 pgs: 353 active+clean; 88 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb  2 05:12:42 np0005604790 systemd[1]: Started libvirt secret daemon.
Feb  2 05:12:42 np0005604790 kernel: tap792f51ec-05: entered promiscuous mode
Feb  2 05:12:42 np0005604790 NetworkManager[49024]: <info>  [1770027162.8908] manager: (tap792f51ec-05): new Tun device (/org/freedesktop/NetworkManager/Devices/52)
Feb  2 05:12:42 np0005604790 ovn_controller[154631]: 2026-02-02T10:12:42Z|00078|binding|INFO|Claiming lport 792f51ec-051b-472a-bfc0-65b93275a823 for this chassis.
Feb  2 05:12:42 np0005604790 nova_compute[252672]: 2026-02-02 10:12:42.889 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:12:42 np0005604790 ovn_controller[154631]: 2026-02-02T10:12:42Z|00079|binding|INFO|792f51ec-051b-472a-bfc0-65b93275a823: Claiming fa:16:3e:b3:52:4f 10.100.0.4
Feb  2 05:12:42 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:12:42.907 165364 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b3:52:4f 10.100.0.4'], port_security=['fa:16:3e:b3:52:4f 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '51640fdb-9bb5-4927-8293-08caaa532942', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-31e2c386-2e8c-4f03-82cf-3176ce6f5a71', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'efbfe697ca674d72b47da5adf3e42c0c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9a5f7f58-6605-4d49-8d32-d2d771025f9b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a87091be-e2aa-4065-85c2-dc42a077dfe1, chassis=[<ovs.db.idl.Row object at 0x7fa8c6e46640>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa8c6e46640>], logical_port=792f51ec-051b-472a-bfc0-65b93275a823) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 05:12:42 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:12:42.908 165364 INFO neutron.agent.ovn.metadata.agent [-] Port 792f51ec-051b-472a-bfc0-65b93275a823 in datapath 31e2c386-2e8c-4f03-82cf-3176ce6f5a71 bound to our chassis#033[00m
Feb  2 05:12:42 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:12:42.910 165364 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 31e2c386-2e8c-4f03-82cf-3176ce6f5a71#033[00m
Feb  2 05:12:42 np0005604790 systemd-udevd[272013]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 05:12:42 np0005604790 systemd-machined[219024]: New machine qemu-5-instance-0000000d.
Feb  2 05:12:42 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:12:42.923 257524 DEBUG oslo.privsep.daemon [-] privsep: reply[d5249825-1a2f-47ba-be2c-876c81f54da3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:12:42 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:12:42.925 165364 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap31e2c386-21 in ovnmeta-31e2c386-2e8c-4f03-82cf-3176ce6f5a71 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb  2 05:12:42 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:12:42.927 257524 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap31e2c386-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb  2 05:12:42 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:12:42.927 257524 DEBUG oslo.privsep.daemon [-] privsep: reply[7fc06a9b-0ab2-45c2-95ea-03cc013e9f32]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:12:42 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:12:42.929 257524 DEBUG oslo.privsep.daemon [-] privsep: reply[5c128c9a-0235-4e86-bdc4-9538e31d0052]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:12:42 np0005604790 NetworkManager[49024]: <info>  [1770027162.9341] device (tap792f51ec-05): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb  2 05:12:42 np0005604790 systemd[1]: Started Virtual Machine qemu-5-instance-0000000d.
Feb  2 05:12:42 np0005604790 NetworkManager[49024]: <info>  [1770027162.9350] device (tap792f51ec-05): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb  2 05:12:42 np0005604790 nova_compute[252672]: 2026-02-02 10:12:42.934 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:12:42 np0005604790 ovn_controller[154631]: 2026-02-02T10:12:42Z|00080|binding|INFO|Setting lport 792f51ec-051b-472a-bfc0-65b93275a823 ovn-installed in OVS
Feb  2 05:12:42 np0005604790 ovn_controller[154631]: 2026-02-02T10:12:42Z|00081|binding|INFO|Setting lport 792f51ec-051b-472a-bfc0-65b93275a823 up in Southbound
Feb  2 05:12:42 np0005604790 nova_compute[252672]: 2026-02-02 10:12:42.943 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:12:42 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:12:42.943 166028 DEBUG oslo.privsep.daemon [-] privsep: reply[33efa110-8f02-4a6d-88ca-0abec57cf350]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:12:42 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:12:42.966 257524 DEBUG oslo.privsep.daemon [-] privsep: reply[184fdd46-9d26-483c-9f3b-3eadb8647498]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:12:42 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:12:42.995 257582 DEBUG oslo.privsep.daemon [-] privsep: reply[6d2f4f93-4c14-4e49-a379-a310f559dd94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:12:43 np0005604790 systemd-udevd[272018]: Network interface NamePolicy= disabled on kernel command line.
Feb  2 05:12:43 np0005604790 NetworkManager[49024]: <info>  [1770027163.0019] manager: (tap31e2c386-20): new Veth device (/org/freedesktop/NetworkManager/Devices/53)
Feb  2 05:12:43 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:12:43.001 257524 DEBUG oslo.privsep.daemon [-] privsep: reply[02b3f87f-ec3d-4f3a-8b8b-f51c95b6da5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:12:43 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:12:43.029 257582 DEBUG oslo.privsep.daemon [-] privsep: reply[961ce392-6dfc-4b82-9651-94fff089d867]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:12:43 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:12:43.031 257582 DEBUG oslo.privsep.daemon [-] privsep: reply[510fc874-5bbf-44b4-a364-15fd5d102f52]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:12:43 np0005604790 NetworkManager[49024]: <info>  [1770027163.0520] device (tap31e2c386-20): carrier: link connected
Feb  2 05:12:43 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:12:43.062 257582 DEBUG oslo.privsep.daemon [-] privsep: reply[a7720db6-a541-46b4-ac04-f56a364e285c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:12:43 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:12:43.085 257524 DEBUG oslo.privsep.daemon [-] privsep: reply[28664926-93d1-46da-8f3a-01a2fe5a370c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap31e2c386-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:04:3c:52'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 432233, 'reachable_time': 33312, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 272047, 'error': None, 'target': 'ovnmeta-31e2c386-2e8c-4f03-82cf-3176ce6f5a71', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:12:43 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:12:43.106 257524 DEBUG oslo.privsep.daemon [-] privsep: reply[a8390c1b-a828-49c4-8d62-1feda02e75d6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe04:3c52'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 432233, 'tstamp': 432233}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 272048, 'error': None, 'target': 'ovnmeta-31e2c386-2e8c-4f03-82cf-3176ce6f5a71', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:12:43 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:12:43.126 257524 DEBUG oslo.privsep.daemon [-] privsep: reply[c03f8d41-8225-45a7-a85a-10e6b1653532]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap31e2c386-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:04:3c:52'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 432233, 'reachable_time': 33312, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 272049, 'error': None, 'target': 'ovnmeta-31e2c386-2e8c-4f03-82cf-3176ce6f5a71', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:12:43 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:12:43.158 257524 DEBUG oslo.privsep.daemon [-] privsep: reply[817e0024-267b-4fe7-9f27-b802b4cb103b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:12:43 np0005604790 nova_compute[252672]: 2026-02-02 10:12:43.160 252676 DEBUG nova.compute.manager [req-68739d28-22d6-4651-9fb1-809380c8da1c req-192f0d1c-7317-447e-ac26-3b1148cc097f b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 51640fdb-9bb5-4927-8293-08caaa532942] Received event network-vif-plugged-792f51ec-051b-472a-bfc0-65b93275a823 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 05:12:43 np0005604790 nova_compute[252672]: 2026-02-02 10:12:43.161 252676 DEBUG oslo_concurrency.lockutils [req-68739d28-22d6-4651-9fb1-809380c8da1c req-192f0d1c-7317-447e-ac26-3b1148cc097f b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Acquiring lock "51640fdb-9bb5-4927-8293-08caaa532942-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 05:12:43 np0005604790 nova_compute[252672]: 2026-02-02 10:12:43.161 252676 DEBUG oslo_concurrency.lockutils [req-68739d28-22d6-4651-9fb1-809380c8da1c req-192f0d1c-7317-447e-ac26-3b1148cc097f b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Lock "51640fdb-9bb5-4927-8293-08caaa532942-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 05:12:43 np0005604790 nova_compute[252672]: 2026-02-02 10:12:43.161 252676 DEBUG oslo_concurrency.lockutils [req-68739d28-22d6-4651-9fb1-809380c8da1c req-192f0d1c-7317-447e-ac26-3b1148cc097f b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Lock "51640fdb-9bb5-4927-8293-08caaa532942-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 05:12:43 np0005604790 nova_compute[252672]: 2026-02-02 10:12:43.162 252676 DEBUG nova.compute.manager [req-68739d28-22d6-4651-9fb1-809380c8da1c req-192f0d1c-7317-447e-ac26-3b1148cc097f b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 51640fdb-9bb5-4927-8293-08caaa532942] Processing event network-vif-plugged-792f51ec-051b-472a-bfc0-65b93275a823 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb  2 05:12:43 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:12:43.218 257524 DEBUG oslo.privsep.daemon [-] privsep: reply[a5a3c0f0-d691-433f-bb85-e0b16d619b1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:12:43 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:12:43.220 165364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap31e2c386-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 05:12:43 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:12:43.220 165364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb  2 05:12:43 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:12:43.221 165364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap31e2c386-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 05:12:43 np0005604790 nova_compute[252672]: 2026-02-02 10:12:43.223 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:12:43 np0005604790 NetworkManager[49024]: <info>  [1770027163.2243] manager: (tap31e2c386-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/54)
Feb  2 05:12:43 np0005604790 kernel: tap31e2c386-20: entered promiscuous mode
Feb  2 05:12:43 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:12:43.230 165364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap31e2c386-20, col_values=(('external_ids', {'iface-id': 'ce0ea125-e6c2-41cd-b9ad-71cce6387108'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 05:12:43 np0005604790 nova_compute[252672]: 2026-02-02 10:12:43.231 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:12:43 np0005604790 ovn_controller[154631]: 2026-02-02T10:12:43Z|00082|binding|INFO|Releasing lport ce0ea125-e6c2-41cd-b9ad-71cce6387108 from this chassis (sb_readonly=0)
Feb  2 05:12:43 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:12:43.235 165364 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/31e2c386-2e8c-4f03-82cf-3176ce6f5a71.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/31e2c386-2e8c-4f03-82cf-3176ce6f5a71.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb  2 05:12:43 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:12:43.236 257524 DEBUG oslo.privsep.daemon [-] privsep: reply[cf51459d-b078-42d7-83d5-89bde6b34f36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:12:43 np0005604790 nova_compute[252672]: 2026-02-02 10:12:43.238 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:12:43 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:12:43.239 165364 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb  2 05:12:43 np0005604790 ovn_metadata_agent[165359]: global
Feb  2 05:12:43 np0005604790 ovn_metadata_agent[165359]:    log         /dev/log local0 debug
Feb  2 05:12:43 np0005604790 ovn_metadata_agent[165359]:    log-tag     haproxy-metadata-proxy-31e2c386-2e8c-4f03-82cf-3176ce6f5a71
Feb  2 05:12:43 np0005604790 ovn_metadata_agent[165359]:    user        root
Feb  2 05:12:43 np0005604790 ovn_metadata_agent[165359]:    group       root
Feb  2 05:12:43 np0005604790 ovn_metadata_agent[165359]:    maxconn     1024
Feb  2 05:12:43 np0005604790 ovn_metadata_agent[165359]:    pidfile     /var/lib/neutron/external/pids/31e2c386-2e8c-4f03-82cf-3176ce6f5a71.pid.haproxy
Feb  2 05:12:43 np0005604790 ovn_metadata_agent[165359]:    daemon
Feb  2 05:12:43 np0005604790 ovn_metadata_agent[165359]: 
Feb  2 05:12:43 np0005604790 ovn_metadata_agent[165359]: defaults
Feb  2 05:12:43 np0005604790 ovn_metadata_agent[165359]:    log global
Feb  2 05:12:43 np0005604790 ovn_metadata_agent[165359]:    mode http
Feb  2 05:12:43 np0005604790 ovn_metadata_agent[165359]:    option httplog
Feb  2 05:12:43 np0005604790 ovn_metadata_agent[165359]:    option dontlognull
Feb  2 05:12:43 np0005604790 ovn_metadata_agent[165359]:    option http-server-close
Feb  2 05:12:43 np0005604790 ovn_metadata_agent[165359]:    option forwardfor
Feb  2 05:12:43 np0005604790 ovn_metadata_agent[165359]:    retries                 3
Feb  2 05:12:43 np0005604790 ovn_metadata_agent[165359]:    timeout http-request    30s
Feb  2 05:12:43 np0005604790 ovn_metadata_agent[165359]:    timeout connect         30s
Feb  2 05:12:43 np0005604790 ovn_metadata_agent[165359]:    timeout client          32s
Feb  2 05:12:43 np0005604790 ovn_metadata_agent[165359]:    timeout server          32s
Feb  2 05:12:43 np0005604790 ovn_metadata_agent[165359]:    timeout http-keep-alive 30s
Feb  2 05:12:43 np0005604790 ovn_metadata_agent[165359]: 
Feb  2 05:12:43 np0005604790 ovn_metadata_agent[165359]: 
Feb  2 05:12:43 np0005604790 ovn_metadata_agent[165359]: listen listener
Feb  2 05:12:43 np0005604790 ovn_metadata_agent[165359]:    bind 169.254.169.254:80
Feb  2 05:12:43 np0005604790 ovn_metadata_agent[165359]:    server metadata /var/lib/neutron/metadata_proxy
Feb  2 05:12:43 np0005604790 ovn_metadata_agent[165359]:    http-request add-header X-OVN-Network-ID 31e2c386-2e8c-4f03-82cf-3176ce6f5a71
Feb  2 05:12:43 np0005604790 ovn_metadata_agent[165359]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb  2 05:12:43 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:12:43.240 165364 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-31e2c386-2e8c-4f03-82cf-3176ce6f5a71', 'env', 'PROCESS_TAG=haproxy-31e2c386-2e8c-4f03-82cf-3176ce6f5a71', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/31e2c386-2e8c-4f03-82cf-3176ce6f5a71.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb  2 05:12:43 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:12:43 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:12:43 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:12:43.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:12:43 np0005604790 nova_compute[252672]: 2026-02-02 10:12:43.531 252676 DEBUG nova.virt.driver [None req-e8d4f08b-73c0-49f9-aab1-c10f2abef40e - - - - - -] Emitting event <LifecycleEvent: 1770027163.5310159, 51640fdb-9bb5-4927-8293-08caaa532942 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 05:12:43 np0005604790 nova_compute[252672]: 2026-02-02 10:12:43.532 252676 INFO nova.compute.manager [None req-e8d4f08b-73c0-49f9-aab1-c10f2abef40e - - - - - -] [instance: 51640fdb-9bb5-4927-8293-08caaa532942] VM Started (Lifecycle Event)#033[00m
Feb  2 05:12:43 np0005604790 nova_compute[252672]: 2026-02-02 10:12:43.538 252676 DEBUG nova.compute.manager [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 51640fdb-9bb5-4927-8293-08caaa532942] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb  2 05:12:43 np0005604790 nova_compute[252672]: 2026-02-02 10:12:43.543 252676 DEBUG nova.virt.libvirt.driver [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 51640fdb-9bb5-4927-8293-08caaa532942] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb  2 05:12:43 np0005604790 nova_compute[252672]: 2026-02-02 10:12:43.548 252676 INFO nova.virt.libvirt.driver [-] [instance: 51640fdb-9bb5-4927-8293-08caaa532942] Instance spawned successfully.#033[00m
Feb  2 05:12:43 np0005604790 nova_compute[252672]: 2026-02-02 10:12:43.548 252676 DEBUG nova.virt.libvirt.driver [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 51640fdb-9bb5-4927-8293-08caaa532942] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb  2 05:12:43 np0005604790 nova_compute[252672]: 2026-02-02 10:12:43.556 252676 DEBUG nova.compute.manager [None req-e8d4f08b-73c0-49f9-aab1-c10f2abef40e - - - - - -] [instance: 51640fdb-9bb5-4927-8293-08caaa532942] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 05:12:43 np0005604790 nova_compute[252672]: 2026-02-02 10:12:43.560 252676 DEBUG nova.compute.manager [None req-e8d4f08b-73c0-49f9-aab1-c10f2abef40e - - - - - -] [instance: 51640fdb-9bb5-4927-8293-08caaa532942] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 05:12:43 np0005604790 nova_compute[252672]: 2026-02-02 10:12:43.574 252676 DEBUG nova.virt.libvirt.driver [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 51640fdb-9bb5-4927-8293-08caaa532942] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 05:12:43 np0005604790 nova_compute[252672]: 2026-02-02 10:12:43.574 252676 DEBUG nova.virt.libvirt.driver [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 51640fdb-9bb5-4927-8293-08caaa532942] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 05:12:43 np0005604790 nova_compute[252672]: 2026-02-02 10:12:43.575 252676 DEBUG nova.virt.libvirt.driver [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 51640fdb-9bb5-4927-8293-08caaa532942] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 05:12:43 np0005604790 nova_compute[252672]: 2026-02-02 10:12:43.575 252676 DEBUG nova.virt.libvirt.driver [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 51640fdb-9bb5-4927-8293-08caaa532942] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 05:12:43 np0005604790 nova_compute[252672]: 2026-02-02 10:12:43.576 252676 DEBUG nova.virt.libvirt.driver [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 51640fdb-9bb5-4927-8293-08caaa532942] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 05:12:43 np0005604790 nova_compute[252672]: 2026-02-02 10:12:43.576 252676 DEBUG nova.virt.libvirt.driver [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 51640fdb-9bb5-4927-8293-08caaa532942] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb  2 05:12:43 np0005604790 podman[272123]: 2026-02-02 10:12:43.603491606 +0000 UTC m=+0.048082272 container create 94ab34e156d899c5dccf08cc1987499e0916f0581aa92549dd5e0fa5e98eb192 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc, name=neutron-haproxy-ovnmeta-31e2c386-2e8c-4f03-82cf-3176ce6f5a71, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb  2 05:12:43 np0005604790 nova_compute[252672]: 2026-02-02 10:12:43.607 252676 INFO nova.compute.manager [None req-e8d4f08b-73c0-49f9-aab1-c10f2abef40e - - - - - -] [instance: 51640fdb-9bb5-4927-8293-08caaa532942] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb  2 05:12:43 np0005604790 nova_compute[252672]: 2026-02-02 10:12:43.608 252676 DEBUG nova.virt.driver [None req-e8d4f08b-73c0-49f9-aab1-c10f2abef40e - - - - - -] Emitting event <LifecycleEvent: 1770027163.53124, 51640fdb-9bb5-4927-8293-08caaa532942 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 05:12:43 np0005604790 nova_compute[252672]: 2026-02-02 10:12:43.608 252676 INFO nova.compute.manager [None req-e8d4f08b-73c0-49f9-aab1-c10f2abef40e - - - - - -] [instance: 51640fdb-9bb5-4927-8293-08caaa532942] VM Paused (Lifecycle Event)#033[00m
Feb  2 05:12:43 np0005604790 nova_compute[252672]: 2026-02-02 10:12:43.638 252676 DEBUG nova.compute.manager [None req-e8d4f08b-73c0-49f9-aab1-c10f2abef40e - - - - - -] [instance: 51640fdb-9bb5-4927-8293-08caaa532942] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 05:12:43 np0005604790 nova_compute[252672]: 2026-02-02 10:12:43.640 252676 DEBUG nova.virt.driver [None req-e8d4f08b-73c0-49f9-aab1-c10f2abef40e - - - - - -] Emitting event <LifecycleEvent: 1770027163.5425262, 51640fdb-9bb5-4927-8293-08caaa532942 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 05:12:43 np0005604790 nova_compute[252672]: 2026-02-02 10:12:43.640 252676 INFO nova.compute.manager [None req-e8d4f08b-73c0-49f9-aab1-c10f2abef40e - - - - - -] [instance: 51640fdb-9bb5-4927-8293-08caaa532942] VM Resumed (Lifecycle Event)#033[00m
Feb  2 05:12:43 np0005604790 systemd[1]: Started libpod-conmon-94ab34e156d899c5dccf08cc1987499e0916f0581aa92549dd5e0fa5e98eb192.scope.
Feb  2 05:12:43 np0005604790 nova_compute[252672]: 2026-02-02 10:12:43.650 252676 INFO nova.compute.manager [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 51640fdb-9bb5-4927-8293-08caaa532942] Took 7.67 seconds to spawn the instance on the hypervisor.#033[00m
Feb  2 05:12:43 np0005604790 nova_compute[252672]: 2026-02-02 10:12:43.651 252676 DEBUG nova.compute.manager [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 51640fdb-9bb5-4927-8293-08caaa532942] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 05:12:43 np0005604790 nova_compute[252672]: 2026-02-02 10:12:43.661 252676 DEBUG nova.compute.manager [None req-e8d4f08b-73c0-49f9-aab1-c10f2abef40e - - - - - -] [instance: 51640fdb-9bb5-4927-8293-08caaa532942] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 05:12:43 np0005604790 nova_compute[252672]: 2026-02-02 10:12:43.665 252676 DEBUG nova.compute.manager [None req-e8d4f08b-73c0-49f9-aab1-c10f2abef40e - - - - - -] [instance: 51640fdb-9bb5-4927-8293-08caaa532942] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb  2 05:12:43 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:12:43 np0005604790 podman[272123]: 2026-02-02 10:12:43.57640663 +0000 UTC m=+0.020997356 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc
Feb  2 05:12:43 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/839808039b3e351ef9de4c3c5d5c47441143747691389c9e7b3bf1ec38036aae/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb  2 05:12:43 np0005604790 podman[272123]: 2026-02-02 10:12:43.688227277 +0000 UTC m=+0.132817973 container init 94ab34e156d899c5dccf08cc1987499e0916f0581aa92549dd5e0fa5e98eb192 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc, name=neutron-haproxy-ovnmeta-31e2c386-2e8c-4f03-82cf-3176ce6f5a71, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 05:12:43 np0005604790 podman[272123]: 2026-02-02 10:12:43.6936316 +0000 UTC m=+0.138222296 container start 94ab34e156d899c5dccf08cc1987499e0916f0581aa92549dd5e0fa5e98eb192 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc, name=neutron-haproxy-ovnmeta-31e2c386-2e8c-4f03-82cf-3176ce6f5a71, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Feb  2 05:12:43 np0005604790 nova_compute[252672]: 2026-02-02 10:12:43.707 252676 INFO nova.compute.manager [None req-e8d4f08b-73c0-49f9-aab1-c10f2abef40e - - - - - -] [instance: 51640fdb-9bb5-4927-8293-08caaa532942] During sync_power_state the instance has a pending task (spawning). Skip.
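[editor's note] The sequence above is nova reconciling the libvirt "Resumed" lifecycle event against the database: the hypervisor reports VM power_state 1 (running) while the DB still holds 0, but the in-flight spawning task wins and the sync is skipped. A minimal Python sketch of that decision, under simplified names (the real logic is handle_lifecycle_event at nova/compute/manager.py:1396 as logged; the Instance class and constants here are illustrative, not nova's actual objects):

    from dataclasses import dataclass
    from typing import Optional

    POWER_NOSTATE = 0   # DB power_state 0 in the log
    POWER_RUNNING = 1   # VM power_state 1 in the log

    @dataclass
    class Instance:
        uuid: str
        vm_state: str
        task_state: Optional[str]
        power_state: int

        def save(self):
            pass  # stand-in for the DB write

    def handle_lifecycle_event(instance: Instance, vm_power_state: int) -> None:
        # An in-flight task (here "spawning") owns the instance state;
        # syncing now would race with the build path, so skip.
        if instance.task_state is not None:
            print(f"instance has a pending task ({instance.task_state}). Skip.")
            return
        # Only write the DB when hypervisor and DB actually disagree.
        if instance.power_state != vm_power_state:
            instance.power_state = vm_power_state
            instance.save()

    handle_lifecycle_event(
        Instance("51640fdb-9bb5-4927-8293-08caaa532942",
                 "building", "spawning", POWER_NOSTATE),
        POWER_RUNNING)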
Feb  2 05:12:43 np0005604790 neutron-haproxy-ovnmeta-31e2c386-2e8c-4f03-82cf-3176ce6f5a71[272138]: [NOTICE]   (272143) : New worker (272145) forked
Feb  2 05:12:43 np0005604790 neutron-haproxy-ovnmeta-31e2c386-2e8c-4f03-82cf-3176ce6f5a71[272138]: [NOTICE]   (272143) : Loading success.
Feb  2 05:12:43 np0005604790 nova_compute[252672]: 2026-02-02 10:12:43.728 252676 INFO nova.compute.manager [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 51640fdb-9bb5-4927-8293-08caaa532942] Took 8.70 seconds to build instance.
Feb  2 05:12:43 np0005604790 nova_compute[252672]: 2026-02-02 10:12:43.744 252676 DEBUG oslo_concurrency.lockutils [None req-dc054f5a-d3e4-44a6-add6-9e34538d9f0c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Lock "51640fdb-9bb5-4927-8293-08caaa532942" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.851s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 05:12:44 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:12:44 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:12:44 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:12:44.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:12:44 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1000: 353 pgs: 353 active+clean; 88 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Feb  2 05:12:44 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:12:44] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Feb  2 05:12:44 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:12:44] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Feb  2 05:12:45 np0005604790 nova_compute[252672]: 2026-02-02 10:12:45.252 252676 DEBUG nova.compute.manager [req-46d0b18e-963c-4b23-9d70-7c91f6a39b91 req-0cbf09c0-0c88-42a8-bee4-9af1509c75ad b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 51640fdb-9bb5-4927-8293-08caaa532942] Received event network-vif-plugged-792f51ec-051b-472a-bfc0-65b93275a823 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb  2 05:12:45 np0005604790 nova_compute[252672]: 2026-02-02 10:12:45.253 252676 DEBUG oslo_concurrency.lockutils [req-46d0b18e-963c-4b23-9d70-7c91f6a39b91 req-0cbf09c0-0c88-42a8-bee4-9af1509c75ad b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Acquiring lock "51640fdb-9bb5-4927-8293-08caaa532942-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 05:12:45 np0005604790 nova_compute[252672]: 2026-02-02 10:12:45.253 252676 DEBUG oslo_concurrency.lockutils [req-46d0b18e-963c-4b23-9d70-7c91f6a39b91 req-0cbf09c0-0c88-42a8-bee4-9af1509c75ad b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Lock "51640fdb-9bb5-4927-8293-08caaa532942-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 05:12:45 np0005604790 nova_compute[252672]: 2026-02-02 10:12:45.254 252676 DEBUG oslo_concurrency.lockutils [req-46d0b18e-963c-4b23-9d70-7c91f6a39b91 req-0cbf09c0-0c88-42a8-bee4-9af1509c75ad b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Lock "51640fdb-9bb5-4927-8293-08caaa532942-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 05:12:45 np0005604790 nova_compute[252672]: 2026-02-02 10:12:45.254 252676 DEBUG nova.compute.manager [req-46d0b18e-963c-4b23-9d70-7c91f6a39b91 req-0cbf09c0-0c88-42a8-bee4-9af1509c75ad b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 51640fdb-9bb5-4927-8293-08caaa532942] No waiting events found dispatching network-vif-plugged-792f51ec-051b-472a-bfc0-65b93275a823 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb  2 05:12:45 np0005604790 nova_compute[252672]: 2026-02-02 10:12:45.255 252676 WARNING nova.compute.manager [req-46d0b18e-963c-4b23-9d70-7c91f6a39b91 req-0cbf09c0-0c88-42a8-bee4-9af1509c75ad b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 51640fdb-9bb5-4927-8293-08caaa532942] Received unexpected event network-vif-plugged-792f51ec-051b-472a-bfc0-65b93275a823 for instance with vm_state active and task_state None.
Feb  2 05:12:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:12:45.384 165364 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 05:12:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:12:45.385 165364 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 05:12:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:12:45.385 165364 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 05:12:45 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:12:45 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.002000053s ======
Feb  2 05:12:45 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:12:45.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
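[editor's note] The radosgw "beast:" lines repeat a fixed access-log layout: client, user, timestamp, request, status, bytes, latency. A small hedged parser for that layout, written against the exact lines above (the regex encodes an assumption that these fields stay positionally stable, not a documented format):

    import re

    BEAST = re.compile(
        r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<req>[^"]+)" (?P<status>\d+) '
        r'(?P<bytes>\d+) .* latency=(?P<latency>[\d.]+)s')

    line = ('beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous '
            '[02/Feb/2026:10:12:45.501 +0000] "HEAD / HTTP/1.0" 200 0 '
            '- - - latency=0.002000053s')
    m = BEAST.search(line)
    assert m and m.group("status") == "200"
    print(m.group("client"), m.group("latency"))  # 192.168.122.102 0.002000053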
Feb  2 05:12:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:12:45 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:12:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:12:45 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:12:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:12:45 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:12:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:12:46 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:12:46 np0005604790 nova_compute[252672]: 2026-02-02 10:12:46.500 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:12:46 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:12:46 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:12:46 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:12:46.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:12:46 np0005604790 nova_compute[252672]: 2026-02-02 10:12:46.564 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:12:46 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1001: 353 pgs: 353 active+clean; 88 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Feb  2 05:12:47 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:12:47.172Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb  2 05:12:47 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:12:47.174Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
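[editor's note] The dispatcher errors above show Alertmanager's ceph-dashboard webhook timing out against port 8443 on compute-1/compute-2. The receiving side of that integration only has to accept a JSON POST whose body carries an "alerts" list; a minimal hedged stand-in receiver (port and path copied from the log, everything else illustrative), useful for checking reachability independently of the dashboard:

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Receiver(BaseHTTPRequestHandler):
        def do_POST(self):
            if self.path != "/api/prometheus_receiver":
                self.send_error(404)
                return
            body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
            # Alertmanager webhook payloads are JSON with an "alerts" list.
            print(json.loads(body).get("alerts", []))
            self.send_response(200)
            self.end_headers()

    HTTPServer(("", 8443), Receiver).serve_forever()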
Feb  2 05:12:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 05:12:47 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 05:12:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:12:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:12:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:12:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:12:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:12:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:12:47 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:12:47 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:12:47 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:12:47.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:12:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:12:48 np0005604790 nova_compute[252672]: 2026-02-02 10:12:48.008 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:12:48 np0005604790 ovn_controller[154631]: 2026-02-02T10:12:48Z|00083|binding|INFO|Releasing lport ce0ea125-e6c2-41cd-b9ad-71cce6387108 from this chassis (sb_readonly=0)
Feb  2 05:12:48 np0005604790 NetworkManager[49024]: <info>  [1770027168.0102] manager: (patch-br-int-to-provnet-3738ab71-03c6-44c1-bc4f-10cf3e96782e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/55)
Feb  2 05:12:48 np0005604790 NetworkManager[49024]: <info>  [1770027168.0116] manager: (patch-provnet-3738ab71-03c6-44c1-bc4f-10cf3e96782e-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/56)
Feb  2 05:12:48 np0005604790 nova_compute[252672]: 2026-02-02 10:12:48.024 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:12:48 np0005604790 ovn_controller[154631]: 2026-02-02T10:12:48Z|00084|binding|INFO|Releasing lport ce0ea125-e6c2-41cd-b9ad-71cce6387108 from this chassis (sb_readonly=0)
Feb  2 05:12:48 np0005604790 nova_compute[252672]: 2026-02-02 10:12:48.032 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:12:48 np0005604790 nova_compute[252672]: 2026-02-02 10:12:48.517 252676 DEBUG nova.compute.manager [req-24d89faf-1415-4cba-a5fb-a34eb2d6aad2 req-88f9b940-b9bd-44e4-a865-93adebe07f1b b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 51640fdb-9bb5-4927-8293-08caaa532942] Received event network-changed-792f51ec-051b-472a-bfc0-65b93275a823 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb  2 05:12:48 np0005604790 nova_compute[252672]: 2026-02-02 10:12:48.518 252676 DEBUG nova.compute.manager [req-24d89faf-1415-4cba-a5fb-a34eb2d6aad2 req-88f9b940-b9bd-44e4-a865-93adebe07f1b b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 51640fdb-9bb5-4927-8293-08caaa532942] Refreshing instance network info cache due to event network-changed-792f51ec-051b-472a-bfc0-65b93275a823. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb  2 05:12:48 np0005604790 nova_compute[252672]: 2026-02-02 10:12:48.518 252676 DEBUG oslo_concurrency.lockutils [req-24d89faf-1415-4cba-a5fb-a34eb2d6aad2 req-88f9b940-b9bd-44e4-a865-93adebe07f1b b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Acquiring lock "refresh_cache-51640fdb-9bb5-4927-8293-08caaa532942" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb  2 05:12:48 np0005604790 nova_compute[252672]: 2026-02-02 10:12:48.518 252676 DEBUG oslo_concurrency.lockutils [req-24d89faf-1415-4cba-a5fb-a34eb2d6aad2 req-88f9b940-b9bd-44e4-a865-93adebe07f1b b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Acquired lock "refresh_cache-51640fdb-9bb5-4927-8293-08caaa532942" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb  2 05:12:48 np0005604790 nova_compute[252672]: 2026-02-02 10:12:48.519 252676 DEBUG nova.network.neutron [req-24d89faf-1415-4cba-a5fb-a34eb2d6aad2 req-88f9b940-b9bd-44e4-a865-93adebe07f1b b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 51640fdb-9bb5-4927-8293-08caaa532942] Refreshing network info cache for port 792f51ec-051b-472a-bfc0-65b93275a823 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb  2 05:12:48 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:12:48 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:12:48 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:12:48.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:12:48 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1002: 353 pgs: 353 active+clean; 88 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Feb  2 05:12:49 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:12:49 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:12:49 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:12:49.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:12:50 np0005604790 nova_compute[252672]: 2026-02-02 10:12:50.036 252676 DEBUG nova.network.neutron [req-24d89faf-1415-4cba-a5fb-a34eb2d6aad2 req-88f9b940-b9bd-44e4-a865-93adebe07f1b b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 51640fdb-9bb5-4927-8293-08caaa532942] Updated VIF entry in instance network info cache for port 792f51ec-051b-472a-bfc0-65b93275a823. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb  2 05:12:50 np0005604790 nova_compute[252672]: 2026-02-02 10:12:50.037 252676 DEBUG nova.network.neutron [req-24d89faf-1415-4cba-a5fb-a34eb2d6aad2 req-88f9b940-b9bd-44e4-a865-93adebe07f1b b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 51640fdb-9bb5-4927-8293-08caaa532942] Updating instance_info_cache with network_info: [{"id": "792f51ec-051b-472a-bfc0-65b93275a823", "address": "fa:16:3e:b3:52:4f", "network": {"id": "31e2c386-2e8c-4f03-82cf-3176ce6f5a71", "bridge": "br-int", "label": "tempest-network-smoke--1434795689", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "efbfe697ca674d72b47da5adf3e42c0c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap792f51ec-05", "ovs_interfaceid": "792f51ec-051b-472a-bfc0-65b93275a823", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb  2 05:12:50 np0005604790 nova_compute[252672]: 2026-02-02 10:12:50.091 252676 DEBUG oslo_concurrency.lockutils [req-24d89faf-1415-4cba-a5fb-a34eb2d6aad2 req-88f9b940-b9bd-44e4-a865-93adebe07f1b b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Releasing lock "refresh_cache-51640fdb-9bb5-4927-8293-08caaa532942" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
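[editor's note] The instance_info_cache payload two lines above is nova's network_info model: a list of VIFs, each carrying nested subnets, fixed IPs, and attached floating IPs. A hedged walk over a trimmed copy of that exact structure, pulling out the addresses seen in the log:

    # network_info as logged for port 792f51ec-051b-472a-bfc0-65b93275a823,
    # trimmed to the fields this walk actually touches.
    network_info = [{
        "id": "792f51ec-051b-472a-bfc0-65b93275a823",
        "address": "fa:16:3e:b3:52:4f",
        "network": {"subnets": [{
            "cidr": "10.100.0.0/28",
            "ips": [{"address": "10.100.0.4", "type": "fixed",
                     "floating_ips": [{"address": "192.168.122.186",
                                       "type": "floating"}]}],
        }]},
    }]

    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                floats = [f["address"] for f in ip.get("floating_ips", [])]
                print(vif["id"], ip["address"], floats)
    # -> 792f51ec-051b-472a-bfc0-65b93275a823 10.100.0.4 ['192.168.122.186']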
Feb  2 05:12:50 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:12:50 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:12:50 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:12:50.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:12:50 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1003: 353 pgs: 353 active+clean; 88 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Feb  2 05:12:51 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:12:50 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:12:51 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:12:50 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:12:51 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:12:50 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:12:51 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:12:51 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:12:51 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:12:51 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:12:51 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:12:51.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:12:51 np0005604790 nova_compute[252672]: 2026-02-02 10:12:51.563 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:12:51 np0005604790 nova_compute[252672]: 2026-02-02 10:12:51.567 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:12:52 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:12:52 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:12:52 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:12:52 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:12:52.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:12:52 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1004: 353 pgs: 353 active+clean; 88 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Feb  2 05:12:53 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:12:53 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:12:53 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:12:53.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:12:54 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:12:54 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:12:54 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:12:54.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:12:54 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1005: 353 pgs: 353 active+clean; 88 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Feb  2 05:12:54 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:12:54] "GET /metrics HTTP/1.1" 200 48476 "" "Prometheus/2.51.0"
Feb  2 05:12:54 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:12:54] "GET /metrics HTTP/1.1" 200 48476 "" "Prometheus/2.51.0"
Feb  2 05:12:55 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 05:12:55 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/232670626' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Feb  2 05:12:55 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 05:12:55 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/232670626' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Feb  2 05:12:55 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:12:55 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:12:55 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:12:55.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:12:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:12:55 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:12:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:12:55 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:12:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:12:55 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:12:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:12:56 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:12:56 np0005604790 ovn_controller[154631]: 2026-02-02T10:12:56Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b3:52:4f 10.100.0.4
Feb  2 05:12:56 np0005604790 ovn_controller[154631]: 2026-02-02T10:12:56Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b3:52:4f 10.100.0.4
Feb  2 05:12:56 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:12:56 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:12:56 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:12:56.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:12:56 np0005604790 nova_compute[252672]: 2026-02-02 10:12:56.568 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb  2 05:12:56 np0005604790 nova_compute[252672]: 2026-02-02 10:12:56.570 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb  2 05:12:56 np0005604790 nova_compute[252672]: 2026-02-02 10:12:56.570 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb  2 05:12:56 np0005604790 nova_compute[252672]: 2026-02-02 10:12:56.570 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb  2 05:12:56 np0005604790 nova_compute[252672]: 2026-02-02 10:12:56.588 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:12:56 np0005604790 nova_compute[252672]: 2026-02-02 10:12:56.589 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
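[editor's note] The six lines above trace the python-ovs reconnect state machine: after roughly 5 s of silence on the OVSDB socket the client sends an inactivity probe and enters IDLE, then returns to ACTIVE when the server answers. A hedged toy version of that probe logic (the interval and state names mirror the log; the real implementation is ovs/reconnect.py as referenced there):

    import time

    class Probe:
        """Toy inactivity-probe FSM: ACTIVE -> IDLE -> ACTIVE or DEAD."""
        INTERVAL = 5.0  # ~5000 ms, matching "idle 5003 ms" in the log

        def __init__(self):
            self.state = "ACTIVE"
            self.last_rx = time.monotonic()

        def tick(self, send_probe):
            idle = time.monotonic() - self.last_rx
            if self.state == "ACTIVE" and idle >= self.INTERVAL:
                send_probe()          # "sending inactivity probe"
                self.state = "IDLE"   # "entering IDLE"
            elif self.state == "IDLE" and idle >= 2 * self.INTERVAL:
                self.state = "DEAD"   # no reply: tear down and reconnect

        def received(self):
            self.last_rx = time.monotonic()
            self.state = "ACTIVE"     # "entering ACTIVE"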
Feb  2 05:12:56 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1006: 353 pgs: 353 active+clean; 88 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 69 op/s
Feb  2 05:12:57 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:12:57.175Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:12:57 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:12:57 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:12:57 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:12:57 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:12:57.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:12:58 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:12:58 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:12:58 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:12:58.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:12:58 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1007: 353 pgs: 353 active+clean; 121 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 135 op/s
Feb  2 05:12:59 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:12:59 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:12:59 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:12:59.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:13:00 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:13:00 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:13:00 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:13:00.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:13:00 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1008: 353 pgs: 353 active+clean; 121 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Feb  2 05:13:00 np0005604790 nova_compute[252672]: 2026-02-02 10:13:00.900 252676 INFO nova.compute.manager [None req-f9b63266-8427-463f-a8fa-90c3311e0b83 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 51640fdb-9bb5-4927-8293-08caaa532942] Get console output
Feb  2 05:13:00 np0005604790 nova_compute[252672]: 2026-02-02 10:13:00.906 258300 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
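[editor's note] The "can't concat NoneType to bytes" message above is logged and deliberately swallowed by nova.privsep.libvirt when a read from the instance's console pty yields nothing usable instead of b''. A hedged sketch of the defensive accumulation pattern that avoids that TypeError (the helper name and limits are illustrative, not nova's actual code):

    import os

    def read_console(fd: int, limit: int = 4096) -> bytes:
        """Accumulate console output, tolerating empty/absent reads."""
        data = b""
        while len(data) < limit:
            try:
                chunk = os.read(fd, 1024)
            except OSError:       # pty closed underneath us
                break
            if not chunk:         # nothing more to read: stop cleanly
                break
            data += chunk         # only ever concatenates bytes to bytes
        return data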
Feb  2 05:13:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:13:00 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:13:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:13:00 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:13:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:13:00 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:13:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:13:01 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:13:01 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:13:01 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:13:01 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:13:01.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:13:01 np0005604790 nova_compute[252672]: 2026-02-02 10:13:01.589 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:13:01 np0005604790 nova_compute[252672]: 2026-02-02 10:13:01.591 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:13:01 np0005604790 ovn_controller[154631]: 2026-02-02T10:13:01Z|00085|binding|INFO|Releasing lport ce0ea125-e6c2-41cd-b9ad-71cce6387108 from this chassis (sb_readonly=0)
Feb  2 05:13:01 np0005604790 nova_compute[252672]: 2026-02-02 10:13:01.728 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:13:01 np0005604790 ovn_controller[154631]: 2026-02-02T10:13:01Z|00086|binding|INFO|Releasing lport ce0ea125-e6c2-41cd-b9ad-71cce6387108 from this chassis (sb_readonly=0)
Feb  2 05:13:01 np0005604790 nova_compute[252672]: 2026-02-02 10:13:01.747 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:13:01 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Feb  2 05:13:01 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Feb  2 05:13:01 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:13:01 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Feb  2 05:13:01 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:13:01 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Feb  2 05:13:01 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:13:01 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:13:01 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 05:13:01 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:13:01 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 05:13:01 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:13:01 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Feb  2 05:13:01 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Feb  2 05:13:01 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 05:13:01 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 05:13:01 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 05:13:01 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb  2 05:13:01 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 05:13:02 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:13:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Feb  2 05:13:02 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:13:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 05:13:02 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb  2 05:13:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 05:13:02 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb  2 05:13:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 05:13:02 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 05:13:02 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:13:02 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:13:02 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:13:02 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:13:02 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:13:02 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:13:02 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Feb  2 05:13:02 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb  2 05:13:02 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:13:02 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:13:02 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb  2 05:13:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 05:13:02 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
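[editor's note] The audit entries above are mgr-issued mon_commands carrying a JSON body, such as "osd blocklist ls". The same call can be reproduced from Python through the rados binding's mon_command; a hedged sketch (the conffile path is an assumption about this deployment's client setup):

    import json
    import rados

    # Assumed client config; adjust conffile/keyring for the deployment.
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    try:
        cmd = json.dumps({"prefix": "osd blocklist ls", "format": "json"})
        ret, outbuf, outs = cluster.mon_command(cmd, b"")
        print(ret, json.loads(outbuf or b"[]"))
    finally:
        cluster.shutdown()

The mon logs such a call exactly as above: handle_command, then an audit "dispatch" line quoting the JSON.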
Feb  2 05:13:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:13:02 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:13:02 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:13:02 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:13:02.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:13:02 np0005604790 podman[272442]: 2026-02-02 10:13:02.613281503 +0000 UTC m=+0.047185449 container create 2e7eeac7be6474118c352ef12e7a173a8b5554853ebecfc2801dd88b3c28f997 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_nash, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True)
Feb  2 05:13:02 np0005604790 systemd[1]: Started libpod-conmon-2e7eeac7be6474118c352ef12e7a173a8b5554853ebecfc2801dd88b3c28f997.scope.
Feb  2 05:13:02 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:13:02 np0005604790 podman[272442]: 2026-02-02 10:13:02.592275828 +0000 UTC m=+0.026179754 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:13:02 np0005604790 podman[272442]: 2026-02-02 10:13:02.698834756 +0000 UTC m=+0.132738712 container init 2e7eeac7be6474118c352ef12e7a173a8b5554853ebecfc2801dd88b3c28f997 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_nash, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 05:13:02 np0005604790 podman[272442]: 2026-02-02 10:13:02.70543394 +0000 UTC m=+0.139337846 container start 2e7eeac7be6474118c352ef12e7a173a8b5554853ebecfc2801dd88b3c28f997 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_nash, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Feb  2 05:13:02 np0005604790 podman[272442]: 2026-02-02 10:13:02.709854437 +0000 UTC m=+0.143758393 container attach 2e7eeac7be6474118c352ef12e7a173a8b5554853ebecfc2801dd88b3c28f997 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_nash, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid)
Feb  2 05:13:02 np0005604790 relaxed_nash[272459]: 167 167
Feb  2 05:13:02 np0005604790 systemd[1]: libpod-2e7eeac7be6474118c352ef12e7a173a8b5554853ebecfc2801dd88b3c28f997.scope: Deactivated successfully.
Feb  2 05:13:02 np0005604790 podman[272442]: 2026-02-02 10:13:02.710917735 +0000 UTC m=+0.144821641 container died 2e7eeac7be6474118c352ef12e7a173a8b5554853ebecfc2801dd88b3c28f997 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_nash, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 05:13:02 np0005604790 systemd[1]: var-lib-containers-storage-overlay-d0085182d7879366e4edc35558fff0a5f7eaa7ada080cb7e7d5af2754cf80b11-merged.mount: Deactivated successfully.
Feb  2 05:13:02 np0005604790 podman[272442]: 2026-02-02 10:13:02.762619143 +0000 UTC m=+0.196523049 container remove 2e7eeac7be6474118c352ef12e7a173a8b5554853ebecfc2801dd88b3c28f997 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_nash, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 05:13:02 np0005604790 systemd[1]: libpod-conmon-2e7eeac7be6474118c352ef12e7a173a8b5554853ebecfc2801dd88b3c28f997.scope: Deactivated successfully.
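[editor's note] The create/init/start/attach/died/remove sequence above is one short-lived cephadm helper container (relaxed_nash) running to completion and being cleaned up. That lifecycle can also be watched from podman's event stream; a hedged sketch shelling out from Python (the JSON field names "Status"/"Name" are an assumption about podman's event output format):

    import json
    import subprocess

    # Stream container lifecycle events as JSON lines; --filter and
    # --format are standard options of `podman events`.
    proc = subprocess.Popen(
        ["podman", "events", "--format", "json", "--filter", "event=died"],
        stdout=subprocess.PIPE, text=True)
    for line in proc.stdout:
        ev = json.loads(line)
        print(ev.get("Status"), ev.get("Name"))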
Feb  2 05:13:02 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1009: 353 pgs: 353 active+clean; 121 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Feb  2 05:13:02 np0005604790 podman[272484]: 2026-02-02 10:13:02.9474483 +0000 UTC m=+0.038679954 container create aaceb2b6fa430ef56db657588518625dd39faa4f60c1bd3731436e93a05204dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_dubinsky, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 05:13:02 np0005604790 systemd[1]: Started libpod-conmon-aaceb2b6fa430ef56db657588518625dd39faa4f60c1bd3731436e93a05204dc.scope.
Feb  2 05:13:03 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:13:03 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c13454001df33d48e4bc97d707b9ccfcbef471904f7cf1f112c037d519b5500e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 05:13:03 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c13454001df33d48e4bc97d707b9ccfcbef471904f7cf1f112c037d519b5500e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 05:13:03 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c13454001df33d48e4bc97d707b9ccfcbef471904f7cf1f112c037d519b5500e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 05:13:03 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c13454001df33d48e4bc97d707b9ccfcbef471904f7cf1f112c037d519b5500e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 05:13:03 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c13454001df33d48e4bc97d707b9ccfcbef471904f7cf1f112c037d519b5500e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 05:13:03 np0005604790 podman[272484]: 2026-02-02 10:13:02.931357515 +0000 UTC m=+0.022589189 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:13:03 np0005604790 podman[272484]: 2026-02-02 10:13:03.02987802 +0000 UTC m=+0.121109704 container init aaceb2b6fa430ef56db657588518625dd39faa4f60c1bd3731436e93a05204dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_dubinsky, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Feb  2 05:13:03 np0005604790 podman[272484]: 2026-02-02 10:13:03.036549196 +0000 UTC m=+0.127780850 container start aaceb2b6fa430ef56db657588518625dd39faa4f60c1bd3731436e93a05204dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Feb  2 05:13:03 np0005604790 podman[272484]: 2026-02-02 10:13:03.039842454 +0000 UTC m=+0.131074138 container attach aaceb2b6fa430ef56db657588518625dd39faa4f60c1bd3731436e93a05204dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_dubinsky, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 05:13:03 np0005604790 nova_compute[252672]: 2026-02-02 10:13:03.041 252676 INFO nova.compute.manager [None req-163096ef-0097-41c6-9868-6bcb98926a87 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 51640fdb-9bb5-4927-8293-08caaa532942] Get console output
Feb  2 05:13:03 np0005604790 nova_compute[252672]: 2026-02-02 10:13:03.052 258300 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Feb  2 05:13:03 np0005604790 zealous_dubinsky[272500]: --> passed data devices: 0 physical, 1 LVM
Feb  2 05:13:03 np0005604790 zealous_dubinsky[272500]: --> All data devices are unavailable
Feb  2 05:13:03 np0005604790 systemd[1]: libpod-aaceb2b6fa430ef56db657588518625dd39faa4f60c1bd3731436e93a05204dc.scope: Deactivated successfully.
Feb  2 05:13:03 np0005604790 podman[272484]: 2026-02-02 10:13:03.410237119 +0000 UTC m=+0.501468783 container died aaceb2b6fa430ef56db657588518625dd39faa4f60c1bd3731436e93a05204dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_dubinsky, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS)
Feb  2 05:13:03 np0005604790 systemd[1]: var-lib-containers-storage-overlay-c13454001df33d48e4bc97d707b9ccfcbef471904f7cf1f112c037d519b5500e-merged.mount: Deactivated successfully.
Feb  2 05:13:03 np0005604790 podman[272484]: 2026-02-02 10:13:03.462784328 +0000 UTC m=+0.554015982 container remove aaceb2b6fa430ef56db657588518625dd39faa4f60c1bd3731436e93a05204dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_dubinsky, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 05:13:03 np0005604790 systemd[1]: libpod-conmon-aaceb2b6fa430ef56db657588518625dd39faa4f60c1bd3731436e93a05204dc.scope: Deactivated successfully.
Feb  2 05:13:03 np0005604790 podman[272516]: 2026-02-02 10:13:03.568004951 +0000 UTC m=+0.128294974 container health_status e39122d9482da8df802204ab6c35fb7c982874580968140e6f06bdfc8eefae36 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Feb  2 05:13:03 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:13:03 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:13:03 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:13:03.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
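The anonymous "HEAD / HTTP/1.0" requests arriving about once a second from 192.168.122.100 and 192.168.122.102, each answered 200 with near-zero latency, look like load-balancer health probes against the RGW beast frontend. A minimal sketch of such a probe; the listen host and port are assumptions, since the log records only the probing clients:

    import http.client

    # Substitute the real RGW beast endpoint; "localhost", 8080 are guesses.
    conn = http.client.HTTPConnection("localhost", 8080, timeout=2)
    conn.request("HEAD", "/")
    print(conn.getresponse().status)   # 200 while radosgw is serving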
Feb  2 05:13:03 np0005604790 nova_compute[252672]: 2026-02-02 10:13:03.712 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:13:03 np0005604790 NetworkManager[49024]: <info>  [1770027183.7132] manager: (patch-provnet-3738ab71-03c6-44c1-bc4f-10cf3e96782e-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/57)
Feb  2 05:13:03 np0005604790 NetworkManager[49024]: <info>  [1770027183.7152] manager: (patch-br-int-to-provnet-3738ab71-03c6-44c1-bc4f-10cf3e96782e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/58)
Feb  2 05:13:03 np0005604790 ovn_controller[154631]: 2026-02-02T10:13:03Z|00087|binding|INFO|Releasing lport ce0ea125-e6c2-41cd-b9ad-71cce6387108 from this chassis (sb_readonly=0)
Feb  2 05:13:03 np0005604790 nova_compute[252672]: 2026-02-02 10:13:03.740 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:13:03 np0005604790 nova_compute[252672]: 2026-02-02 10:13:03.745 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:13:04 np0005604790 nova_compute[252672]: 2026-02-02 10:13:04.026 252676 INFO nova.compute.manager [None req-568a79e4-04fe-498a-9bd7-a972910f9e7c 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 51640fdb-9bb5-4927-8293-08caaa532942] Get console output#033[00m
Feb  2 05:13:04 np0005604790 nova_compute[252672]: 2026-02-02 10:13:04.035 258300 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Feb  2 05:13:04 np0005604790 podman[272647]: 2026-02-02 10:13:04.049690749 +0000 UTC m=+0.044682363 container create 4f122e16253d1aeace8058a53bf1c06e3902954749573acc2dd29c33d0004a06 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_rosalind, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb  2 05:13:04 np0005604790 systemd[1]: Started libpod-conmon-4f122e16253d1aeace8058a53bf1c06e3902954749573acc2dd29c33d0004a06.scope.
Feb  2 05:13:04 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:13:04 np0005604790 podman[272647]: 2026-02-02 10:13:04.029331451 +0000 UTC m=+0.024323155 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:13:04 np0005604790 podman[272647]: 2026-02-02 10:13:04.138388604 +0000 UTC m=+0.133380238 container init 4f122e16253d1aeace8058a53bf1c06e3902954749573acc2dd29c33d0004a06 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_rosalind, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 05:13:04 np0005604790 podman[272647]: 2026-02-02 10:13:04.143757436 +0000 UTC m=+0.138749050 container start 4f122e16253d1aeace8058a53bf1c06e3902954749573acc2dd29c33d0004a06 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_rosalind, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 05:13:04 np0005604790 podman[272647]: 2026-02-02 10:13:04.146923189 +0000 UTC m=+0.141914833 container attach 4f122e16253d1aeace8058a53bf1c06e3902954749573acc2dd29c33d0004a06 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_rosalind, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Feb  2 05:13:04 np0005604790 awesome_rosalind[272663]: 167 167
Feb  2 05:13:04 np0005604790 systemd[1]: libpod-4f122e16253d1aeace8058a53bf1c06e3902954749573acc2dd29c33d0004a06.scope: Deactivated successfully.
Feb  2 05:13:04 np0005604790 podman[272647]: 2026-02-02 10:13:04.149572579 +0000 UTC m=+0.144564213 container died 4f122e16253d1aeace8058a53bf1c06e3902954749573acc2dd29c33d0004a06 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_rosalind, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Feb  2 05:13:04 np0005604790 systemd[1]: var-lib-containers-storage-overlay-3abbc29cbeb64981949593b830333b23ee78ae90cafcea38ac27b03df6c09f21-merged.mount: Deactivated successfully.
Feb  2 05:13:04 np0005604790 podman[272647]: 2026-02-02 10:13:04.189130205 +0000 UTC m=+0.184121819 container remove 4f122e16253d1aeace8058a53bf1c06e3902954749573acc2dd29c33d0004a06 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_rosalind, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 05:13:04 np0005604790 systemd[1]: libpod-conmon-4f122e16253d1aeace8058a53bf1c06e3902954749573acc2dd29c33d0004a06.scope: Deactivated successfully.
Feb  2 05:13:04 np0005604790 podman[272688]: 2026-02-02 10:13:04.36780081 +0000 UTC m=+0.061047095 container create 2b187b12e4916ffcc312df472703f7cc7239ecffe312029bf432a97b5fdc7f96 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_booth, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 05:13:04 np0005604790 systemd[1]: Started libpod-conmon-2b187b12e4916ffcc312df472703f7cc7239ecffe312029bf432a97b5fdc7f96.scope.
Feb  2 05:13:04 np0005604790 podman[272688]: 2026-02-02 10:13:04.344150025 +0000 UTC m=+0.037396400 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:13:04 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:13:04 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f96096d51923afab8ad1fdc8459224322743681d6969de69dd246a4e41264ab/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 05:13:04 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f96096d51923afab8ad1fdc8459224322743681d6969de69dd246a4e41264ab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 05:13:04 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f96096d51923afab8ad1fdc8459224322743681d6969de69dd246a4e41264ab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 05:13:04 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f96096d51923afab8ad1fdc8459224322743681d6969de69dd246a4e41264ab/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
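The 0x7fffffff in these xfs remount messages is the largest 32-bit signed time_t, i.e. the classic Y2038 limit that applies to xfs inodes without the bigtime feature; a one-liner confirms the date:

    from datetime import datetime, timezone

    print(datetime.fromtimestamp(0x7fffffff, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00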
Feb  2 05:13:04 np0005604790 podman[272688]: 2026-02-02 10:13:04.47063229 +0000 UTC m=+0.163878565 container init 2b187b12e4916ffcc312df472703f7cc7239ecffe312029bf432a97b5fdc7f96 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_booth, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 05:13:04 np0005604790 podman[272688]: 2026-02-02 10:13:04.485634407 +0000 UTC m=+0.178880692 container start 2b187b12e4916ffcc312df472703f7cc7239ecffe312029bf432a97b5fdc7f96 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb  2 05:13:04 np0005604790 podman[272688]: 2026-02-02 10:13:04.489732205 +0000 UTC m=+0.182978500 container attach 2b187b12e4916ffcc312df472703f7cc7239ecffe312029bf432a97b5fdc7f96 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_booth, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb  2 05:13:04 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:13:04 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:13:04 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:13:04.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:13:04 np0005604790 elegant_booth[272704]: {
Feb  2 05:13:04 np0005604790 elegant_booth[272704]:    "1": [
Feb  2 05:13:04 np0005604790 elegant_booth[272704]:        {
Feb  2 05:13:04 np0005604790 elegant_booth[272704]:            "devices": [
Feb  2 05:13:04 np0005604790 elegant_booth[272704]:                "/dev/loop3"
Feb  2 05:13:04 np0005604790 elegant_booth[272704]:            ],
Feb  2 05:13:04 np0005604790 elegant_booth[272704]:            "lv_name": "ceph_lv0",
Feb  2 05:13:04 np0005604790 elegant_booth[272704]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 05:13:04 np0005604790 elegant_booth[272704]:            "lv_size": "21470642176",
Feb  2 05:13:04 np0005604790 elegant_booth[272704]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d241d473-9fcb-5f74-b163-f1ca4454e7f1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fabfc705-a3af-416c-81a4-3fd4d777fb5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 05:13:04 np0005604790 elegant_booth[272704]:            "lv_uuid": "lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a",
Feb  2 05:13:04 np0005604790 elegant_booth[272704]:            "name": "ceph_lv0",
Feb  2 05:13:04 np0005604790 elegant_booth[272704]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 05:13:04 np0005604790 elegant_booth[272704]:            "tags": {
Feb  2 05:13:04 np0005604790 elegant_booth[272704]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 05:13:04 np0005604790 elegant_booth[272704]:                "ceph.block_uuid": "lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a",
Feb  2 05:13:04 np0005604790 elegant_booth[272704]:                "ceph.cephx_lockbox_secret": "",
Feb  2 05:13:04 np0005604790 elegant_booth[272704]:                "ceph.cluster_fsid": "d241d473-9fcb-5f74-b163-f1ca4454e7f1",
Feb  2 05:13:04 np0005604790 elegant_booth[272704]:                "ceph.cluster_name": "ceph",
Feb  2 05:13:04 np0005604790 elegant_booth[272704]:                "ceph.crush_device_class": "",
Feb  2 05:13:04 np0005604790 elegant_booth[272704]:                "ceph.encrypted": "0",
Feb  2 05:13:04 np0005604790 elegant_booth[272704]:                "ceph.osd_fsid": "fabfc705-a3af-416c-81a4-3fd4d777fb5f",
Feb  2 05:13:04 np0005604790 elegant_booth[272704]:                "ceph.osd_id": "1",
Feb  2 05:13:04 np0005604790 elegant_booth[272704]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 05:13:04 np0005604790 elegant_booth[272704]:                "ceph.type": "block",
Feb  2 05:13:04 np0005604790 elegant_booth[272704]:                "ceph.vdo": "0",
Feb  2 05:13:04 np0005604790 elegant_booth[272704]:                "ceph.with_tpm": "0"
Feb  2 05:13:04 np0005604790 elegant_booth[272704]:            },
Feb  2 05:13:04 np0005604790 elegant_booth[272704]:            "type": "block",
Feb  2 05:13:04 np0005604790 elegant_booth[272704]:            "vg_name": "ceph_vg0"
Feb  2 05:13:04 np0005604790 elegant_booth[272704]:        }
Feb  2 05:13:04 np0005604790 elegant_booth[272704]:    ]
Feb  2 05:13:04 np0005604790 elegant_booth[272704]: }
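elegant_booth printed `ceph-volume lvm list --format json`-style output: a map keyed by OSD id, each entry describing the backing LV and its ceph.* tags. A minimal sketch of pulling out the essentials, using a trimmed inline sample in the same shape as the block above:

    import json

    raw = """{"1": [{"lv_path": "/dev/ceph_vg0/ceph_lv0", "type": "block",
                     "tags": {"ceph.osd_fsid": "fabfc705-a3af-416c-81a4-3fd4d777fb5f"}}]}"""

    for osd_id, lvs in json.loads(raw).items():
        for lv in lvs:
            print(f"osd.{osd_id}: {lv['type']} on {lv['lv_path']} "
                  f"(fsid {lv['tags']['ceph.osd_fsid']})")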
Feb  2 05:13:04 np0005604790 systemd[1]: libpod-2b187b12e4916ffcc312df472703f7cc7239ecffe312029bf432a97b5fdc7f96.scope: Deactivated successfully.
Feb  2 05:13:04 np0005604790 podman[272688]: 2026-02-02 10:13:04.793430196 +0000 UTC m=+0.486676501 container died 2b187b12e4916ffcc312df472703f7cc7239ecffe312029bf432a97b5fdc7f96 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Feb  2 05:13:04 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1010: 353 pgs: 353 active+clean; 121 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 391 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Feb  2 05:13:04 np0005604790 systemd[1]: var-lib-containers-storage-overlay-6f96096d51923afab8ad1fdc8459224322743681d6969de69dd246a4e41264ab-merged.mount: Deactivated successfully.
Feb  2 05:13:04 np0005604790 podman[272688]: 2026-02-02 10:13:04.847223819 +0000 UTC m=+0.540470124 container remove 2b187b12e4916ffcc312df472703f7cc7239ecffe312029bf432a97b5fdc7f96 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 05:13:04 np0005604790 systemd[1]: libpod-conmon-2b187b12e4916ffcc312df472703f7cc7239ecffe312029bf432a97b5fdc7f96.scope: Deactivated successfully.
Feb  2 05:13:04 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:13:04] "GET /metrics HTTP/1.1" 200 48477 "" "Prometheus/2.51.0"
Feb  2 05:13:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:13:04] "GET /metrics HTTP/1.1" 200 48477 "" "Prometheus/2.51.0"
Feb  2 05:13:05 np0005604790 nova_compute[252672]: 2026-02-02 10:13:04.998 252676 DEBUG nova.compute.manager [req-243c69f3-e763-4528-b883-9f416a2bb353 req-4a6d430c-3aff-4c89-a005-5cae9dbdf37e b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 51640fdb-9bb5-4927-8293-08caaa532942] Received event network-changed-792f51ec-051b-472a-bfc0-65b93275a823 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 05:13:05 np0005604790 nova_compute[252672]: 2026-02-02 10:13:04.999 252676 DEBUG nova.compute.manager [req-243c69f3-e763-4528-b883-9f416a2bb353 req-4a6d430c-3aff-4c89-a005-5cae9dbdf37e b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 51640fdb-9bb5-4927-8293-08caaa532942] Refreshing instance network info cache due to event network-changed-792f51ec-051b-472a-bfc0-65b93275a823. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb  2 05:13:05 np0005604790 nova_compute[252672]: 2026-02-02 10:13:04.999 252676 DEBUG oslo_concurrency.lockutils [req-243c69f3-e763-4528-b883-9f416a2bb353 req-4a6d430c-3aff-4c89-a005-5cae9dbdf37e b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Acquiring lock "refresh_cache-51640fdb-9bb5-4927-8293-08caaa532942" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb  2 05:13:05 np0005604790 nova_compute[252672]: 2026-02-02 10:13:04.999 252676 DEBUG oslo_concurrency.lockutils [req-243c69f3-e763-4528-b883-9f416a2bb353 req-4a6d430c-3aff-4c89-a005-5cae9dbdf37e b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Acquired lock "refresh_cache-51640fdb-9bb5-4927-8293-08caaa532942" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb  2 05:13:05 np0005604790 nova_compute[252672]: 2026-02-02 10:13:05.000 252676 DEBUG nova.network.neutron [req-243c69f3-e763-4528-b883-9f416a2bb353 req-4a6d430c-3aff-4c89-a005-5cae9dbdf37e b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 51640fdb-9bb5-4927-8293-08caaa532942] Refreshing network info cache for port 792f51ec-051b-472a-bfc0-65b93275a823 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb  2 05:13:05 np0005604790 nova_compute[252672]: 2026-02-02 10:13:05.057 252676 DEBUG oslo_concurrency.lockutils [None req-e613c665-83db-4ff2-874a-9033b2c41a8d 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Acquiring lock "51640fdb-9bb5-4927-8293-08caaa532942" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 05:13:05 np0005604790 nova_compute[252672]: 2026-02-02 10:13:05.057 252676 DEBUG oslo_concurrency.lockutils [None req-e613c665-83db-4ff2-874a-9033b2c41a8d 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Lock "51640fdb-9bb5-4927-8293-08caaa532942" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 05:13:05 np0005604790 nova_compute[252672]: 2026-02-02 10:13:05.057 252676 DEBUG oslo_concurrency.lockutils [None req-e613c665-83db-4ff2-874a-9033b2c41a8d 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Acquiring lock "51640fdb-9bb5-4927-8293-08caaa532942-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 05:13:05 np0005604790 nova_compute[252672]: 2026-02-02 10:13:05.058 252676 DEBUG oslo_concurrency.lockutils [None req-e613c665-83db-4ff2-874a-9033b2c41a8d 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Lock "51640fdb-9bb5-4927-8293-08caaa532942-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 05:13:05 np0005604790 nova_compute[252672]: 2026-02-02 10:13:05.058 252676 DEBUG oslo_concurrency.lockutils [None req-e613c665-83db-4ff2-874a-9033b2c41a8d 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Lock "51640fdb-9bb5-4927-8293-08caaa532942-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 05:13:05 np0005604790 nova_compute[252672]: 2026-02-02 10:13:05.059 252676 INFO nova.compute.manager [None req-e613c665-83db-4ff2-874a-9033b2c41a8d 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 51640fdb-9bb5-4927-8293-08caaa532942] Terminating instance#033[00m
Feb  2 05:13:05 np0005604790 nova_compute[252672]: 2026-02-02 10:13:05.061 252676 DEBUG nova.compute.manager [None req-e613c665-83db-4ff2-874a-9033b2c41a8d 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 51640fdb-9bb5-4927-8293-08caaa532942] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
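The terminate path above is serialized with oslo.concurrency locks: one lock on the instance UUID around do_terminate_instance, plus a nested short-lived lock on "<uuid>-events" while pending events are cleared. A minimal sketch of the same pattern; the options nova actually passes to the decorator are omitted:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("51640fdb-9bb5-4927-8293-08caaa532942")
    def do_terminate_instance():
        # clear_events_for_instance takes its own "<uuid>-events" lock inside
        print("terminating under the per-instance lock")

    do_terminate_instance()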
Feb  2 05:13:05 np0005604790 kernel: tap792f51ec-05 (unregistering): left promiscuous mode
Feb  2 05:13:05 np0005604790 NetworkManager[49024]: <info>  [1770027185.1201] device (tap792f51ec-05): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb  2 05:13:05 np0005604790 nova_compute[252672]: 2026-02-02 10:13:05.168 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:13:05 np0005604790 nova_compute[252672]: 2026-02-02 10:13:05.172 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:13:05 np0005604790 ovn_controller[154631]: 2026-02-02T10:13:05Z|00088|binding|INFO|Releasing lport 792f51ec-051b-472a-bfc0-65b93275a823 from this chassis (sb_readonly=0)
Feb  2 05:13:05 np0005604790 ovn_controller[154631]: 2026-02-02T10:13:05Z|00089|binding|INFO|Setting lport 792f51ec-051b-472a-bfc0-65b93275a823 down in Southbound
Feb  2 05:13:05 np0005604790 ovn_controller[154631]: 2026-02-02T10:13:05Z|00090|binding|INFO|Removing iface tap792f51ec-05 ovn-installed in OVS
Feb  2 05:13:05 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:13:05.184 165364 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b3:52:4f 10.100.0.4'], port_security=['fa:16:3e:b3:52:4f 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '51640fdb-9bb5-4927-8293-08caaa532942', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-31e2c386-2e8c-4f03-82cf-3176ce6f5a71', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'efbfe697ca674d72b47da5adf3e42c0c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9a5f7f58-6605-4d49-8d32-d2d771025f9b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a87091be-e2aa-4065-85c2-dc42a077dfe1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa8c6e46640>], logical_port=792f51ec-051b-472a-bfc0-65b93275a823) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fa8c6e46640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 05:13:05 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:13:05.185 165364 INFO neutron.agent.ovn.metadata.agent [-] Port 792f51ec-051b-472a-bfc0-65b93275a823 in datapath 31e2c386-2e8c-4f03-82cf-3176ce6f5a71 unbound from our chassis#033[00m
Feb  2 05:13:05 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:13:05.186 165364 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 31e2c386-2e8c-4f03-82cf-3176ce6f5a71, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb  2 05:13:05 np0005604790 nova_compute[252672]: 2026-02-02 10:13:05.185 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:13:05 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:13:05.188 257524 DEBUG oslo.privsep.daemon [-] privsep: reply[b1a95ab7-ec56-472f-bfff-af1b541c4e50]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:13:05 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:13:05.188 165364 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-31e2c386-2e8c-4f03-82cf-3176ce6f5a71 namespace which is not needed anymore#033[00m
Feb  2 05:13:05 np0005604790 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Feb  2 05:13:05 np0005604790 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d0000000d.scope: Consumed 12.821s CPU time.
Feb  2 05:13:05 np0005604790 systemd-machined[219024]: Machine qemu-5-instance-0000000d terminated.
Feb  2 05:13:05 np0005604790 nova_compute[252672]: 2026-02-02 10:13:05.299 252676 INFO nova.virt.libvirt.driver [-] [instance: 51640fdb-9bb5-4927-8293-08caaa532942] Instance destroyed successfully.#033[00m
Feb  2 05:13:05 np0005604790 nova_compute[252672]: 2026-02-02 10:13:05.300 252676 DEBUG nova.objects.instance [None req-e613c665-83db-4ff2-874a-9033b2c41a8d 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Lazy-loading 'resources' on Instance uuid 51640fdb-9bb5-4927-8293-08caaa532942 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb  2 05:13:05 np0005604790 nova_compute[252672]: 2026-02-02 10:13:05.313 252676 DEBUG nova.virt.libvirt.vif [None req-e613c665-83db-4ff2-874a-9033b2c41a8d 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-02T10:12:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-682360606',display_name='tempest-TestNetworkBasicOps-server-682360606',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-682360606',id=13,image_ref='d5e062d7-95ef-409c-9ad0-60f7cf6f44ce',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBILK3wOwGXM1goDImpFgEf77tzBq6YVZpuogyNt7BE+Zy6UdsArmDPsiLvX2YZMZ50Eg3ODUS0PcWsAtlmHfFptp//Krplct4XxXAHZauV/PWIHH81rXtn5nOQYjLrfoqA==',key_name='tempest-TestNetworkBasicOps-1013861554',keypairs=<?>,launch_index=0,launched_at=2026-02-02T10:12:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='efbfe697ca674d72b47da5adf3e42c0c',ramdisk_id='',reservation_id='r-g9n2n1hh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='d5e062d7-95ef-409c-9ad0-60f7cf6f44ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-793549693',owner_user_name='tempest-TestNetworkBasicOps-793549693-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-02T10:12:43Z,user_data=None,user_id='1b1695a2a70d4aa0aa350ba17d8f6d5e',uuid=51640fdb-9bb5-4927-8293-08caaa532942,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "792f51ec-051b-472a-bfc0-65b93275a823", "address": "fa:16:3e:b3:52:4f", "network": {"id": "31e2c386-2e8c-4f03-82cf-3176ce6f5a71", "bridge": "br-int", "label": "tempest-network-smoke--1434795689", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "efbfe697ca674d72b47da5adf3e42c0c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap792f51ec-05", "ovs_interfaceid": "792f51ec-051b-472a-bfc0-65b93275a823", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb  2 05:13:05 np0005604790 nova_compute[252672]: 2026-02-02 10:13:05.313 252676 DEBUG nova.network.os_vif_util [None req-e613c665-83db-4ff2-874a-9033b2c41a8d 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Converting VIF {"id": "792f51ec-051b-472a-bfc0-65b93275a823", "address": "fa:16:3e:b3:52:4f", "network": {"id": "31e2c386-2e8c-4f03-82cf-3176ce6f5a71", "bridge": "br-int", "label": "tempest-network-smoke--1434795689", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "efbfe697ca674d72b47da5adf3e42c0c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap792f51ec-05", "ovs_interfaceid": "792f51ec-051b-472a-bfc0-65b93275a823", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb  2 05:13:05 np0005604790 nova_compute[252672]: 2026-02-02 10:13:05.314 252676 DEBUG nova.network.os_vif_util [None req-e613c665-83db-4ff2-874a-9033b2c41a8d 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b3:52:4f,bridge_name='br-int',has_traffic_filtering=True,id=792f51ec-051b-472a-bfc0-65b93275a823,network=Network(31e2c386-2e8c-4f03-82cf-3176ce6f5a71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap792f51ec-05') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb  2 05:13:05 np0005604790 nova_compute[252672]: 2026-02-02 10:13:05.314 252676 DEBUG os_vif [None req-e613c665-83db-4ff2-874a-9033b2c41a8d 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b3:52:4f,bridge_name='br-int',has_traffic_filtering=True,id=792f51ec-051b-472a-bfc0-65b93275a823,network=Network(31e2c386-2e8c-4f03-82cf-3176ce6f5a71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap792f51ec-05') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb  2 05:13:05 np0005604790 nova_compute[252672]: 2026-02-02 10:13:05.315 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:13:05 np0005604790 nova_compute[252672]: 2026-02-02 10:13:05.315 252676 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap792f51ec-05, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
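The DelPortCommand transaction above is ovsdbapp removing the tap interface from br-int after the VIF unplug. A minimal sketch issuing the same command directly against the local ovsdb-server; the socket path and timeout are assumptions:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Assumption: default local ovsdb-server socket; requires access to it.
    idl = connection.OvsdbIdl.from_server("unix:/run/openvswitch/db.sock", "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))
    api.del_port("tap792f51ec-05", bridge="br-int", if_exists=True).execute(check_error=True)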
Feb  2 05:13:05 np0005604790 nova_compute[252672]: 2026-02-02 10:13:05.318 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:13:05 np0005604790 nova_compute[252672]: 2026-02-02 10:13:05.322 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb  2 05:13:05 np0005604790 neutron-haproxy-ovnmeta-31e2c386-2e8c-4f03-82cf-3176ce6f5a71[272138]: [NOTICE]   (272143) : haproxy version is 2.8.14-c23fe91
Feb  2 05:13:05 np0005604790 neutron-haproxy-ovnmeta-31e2c386-2e8c-4f03-82cf-3176ce6f5a71[272138]: [NOTICE]   (272143) : path to executable is /usr/sbin/haproxy
Feb  2 05:13:05 np0005604790 neutron-haproxy-ovnmeta-31e2c386-2e8c-4f03-82cf-3176ce6f5a71[272138]: [WARNING]  (272143) : Exiting Master process...
Feb  2 05:13:05 np0005604790 neutron-haproxy-ovnmeta-31e2c386-2e8c-4f03-82cf-3176ce6f5a71[272138]: [WARNING]  (272143) : Exiting Master process...
Feb  2 05:13:05 np0005604790 neutron-haproxy-ovnmeta-31e2c386-2e8c-4f03-82cf-3176ce6f5a71[272138]: [ALERT]    (272143) : Current worker (272145) exited with code 143 (Terminated)
Feb  2 05:13:05 np0005604790 neutron-haproxy-ovnmeta-31e2c386-2e8c-4f03-82cf-3176ce6f5a71[272138]: [WARNING]  (272143) : All workers exited. Exiting... (0)
Feb  2 05:13:05 np0005604790 systemd[1]: libpod-94ab34e156d899c5dccf08cc1987499e0916f0581aa92549dd5e0fa5e98eb192.scope: Deactivated successfully.
Feb  2 05:13:05 np0005604790 nova_compute[252672]: 2026-02-02 10:13:05.325 252676 INFO os_vif [None req-e613c665-83db-4ff2-874a-9033b2c41a8d 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b3:52:4f,bridge_name='br-int',has_traffic_filtering=True,id=792f51ec-051b-472a-bfc0-65b93275a823,network=Network(31e2c386-2e8c-4f03-82cf-3176ce6f5a71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap792f51ec-05')#033[00m
Feb  2 05:13:05 np0005604790 podman[272803]: 2026-02-02 10:13:05.329292357 +0000 UTC m=+0.059625908 container died 94ab34e156d899c5dccf08cc1987499e0916f0581aa92549dd5e0fa5e98eb192 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc, name=neutron-haproxy-ovnmeta-31e2c386-2e8c-4f03-82cf-3176ce6f5a71, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Feb  2 05:13:05 np0005604790 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-94ab34e156d899c5dccf08cc1987499e0916f0581aa92549dd5e0fa5e98eb192-userdata-shm.mount: Deactivated successfully.
Feb  2 05:13:05 np0005604790 systemd[1]: var-lib-containers-storage-overlay-839808039b3e351ef9de4c3c5d5c47441143747691389c9e7b3bf1ec38036aae-merged.mount: Deactivated successfully.
Feb  2 05:13:05 np0005604790 podman[272803]: 2026-02-02 10:13:05.377787139 +0000 UTC m=+0.108120690 container cleanup 94ab34e156d899c5dccf08cc1987499e0916f0581aa92549dd5e0fa5e98eb192 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc, name=neutron-haproxy-ovnmeta-31e2c386-2e8c-4f03-82cf-3176ce6f5a71, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb  2 05:13:05 np0005604790 systemd[1]: libpod-conmon-94ab34e156d899c5dccf08cc1987499e0916f0581aa92549dd5e0fa5e98eb192.scope: Deactivated successfully.
Feb  2 05:13:05 np0005604790 podman[272883]: 2026-02-02 10:13:05.449527577 +0000 UTC m=+0.049641604 container remove 94ab34e156d899c5dccf08cc1987499e0916f0581aa92549dd5e0fa5e98eb192 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc, name=neutron-haproxy-ovnmeta-31e2c386-2e8c-4f03-82cf-3176ce6f5a71, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2)
Feb  2 05:13:05 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:13:05.454 257524 DEBUG oslo.privsep.daemon [-] privsep: reply[dab7f4db-bd56-4e02-9b6b-30486e188445]: (4, ('Mon Feb  2 10:13:05 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-31e2c386-2e8c-4f03-82cf-3176ce6f5a71 (94ab34e156d899c5dccf08cc1987499e0916f0581aa92549dd5e0fa5e98eb192)\n94ab34e156d899c5dccf08cc1987499e0916f0581aa92549dd5e0fa5e98eb192\nMon Feb  2 10:13:05 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-31e2c386-2e8c-4f03-82cf-3176ce6f5a71 (94ab34e156d899c5dccf08cc1987499e0916f0581aa92549dd5e0fa5e98eb192)\n94ab34e156d899c5dccf08cc1987499e0916f0581aa92549dd5e0fa5e98eb192\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:13:05 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:13:05.456 257524 DEBUG oslo.privsep.daemon [-] privsep: reply[e3522d4f-8e3e-4c00-b556-e105d69b9b45]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:13:05 np0005604790 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb  2 05:13:05 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:13:05.457 165364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap31e2c386-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 05:13:05 np0005604790 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb  2 05:13:05 np0005604790 nova_compute[252672]: 2026-02-02 10:13:05.459 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:13:05 np0005604790 kernel: tap31e2c386-20: left promiscuous mode
Feb  2 05:13:05 np0005604790 nova_compute[252672]: 2026-02-02 10:13:05.464 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:13:05 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:13:05.469 257524 DEBUG oslo.privsep.daemon [-] privsep: reply[84317922-dce8-4fc0-8edf-bccd170d35c8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:13:05 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:13:05.483 257524 DEBUG oslo.privsep.daemon [-] privsep: reply[8572e8aa-7f8f-4188-81a2-526a6cfced52]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:13:05 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:13:05.484 257524 DEBUG oslo.privsep.daemon [-] privsep: reply[cc97e26a-1f03-40b6-8382-39b340afba3f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:13:05 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:13:05.499 257524 DEBUG oslo.privsep.daemon [-] privsep: reply[31ef51b3-9492-4621-84c8-1b8530f1065c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 432227, 'reachable_time': 16242, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 272919, 'error': None, 'target': 'ovnmeta-31e2c386-2e8c-4f03-82cf-3176ce6f5a71', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:13:05 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:13:05.502 166028 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-31e2c386-2e8c-4f03-82cf-3176ce6f5a71 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb  2 05:13:05 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:13:05.502 166028 DEBUG oslo.privsep.daemon [-] privsep: reply[19803327-c59e-407d-819d-2434673ea5c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb  2 05:13:05 np0005604790 systemd[1]: run-netns-ovnmeta\x2d31e2c386\x2d2e8c\x2d4f03\x2d82cf\x2d3176ce6f5a71.mount: Deactivated successfully.
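With the last VIF on the datapath gone, the metadata agent tears down the ovnmeta- namespace; neutron's privileged ip_lib helper does the removal via a netlink backend. A minimal sketch of the same step using pyroute2 (an assumption standing in for neutron's internal helper; needs root, and the namespace must still exist):

    from pyroute2 import netns

    netns.remove("ovnmeta-31e2c386-2e8c-4f03-82cf-3176ce6f5a71")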
Feb  2 05:13:05 np0005604790 podman[272907]: 2026-02-02 10:13:05.511727112 +0000 UTC m=+0.036497597 container create 60b79de24a768221a283d81a5557690083b9da77c6b5ec3477f4494005cff6df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_babbage, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 05:13:05 np0005604790 systemd[1]: Started libpod-conmon-60b79de24a768221a283d81a5557690083b9da77c6b5ec3477f4494005cff6df.scope.
Feb  2 05:13:05 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:13:05 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:13:05 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:13:05 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:13:05.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:13:05 np0005604790 podman[272907]: 2026-02-02 10:13:05.584841465 +0000 UTC m=+0.109612000 container init 60b79de24a768221a283d81a5557690083b9da77c6b5ec3477f4494005cff6df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_babbage, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 05:13:05 np0005604790 podman[272907]: 2026-02-02 10:13:05.590630468 +0000 UTC m=+0.115400953 container start 60b79de24a768221a283d81a5557690083b9da77c6b5ec3477f4494005cff6df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_babbage, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 05:13:05 np0005604790 podman[272907]: 2026-02-02 10:13:05.496635502 +0000 UTC m=+0.021405997 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:13:05 np0005604790 podman[272907]: 2026-02-02 10:13:05.593758011 +0000 UTC m=+0.118528506 container attach 60b79de24a768221a283d81a5557690083b9da77c6b5ec3477f4494005cff6df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 05:13:05 np0005604790 lucid_babbage[272927]: 167 167
Feb  2 05:13:05 np0005604790 systemd[1]: libpod-60b79de24a768221a283d81a5557690083b9da77c6b5ec3477f4494005cff6df.scope: Deactivated successfully.
Feb  2 05:13:05 np0005604790 podman[272907]: 2026-02-02 10:13:05.596949255 +0000 UTC m=+0.121719780 container died 60b79de24a768221a283d81a5557690083b9da77c6b5ec3477f4494005cff6df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_babbage, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Feb  2 05:13:05 np0005604790 nova_compute[252672]: 2026-02-02 10:13:05.609 252676 DEBUG nova.compute.manager [req-bbec13f9-f437-4acc-ae7e-fa8f223e04e5 req-e87ac429-b1bc-4eff-ac13-8ec856cff50a b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 51640fdb-9bb5-4927-8293-08caaa532942] Received event network-vif-unplugged-792f51ec-051b-472a-bfc0-65b93275a823 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 05:13:05 np0005604790 nova_compute[252672]: 2026-02-02 10:13:05.612 252676 DEBUG oslo_concurrency.lockutils [req-bbec13f9-f437-4acc-ae7e-fa8f223e04e5 req-e87ac429-b1bc-4eff-ac13-8ec856cff50a b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Acquiring lock "51640fdb-9bb5-4927-8293-08caaa532942-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 05:13:05 np0005604790 nova_compute[252672]: 2026-02-02 10:13:05.612 252676 DEBUG oslo_concurrency.lockutils [req-bbec13f9-f437-4acc-ae7e-fa8f223e04e5 req-e87ac429-b1bc-4eff-ac13-8ec856cff50a b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Lock "51640fdb-9bb5-4927-8293-08caaa532942-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 05:13:05 np0005604790 nova_compute[252672]: 2026-02-02 10:13:05.612 252676 DEBUG oslo_concurrency.lockutils [req-bbec13f9-f437-4acc-ae7e-fa8f223e04e5 req-e87ac429-b1bc-4eff-ac13-8ec856cff50a b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Lock "51640fdb-9bb5-4927-8293-08caaa532942-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 05:13:05 np0005604790 nova_compute[252672]: 2026-02-02 10:13:05.613 252676 DEBUG nova.compute.manager [req-bbec13f9-f437-4acc-ae7e-fa8f223e04e5 req-e87ac429-b1bc-4eff-ac13-8ec856cff50a b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 51640fdb-9bb5-4927-8293-08caaa532942] No waiting events found dispatching network-vif-unplugged-792f51ec-051b-472a-bfc0-65b93275a823 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 05:13:05 np0005604790 nova_compute[252672]: 2026-02-02 10:13:05.613 252676 DEBUG nova.compute.manager [req-bbec13f9-f437-4acc-ae7e-fa8f223e04e5 req-e87ac429-b1bc-4eff-ac13-8ec856cff50a b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 51640fdb-9bb5-4927-8293-08caaa532942] Received event network-vif-unplugged-792f51ec-051b-472a-bfc0-65b93275a823 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb  2 05:13:05 np0005604790 systemd[1]: var-lib-containers-storage-overlay-c2ff738fb4eb6fe6a88bf3f2d3b9c2e6bee7c840e02cfb96e8a16d806a1b6e6c-merged.mount: Deactivated successfully.
Feb  2 05:13:05 np0005604790 podman[272907]: 2026-02-02 10:13:05.637407855 +0000 UTC m=+0.162178360 container remove 60b79de24a768221a283d81a5557690083b9da77c6b5ec3477f4494005cff6df (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Feb  2 05:13:05 np0005604790 systemd[1]: libpod-conmon-60b79de24a768221a283d81a5557690083b9da77c6b5ec3477f4494005cff6df.scope: Deactivated successfully.
Feb  2 05:13:05 np0005604790 podman[272952]: 2026-02-02 10:13:05.784912046 +0000 UTC m=+0.036849336 container create 5348ba7acfe3b5150b3d980437c64815021d15565d2535985c415c517be52129 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_pare, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 05:13:05 np0005604790 nova_compute[252672]: 2026-02-02 10:13:05.804 252676 INFO nova.virt.libvirt.driver [None req-e613c665-83db-4ff2-874a-9033b2c41a8d 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 51640fdb-9bb5-4927-8293-08caaa532942] Deleting instance files /var/lib/nova/instances/51640fdb-9bb5-4927-8293-08caaa532942_del#033[00m
Feb  2 05:13:05 np0005604790 nova_compute[252672]: 2026-02-02 10:13:05.805 252676 INFO nova.virt.libvirt.driver [None req-e613c665-83db-4ff2-874a-9033b2c41a8d 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 51640fdb-9bb5-4927-8293-08caaa532942] Deletion of /var/lib/nova/instances/51640fdb-9bb5-4927-8293-08caaa532942_del complete#033[00m
Feb  2 05:13:05 np0005604790 nova_compute[252672]: 2026-02-02 10:13:05.809 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:13:05 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:13:05.809 165364 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:4f:4d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '4a:a7:f3:61:65:15'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb  2 05:13:05 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:13:05.810 165364 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb  2 05:13:05 np0005604790 systemd[1]: Started libpod-conmon-5348ba7acfe3b5150b3d980437c64815021d15565d2535985c415c517be52129.scope.
Feb  2 05:13:05 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:13:05 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2089aea958df583b9205f444f1ade29bf57fb744ecc50aeee79b0f5fe52affb0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 05:13:05 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2089aea958df583b9205f444f1ade29bf57fb744ecc50aeee79b0f5fe52affb0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 05:13:05 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2089aea958df583b9205f444f1ade29bf57fb744ecc50aeee79b0f5fe52affb0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 05:13:05 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2089aea958df583b9205f444f1ade29bf57fb744ecc50aeee79b0f5fe52affb0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 05:13:05 np0005604790 podman[272952]: 2026-02-02 10:13:05.849399531 +0000 UTC m=+0.101336831 container init 5348ba7acfe3b5150b3d980437c64815021d15565d2535985c415c517be52129 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_pare, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 05:13:05 np0005604790 podman[272952]: 2026-02-02 10:13:05.854931918 +0000 UTC m=+0.106869198 container start 5348ba7acfe3b5150b3d980437c64815021d15565d2535985c415c517be52129 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_pare, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 05:13:05 np0005604790 podman[272952]: 2026-02-02 10:13:05.858088681 +0000 UTC m=+0.110025961 container attach 5348ba7acfe3b5150b3d980437c64815021d15565d2535985c415c517be52129 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_pare, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Feb  2 05:13:05 np0005604790 podman[272952]: 2026-02-02 10:13:05.770187756 +0000 UTC m=+0.022125066 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:13:05 np0005604790 nova_compute[252672]: 2026-02-02 10:13:05.874 252676 INFO nova.compute.manager [None req-e613c665-83db-4ff2-874a-9033b2c41a8d 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] [instance: 51640fdb-9bb5-4927-8293-08caaa532942] Took 0.81 seconds to destroy the instance on the hypervisor.#033[00m
Feb  2 05:13:05 np0005604790 nova_compute[252672]: 2026-02-02 10:13:05.874 252676 DEBUG oslo.service.loopingcall [None req-e613c665-83db-4ff2-874a-9033b2c41a8d 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb  2 05:13:05 np0005604790 nova_compute[252672]: 2026-02-02 10:13:05.875 252676 DEBUG nova.compute.manager [-] [instance: 51640fdb-9bb5-4927-8293-08caaa532942] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb  2 05:13:05 np0005604790 nova_compute[252672]: 2026-02-02 10:13:05.875 252676 DEBUG nova.network.neutron [-] [instance: 51640fdb-9bb5-4927-8293-08caaa532942] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb  2 05:13:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:13:05 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:13:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:13:05 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:13:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:13:05 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:13:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:13:06 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:13:06 np0005604790 nova_compute[252672]: 2026-02-02 10:13:06.233 252676 DEBUG nova.network.neutron [req-243c69f3-e763-4528-b883-9f416a2bb353 req-4a6d430c-3aff-4c89-a005-5cae9dbdf37e b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 51640fdb-9bb5-4927-8293-08caaa532942] Updated VIF entry in instance network info cache for port 792f51ec-051b-472a-bfc0-65b93275a823. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb  2 05:13:06 np0005604790 nova_compute[252672]: 2026-02-02 10:13:06.234 252676 DEBUG nova.network.neutron [req-243c69f3-e763-4528-b883-9f416a2bb353 req-4a6d430c-3aff-4c89-a005-5cae9dbdf37e b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 51640fdb-9bb5-4927-8293-08caaa532942] Updating instance_info_cache with network_info: [{"id": "792f51ec-051b-472a-bfc0-65b93275a823", "address": "fa:16:3e:b3:52:4f", "network": {"id": "31e2c386-2e8c-4f03-82cf-3176ce6f5a71", "bridge": "br-int", "label": "tempest-network-smoke--1434795689", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "efbfe697ca674d72b47da5adf3e42c0c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap792f51ec-05", "ovs_interfaceid": "792f51ec-051b-472a-bfc0-65b93275a823", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 05:13:06 np0005604790 nova_compute[252672]: 2026-02-02 10:13:06.257 252676 DEBUG oslo_concurrency.lockutils [req-243c69f3-e763-4528-b883-9f416a2bb353 req-4a6d430c-3aff-4c89-a005-5cae9dbdf37e b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Releasing lock "refresh_cache-51640fdb-9bb5-4927-8293-08caaa532942" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb  2 05:13:06 np0005604790 lvm[273043]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 05:13:06 np0005604790 lvm[273043]: VG ceph_vg0 finished
Feb  2 05:13:06 np0005604790 naughty_pare[272969]: {}
Feb  2 05:13:06 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:13:06 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:13:06 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:13:06.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:13:06 np0005604790 systemd[1]: libpod-5348ba7acfe3b5150b3d980437c64815021d15565d2535985c415c517be52129.scope: Deactivated successfully.
Feb  2 05:13:06 np0005604790 systemd[1]: libpod-5348ba7acfe3b5150b3d980437c64815021d15565d2535985c415c517be52129.scope: Consumed 1.071s CPU time.
Feb  2 05:13:06 np0005604790 podman[272952]: 2026-02-02 10:13:06.583642428 +0000 UTC m=+0.835579758 container died 5348ba7acfe3b5150b3d980437c64815021d15565d2535985c415c517be52129 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_pare, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 05:13:06 np0005604790 nova_compute[252672]: 2026-02-02 10:13:06.593 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:13:06 np0005604790 systemd[1]: var-lib-containers-storage-overlay-2089aea958df583b9205f444f1ade29bf57fb744ecc50aeee79b0f5fe52affb0-merged.mount: Deactivated successfully.
Feb  2 05:13:06 np0005604790 podman[272952]: 2026-02-02 10:13:06.633940828 +0000 UTC m=+0.885878138 container remove 5348ba7acfe3b5150b3d980437c64815021d15565d2535985c415c517be52129 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_pare, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 05:13:06 np0005604790 systemd[1]: libpod-conmon-5348ba7acfe3b5150b3d980437c64815021d15565d2535985c415c517be52129.scope: Deactivated successfully.
Feb  2 05:13:06 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 05:13:06 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:13:06 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 05:13:06 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:13:06 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1011: 353 pgs: 353 active+clean; 121 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 391 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Feb  2 05:13:06 np0005604790 nova_compute[252672]: 2026-02-02 10:13:06.836 252676 DEBUG nova.network.neutron [-] [instance: 51640fdb-9bb5-4927-8293-08caaa532942] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb  2 05:13:06 np0005604790 nova_compute[252672]: 2026-02-02 10:13:06.856 252676 INFO nova.compute.manager [-] [instance: 51640fdb-9bb5-4927-8293-08caaa532942] Took 0.98 seconds to deallocate network for instance.#033[00m
Feb  2 05:13:06 np0005604790 nova_compute[252672]: 2026-02-02 10:13:06.922 252676 DEBUG oslo_concurrency.lockutils [None req-e613c665-83db-4ff2-874a-9033b2c41a8d 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 05:13:06 np0005604790 nova_compute[252672]: 2026-02-02 10:13:06.923 252676 DEBUG oslo_concurrency.lockutils [None req-e613c665-83db-4ff2-874a-9033b2c41a8d 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 05:13:06 np0005604790 nova_compute[252672]: 2026-02-02 10:13:06.990 252676 DEBUG oslo_concurrency.processutils [None req-e613c665-83db-4ff2-874a-9033b2c41a8d 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 05:13:07 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:13:07.176Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:13:07 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 05:13:07 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3058317273' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb  2 05:13:07 np0005604790 nova_compute[252672]: 2026-02-02 10:13:07.440 252676 DEBUG oslo_concurrency.processutils [None req-e613c665-83db-4ff2-874a-9033b2c41a8d 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 05:13:07 np0005604790 nova_compute[252672]: 2026-02-02 10:13:07.449 252676 DEBUG nova.compute.provider_tree [None req-e613c665-83db-4ff2-874a-9033b2c41a8d 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Inventory has not changed in ProviderTree for provider: 9e3db6fc-2145-4f13-bc7c-f7ae57d4e004 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 05:13:07 np0005604790 nova_compute[252672]: 2026-02-02 10:13:07.470 252676 DEBUG nova.scheduler.client.report [None req-e613c665-83db-4ff2-874a-9033b2c41a8d 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Inventory has not changed for provider 9e3db6fc-2145-4f13-bc7c-f7ae57d4e004 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 05:13:07 np0005604790 nova_compute[252672]: 2026-02-02 10:13:07.498 252676 DEBUG oslo_concurrency.lockutils [None req-e613c665-83db-4ff2-874a-9033b2c41a8d 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.576s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 05:13:07 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:13:07 np0005604790 nova_compute[252672]: 2026-02-02 10:13:07.533 252676 INFO nova.scheduler.client.report [None req-e613c665-83db-4ff2-874a-9033b2c41a8d 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Deleted allocations for instance 51640fdb-9bb5-4927-8293-08caaa532942#033[00m
Feb  2 05:13:07 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:13:07 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:13:07 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:13:07.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:13:07 np0005604790 nova_compute[252672]: 2026-02-02 10:13:07.606 252676 DEBUG oslo_concurrency.lockutils [None req-e613c665-83db-4ff2-874a-9033b2c41a8d 1b1695a2a70d4aa0aa350ba17d8f6d5e efbfe697ca674d72b47da5adf3e42c0c - - default default] Lock "51640fdb-9bb5-4927-8293-08caaa532942" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.549s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 05:13:07 np0005604790 nova_compute[252672]: 2026-02-02 10:13:07.712 252676 DEBUG nova.compute.manager [req-12a2f21e-76f5-46ac-b971-4f3dd21e4e86 req-754f59e0-5c7f-4a09-aaf2-1379a2ba2085 b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 51640fdb-9bb5-4927-8293-08caaa532942] Received event network-vif-plugged-792f51ec-051b-472a-bfc0-65b93275a823 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 05:13:07 np0005604790 nova_compute[252672]: 2026-02-02 10:13:07.713 252676 DEBUG oslo_concurrency.lockutils [req-12a2f21e-76f5-46ac-b971-4f3dd21e4e86 req-754f59e0-5c7f-4a09-aaf2-1379a2ba2085 b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Acquiring lock "51640fdb-9bb5-4927-8293-08caaa532942-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 05:13:07 np0005604790 nova_compute[252672]: 2026-02-02 10:13:07.713 252676 DEBUG oslo_concurrency.lockutils [req-12a2f21e-76f5-46ac-b971-4f3dd21e4e86 req-754f59e0-5c7f-4a09-aaf2-1379a2ba2085 b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Lock "51640fdb-9bb5-4927-8293-08caaa532942-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 05:13:07 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:13:07 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:13:07 np0005604790 nova_compute[252672]: 2026-02-02 10:13:07.714 252676 DEBUG oslo_concurrency.lockutils [req-12a2f21e-76f5-46ac-b971-4f3dd21e4e86 req-754f59e0-5c7f-4a09-aaf2-1379a2ba2085 b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] Lock "51640fdb-9bb5-4927-8293-08caaa532942-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 05:13:07 np0005604790 nova_compute[252672]: 2026-02-02 10:13:07.714 252676 DEBUG nova.compute.manager [req-12a2f21e-76f5-46ac-b971-4f3dd21e4e86 req-754f59e0-5c7f-4a09-aaf2-1379a2ba2085 b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 51640fdb-9bb5-4927-8293-08caaa532942] No waiting events found dispatching network-vif-plugged-792f51ec-051b-472a-bfc0-65b93275a823 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb  2 05:13:07 np0005604790 nova_compute[252672]: 2026-02-02 10:13:07.714 252676 WARNING nova.compute.manager [req-12a2f21e-76f5-46ac-b971-4f3dd21e4e86 req-754f59e0-5c7f-4a09-aaf2-1379a2ba2085 b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 51640fdb-9bb5-4927-8293-08caaa532942] Received unexpected event network-vif-plugged-792f51ec-051b-472a-bfc0-65b93275a823 for instance with vm_state deleted and task_state None.#033[00m
Feb  2 05:13:07 np0005604790 nova_compute[252672]: 2026-02-02 10:13:07.715 252676 DEBUG nova.compute.manager [req-12a2f21e-76f5-46ac-b971-4f3dd21e4e86 req-754f59e0-5c7f-4a09-aaf2-1379a2ba2085 b497715c83c54dd784cfd8facd16e324 8bec08e43900467887b10711a12caf82 - - default default] [instance: 51640fdb-9bb5-4927-8293-08caaa532942] Received event network-vif-deleted-792f51ec-051b-472a-bfc0-65b93275a823 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb  2 05:13:08 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:13:08 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:13:08 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:13:08.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:13:08 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1012: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 410 KiB/s rd, 2.1 MiB/s wr, 94 op/s
Feb  2 05:13:09 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:13:09 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:13:09 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:13:09.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:13:10 np0005604790 nova_compute[252672]: 2026-02-02 10:13:10.281 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 05:13:10 np0005604790 nova_compute[252672]: 2026-02-02 10:13:10.307 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 05:13:10 np0005604790 nova_compute[252672]: 2026-02-02 10:13:10.307 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 05:13:10 np0005604790 nova_compute[252672]: 2026-02-02 10:13:10.308 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 05:13:10 np0005604790 nova_compute[252672]: 2026-02-02 10:13:10.308 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb  2 05:13:10 np0005604790 nova_compute[252672]: 2026-02-02 10:13:10.308 252676 DEBUG oslo_concurrency.processutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 05:13:10 np0005604790 nova_compute[252672]: 2026-02-02 10:13:10.328 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:13:10 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:13:10 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:13:10 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:13:10.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:13:10 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 05:13:10 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4170229159' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb  2 05:13:10 np0005604790 nova_compute[252672]: 2026-02-02 10:13:10.781 252676 DEBUG oslo_concurrency.processutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 05:13:10 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1013: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 13 KiB/s wr, 28 op/s
Feb  2 05:13:10 np0005604790 nova_compute[252672]: 2026-02-02 10:13:10.988 252676 WARNING nova.virt.libvirt.driver [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 05:13:10 np0005604790 nova_compute[252672]: 2026-02-02 10:13:10.990 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4490MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb  2 05:13:10 np0005604790 nova_compute[252672]: 2026-02-02 10:13:10.990 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 05:13:10 np0005604790 nova_compute[252672]: 2026-02-02 10:13:10.991 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 05:13:11 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:13:10 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:13:11 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:13:11 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:13:11 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:13:11 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:13:11 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:13:11 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:13:11 np0005604790 nova_compute[252672]: 2026-02-02 10:13:11.116 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb  2 05:13:11 np0005604790 nova_compute[252672]: 2026-02-02 10:13:11.117 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb  2 05:13:11 np0005604790 nova_compute[252672]: 2026-02-02 10:13:11.274 252676 DEBUG nova.scheduler.client.report [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Refreshing inventories for resource provider 9e3db6fc-2145-4f13-bc7c-f7ae57d4e004 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Feb  2 05:13:11 np0005604790 nova_compute[252672]: 2026-02-02 10:13:11.332 252676 DEBUG nova.scheduler.client.report [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Updating ProviderTree inventory for provider 9e3db6fc-2145-4f13-bc7c-f7ae57d4e004 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Feb  2 05:13:11 np0005604790 nova_compute[252672]: 2026-02-02 10:13:11.332 252676 DEBUG nova.compute.provider_tree [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Updating inventory in ProviderTree for provider 9e3db6fc-2145-4f13-bc7c-f7ae57d4e004 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Feb  2 05:13:11 np0005604790 nova_compute[252672]: 2026-02-02 10:13:11.346 252676 DEBUG nova.scheduler.client.report [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Refreshing aggregate associations for resource provider 9e3db6fc-2145-4f13-bc7c-f7ae57d4e004, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Feb  2 05:13:11 np0005604790 nova_compute[252672]: 2026-02-02 10:13:11.367 252676 DEBUG nova.scheduler.client.report [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Refreshing trait associations for resource provider 9e3db6fc-2145-4f13-bc7c-f7ae57d4e004, traits: COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_AMD_SVM,COMPUTE_RESCUE_BFV,HW_CPU_X86_SSSE3,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NODE,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_ABM,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE,COMPUTE_VOLUME_EXTEND,COMPUTE_ACCELERATORS,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SHA,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_F16C,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SVM,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_IDE,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_CLMUL _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Feb  2 05:13:11 np0005604790 nova_compute[252672]: 2026-02-02 10:13:11.379 252676 DEBUG oslo_concurrency.processutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 05:13:11 np0005604790 nova_compute[252672]: 2026-02-02 10:13:11.563 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:13:11 np0005604790 nova_compute[252672]: 2026-02-02 10:13:11.572 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:13:11 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:13:11 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:13:11 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:13:11.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:13:11 np0005604790 nova_compute[252672]: 2026-02-02 10:13:11.594 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:13:11 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 05:13:11 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4161531560' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb  2 05:13:11 np0005604790 nova_compute[252672]: 2026-02-02 10:13:11.818 252676 DEBUG oslo_concurrency.processutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 05:13:11 np0005604790 nova_compute[252672]: 2026-02-02 10:13:11.825 252676 DEBUG nova.compute.provider_tree [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Inventory has not changed in ProviderTree for provider: 9e3db6fc-2145-4f13-bc7c-f7ae57d4e004 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb  2 05:13:11 np0005604790 nova_compute[252672]: 2026-02-02 10:13:11.846 252676 DEBUG nova.scheduler.client.report [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Inventory has not changed for provider 9e3db6fc-2145-4f13-bc7c-f7ae57d4e004 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb  2 05:13:11 np0005604790 nova_compute[252672]: 2026-02-02 10:13:11.873 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb  2 05:13:11 np0005604790 nova_compute[252672]: 2026-02-02 10:13:11.874 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.883s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 05:13:12 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:13:12 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:13:12 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:13:12 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:13:12.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:13:12 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1014: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 13 KiB/s wr, 28 op/s
Feb  2 05:13:13 np0005604790 podman[273159]: 2026-02-02 10:13:13.349448267 +0000 UTC m=+0.061337333 container health_status 29bea7eb4976451e3dfdb1e9a3aba30b10e64cbce131bf8d09548b4a224f8ee8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Feb  2 05:13:13 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:13:13 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:13:13 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:13:13.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:13:14 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:13:14 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:13:14 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:13:14.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:13:14 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:13:14.812 165364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=031ca08d-19ea-44b4-b1bd-33ab088eb6a6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb  2 05:13:14 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1015: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 13 KiB/s wr, 29 op/s
Feb  2 05:13:14 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:13:14] "GET /metrics HTTP/1.1" 200 48477 "" "Prometheus/2.51.0"
Feb  2 05:13:14 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:13:14] "GET /metrics HTTP/1.1" 200 48477 "" "Prometheus/2.51.0"
Feb  2 05:13:14 np0005604790 nova_compute[252672]: 2026-02-02 10:13:14.874 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 05:13:14 np0005604790 nova_compute[252672]: 2026-02-02 10:13:14.876 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 05:13:14 np0005604790 nova_compute[252672]: 2026-02-02 10:13:14.876 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 05:13:14 np0005604790 nova_compute[252672]: 2026-02-02 10:13:14.876 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb  2 05:13:15 np0005604790 nova_compute[252672]: 2026-02-02 10:13:15.282 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 05:13:15 np0005604790 nova_compute[252672]: 2026-02-02 10:13:15.283 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb  2 05:13:15 np0005604790 nova_compute[252672]: 2026-02-02 10:13:15.283 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb  2 05:13:15 np0005604790 nova_compute[252672]: 2026-02-02 10:13:15.298 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb  2 05:13:15 np0005604790 nova_compute[252672]: 2026-02-02 10:13:15.299 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
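The "Running periodic task ..." burst above, including _reclaim_queued_deletes skipping itself at 10:13:14.876 because CONF.reclaim_instance_interval <= 0, is driven by oslo.service's periodic task machinery. A self-contained sketch of the pattern; the class and option registration here are illustrative stand-ins, not Nova's actual code:

    from oslo_config import cfg
    from oslo_service import periodic_task

    CONF = cfg.CONF
    CONF.register_opts([cfg.IntOpt('reclaim_instance_interval', default=0)])

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(CONF)

        @periodic_task.periodic_task
        def _reclaim_queued_deletes(self, context):
            if CONF.reclaim_instance_interval <= 0:
                return  # the logged "skipping..." branch
            # reclaim work would go here

    # run_periodic_tasks() is what emits the "Running periodic task ..." DEBUG lines.
    Manager().run_periodic_tasks(None)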
Feb  2 05:13:15 np0005604790 nova_compute[252672]: 2026-02-02 10:13:15.371 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:13:15 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:13:15 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:13:15 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:13:15.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:13:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:13:15 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:13:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:13:15 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:13:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:13:15 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:13:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:13:16 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:13:16 np0005604790 nova_compute[252672]: 2026-02-02 10:13:16.282 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 05:13:16 np0005604790 nova_compute[252672]: 2026-02-02 10:13:16.283 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 05:13:16 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:13:16 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:13:16 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:13:16.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:13:16 np0005604790 nova_compute[252672]: 2026-02-02 10:13:16.595 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:13:16 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1016: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Feb  2 05:13:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] Optimize plan auto_2026-02-02_10:13:17
Feb  2 05:13:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 05:13:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] do_upmap
Feb  2 05:13:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] pools ['.nfs', 'cephfs.cephfs.data', 'default.rgw.meta', 'backups', 'default.rgw.control', 'volumes', '.mgr', 'images', '.rgw.root', 'vms', 'default.rgw.log', 'cephfs.cephfs.meta']
Feb  2 05:13:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 05:13:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:13:17.176Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb  2 05:13:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:13:17.176Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb  2 05:13:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:13:17.177Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
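Both webhook notifications fail the same way: the Alertmanager on this host cannot reach the Ceph dashboard's Prometheus receiver on compute-1 or compute-2 (port 8443, dial timeout), so those endpoints are down or filtered from here. For reference, Alertmanager's webhook integration POSTs a JSON body containing an "alerts" array to that URL; a throwaway receiver that would satisfy the dispatcher (path and port copied from the log, everything else illustrative):

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Receiver(BaseHTTPRequestHandler):
        def do_POST(self):
            if self.path != '/api/prometheus_receiver':
                self.send_error(404)
                return
            length = int(self.headers.get('Content-Length', 0))
            payload = json.loads(self.rfile.read(length) or b'{}')
            # Alertmanager's webhook payload carries a status plus an alerts list.
            print(payload.get('status'), len(payload.get('alerts', [])))
            self.send_response(200)
            self.end_headers()

    HTTPServer(('0.0.0.0', 8443), Receiver).serve_forever()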
Feb  2 05:13:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 05:13:17 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
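This audited command is the local mgr (mgr.compute-0.djvyfo) polling the OSD blocklist; the same pair of lines recurs at 10:13:32 and 10:13:47 below, i.e. roughly every 15 seconds. Any librados client can issue the identical mon command; a sketch using the rados Python binding (the conffile path is a deployment assumption):

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        cmd = json.dumps({"prefix": "osd blocklist ls", "format": "json"})
        ret, out, errs = cluster.mon_command(cmd, b'')  # same JSON as the audit entry
        print(ret, out.decode() or errs)
    finally:
        cluster.shutdown()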
Feb  2 05:13:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:13:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:13:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:13:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:13:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:13:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:13:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 05:13:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 05:13:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:13:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 05:13:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 05:13:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 05:13:17 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:13:17 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:13:17 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:13:17.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:13:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 05:13:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:13:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 05:13:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:13:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb  2 05:13:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:13:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:13:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:13:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:13:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:13:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Feb  2 05:13:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:13:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Feb  2 05:13:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:13:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:13:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:13:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 05:13:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:13:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Feb  2 05:13:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:13:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:13:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:13:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 05:13:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:13:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
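The pg_autoscaler block above is checkable arithmetic: each pool's pg target is its capacity ratio x bias x a cluster-wide PG budget, and the budget implied by these numbers is exactly 300 (plausibly mon_target_pg_per_osd=100 across 3 OSDs, though that split is an inference), after which the target is rounded toward a power of two. The real module also weighs the current pg_num and a change threshold before acting, which is how 'cephfs.cephfs.meta' can be quantized to 16 while staying at 32. A worked check against the logged values:

    # PG_BUDGET = 300 is inferred from the logged ratio/target pairs above.
    PG_BUDGET = 300

    def pg_target(capacity_ratio, bias):
        return capacity_ratio * bias * PG_BUDGET

    # Pool 'images': logged ratio 0.000665858301588852, bias 1.0,
    # logged target 0.19975749047665559
    print(pg_target(0.000665858301588852, 1.0))

    # Pool 'cephfs.cephfs.meta': logged ratio 5.087256625643029e-07, bias 4.0,
    # logged target 0.0006104707950771635
    print(pg_target(5.087256625643029e-07, 4.0))

    # Pool '.mgr': logged ratio 7.185749983720779e-06, bias 1.0,
    # logged target 0.0021557249951162337
    print(pg_target(7.185749983720779e-06, 1.0))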
Feb  2 05:13:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 05:13:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 05:13:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 05:13:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 05:13:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 05:13:18 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:13:18 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:13:18 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:13:18.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:13:18 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1017: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Feb  2 05:13:19 np0005604790 nova_compute[252672]: 2026-02-02 10:13:19.315 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 05:13:19 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:13:19 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.002000052s ======
Feb  2 05:13:19 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:13:19.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Feb  2 05:13:20 np0005604790 nova_compute[252672]: 2026-02-02 10:13:20.283 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 05:13:20 np0005604790 nova_compute[252672]: 2026-02-02 10:13:20.296 252676 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1770027185.295388, 51640fdb-9bb5-4927-8293-08caaa532942 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb  2 05:13:20 np0005604790 nova_compute[252672]: 2026-02-02 10:13:20.297 252676 INFO nova.compute.manager [-] [instance: 51640fdb-9bb5-4927-8293-08caaa532942] VM Stopped (Lifecycle Event)#033[00m
Feb  2 05:13:20 np0005604790 nova_compute[252672]: 2026-02-02 10:13:20.317 252676 DEBUG nova.compute.manager [None req-de31705b-1cc6-46e2-8e34-15f3cbb060f1 - - - - - -] [instance: 51640fdb-9bb5-4927-8293-08caaa532942] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb  2 05:13:20 np0005604790 nova_compute[252672]: 2026-02-02 10:13:20.375 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:13:20 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:13:20 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:13:20 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:13:20.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:13:20 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1018: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:13:21 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:13:20 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:13:21 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:13:20 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:13:21 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:13:20 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:13:21 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:13:21 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:13:21 np0005604790 nova_compute[252672]: 2026-02-02 10:13:21.282 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 05:13:21 np0005604790 nova_compute[252672]: 2026-02-02 10:13:21.283 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Feb  2 05:13:21 np0005604790 nova_compute[252672]: 2026-02-02 10:13:21.300 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Feb  2 05:13:21 np0005604790 nova_compute[252672]: 2026-02-02 10:13:21.596 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:13:21 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:13:21 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:13:21 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:13:21.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:13:22 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:13:22 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:13:22 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:13:22 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:13:22.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:13:22 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1019: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:13:23 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:13:23 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:13:23 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:13:23.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:13:24 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:13:24 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:13:24 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:13:24.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:13:24 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1020: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:13:24 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:13:24] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Feb  2 05:13:24 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:13:24] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Feb  2 05:13:25 np0005604790 nova_compute[252672]: 2026-02-02 10:13:25.282 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 05:13:25 np0005604790 nova_compute[252672]: 2026-02-02 10:13:25.283 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Feb  2 05:13:25 np0005604790 nova_compute[252672]: 2026-02-02 10:13:25.380 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:13:25 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:13:25 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:13:25 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:13:25.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:13:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:13:25 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:13:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:13:25 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:13:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:13:25 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:13:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:13:26 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:13:26 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:13:26 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:13:26 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:13:26.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:13:26 np0005604790 nova_compute[252672]: 2026-02-02 10:13:26.638 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:13:26 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1021: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:13:27 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:13:27.177Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb  2 05:13:27 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:13:27.178Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb  2 05:13:27 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:13:27.178Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb  2 05:13:27 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:13:27 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:13:27 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:13:27 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:13:27.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:13:28 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:13:28 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:13:28 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:13:28.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:13:28 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1022: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:13:29 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:13:29 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:13:29 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:13:29.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:13:30 np0005604790 nova_compute[252672]: 2026-02-02 10:13:30.424 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:13:30 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:13:30 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:13:30 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:13:30.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:13:30 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1023: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:13:31 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:13:30 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:13:31 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:13:30 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:13:31 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:13:30 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:13:31 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:13:31 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:13:31 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:13:31 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:13:31 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:13:31.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:13:31 np0005604790 nova_compute[252672]: 2026-02-02 10:13:31.640 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:13:32 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 05:13:32 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 05:13:32 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:13:32 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:13:32 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:13:32 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:13:32.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:13:32 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1024: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:13:33 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:13:33 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:13:33 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:13:33.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:13:34 np0005604790 podman[273227]: 2026-02-02 10:13:34.383394777 +0000 UTC m=+0.103929500 container health_status e39122d9482da8df802204ab6c35fb7c982874580968140e6f06bdfc8eefae36 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb  2 05:13:34 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:13:34 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:13:34 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:13:34.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:13:34 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1025: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:13:34 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:13:34] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Feb  2 05:13:34 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:13:34] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Feb  2 05:13:35 np0005604790 nova_compute[252672]: 2026-02-02 10:13:35.454 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:13:35 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:13:35 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:13:35 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:13:35.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:13:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:13:35 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:13:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:13:35 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:13:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:13:35 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:13:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:13:36 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:13:36 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:13:36 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:13:36 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:13:36.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:13:36 np0005604790 nova_compute[252672]: 2026-02-02 10:13:36.669 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:13:36 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1026: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:13:37 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:13:37.178Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb  2 05:13:37 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:13:37.179Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:13:37 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:13:37 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:13:37 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:13:37 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:13:37.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:13:38 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:13:38 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:13:38 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:13:38.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:13:38 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1027: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:13:39 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:13:39 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:13:39 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:13:39.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:13:40 np0005604790 nova_compute[252672]: 2026-02-02 10:13:40.459 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:13:40 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:13:40 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:13:40 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:13:40.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:13:40 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1028: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:13:41 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:13:41 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:13:41 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:13:41 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:13:41 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:13:41 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:13:41 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:13:41 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:13:41 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:13:41 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:13:41 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:13:41.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:13:41 np0005604790 nova_compute[252672]: 2026-02-02 10:13:41.672 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:13:42 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:13:42 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:13:42 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:13:42 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:13:42.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:13:42 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1029: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:13:43 np0005604790 ovn_controller[154631]: 2026-02-02T10:13:43Z|00091|memory_trim|INFO|Detected inactivity (last active 30006 ms ago): trimming memory
Feb  2 05:13:43 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:13:43 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:13:43 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:13:43.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:13:44 np0005604790 podman[273290]: 2026-02-02 10:13:44.336515873 +0000 UTC m=+0.055087958 container health_status 29bea7eb4976451e3dfdb1e9a3aba30b10e64cbce131bf8d09548b4a224f8ee8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 05:13:44 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:13:44 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:13:44 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:13:44.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:13:44 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1030: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:13:44 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:13:44] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Feb  2 05:13:44 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:13:44] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Feb  2 05:13:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:13:45.386 165364 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 05:13:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:13:45.386 165364 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 05:13:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:13:45.386 165364 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
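The acquire/held/released triple around ProcessMonitor._check_child_processes is oslo.concurrency's built-in lock tracing; neutron only declares the lock. A minimal sketch of the pattern (lock name copied from the log, function body illustrative):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('_check_child_processes')
    def check_child_processes():
        pass  # critical section; oslo.concurrency itself logs acquire/held/released

    check_child_processes()

    # Equivalent context-manager form:
    with lockutils.lock('_check_child_processes'):
        pass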
Feb  2 05:13:45 np0005604790 nova_compute[252672]: 2026-02-02 10:13:45.462 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:13:45 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:13:45 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:13:45 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:13:45.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:13:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:13:45 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:13:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:13:45 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:13:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:13:45 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:13:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:13:46 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:13:46 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:13:46 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:13:46 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:13:46.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:13:46 np0005604790 nova_compute[252672]: 2026-02-02 10:13:46.675 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:13:46 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1031: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:13:47 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:13:47.181Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:13:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 05:13:47 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 05:13:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:13:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:13:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:13:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:13:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:13:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:13:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:13:47 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:13:47 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:13:47 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:13:47.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:13:48 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:13:48 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:13:48 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:13:48.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:13:48 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1032: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:13:49 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:13:49 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:13:49 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:13:49.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:13:50 np0005604790 nova_compute[252672]: 2026-02-02 10:13:50.467 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:13:50 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:13:50 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:13:50 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:13:50.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:13:50 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1033: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:13:51 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:13:50 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:13:51 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:13:50 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:13:51 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:13:50 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:13:51 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:13:51 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:13:51 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:13:51 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:13:51 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:13:51.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:13:51 np0005604790 nova_compute[252672]: 2026-02-02 10:13:51.678 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:13:52 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:13:52 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:13:52 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:13:52 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:13:52.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:13:52 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1034: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:13:53 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:13:53 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:13:53 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:13:53.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:13:54 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:13:54 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:13:54 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:13:54.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:13:54 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1035: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:13:54 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:13:54] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Feb  2 05:13:54 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:13:54] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Feb  2 05:13:55 np0005604790 nova_compute[252672]: 2026-02-02 10:13:55.471 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:13:55 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:13:55 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:13:55 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:13:55.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:13:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:13:55 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:13:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:13:56 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:13:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:13:56 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:13:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:13:56 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:13:56 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:13:56 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:13:56 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:13:56.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:13:56 np0005604790 nova_compute[252672]: 2026-02-02 10:13:56.712 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:13:56 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1036: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:13:57 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:13:57.182Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:13:57 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:13:57 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:13:57 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.002000052s ======
Feb  2 05:13:57 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:13:57.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Feb  2 05:13:58 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:13:58 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:13:58 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:13:58.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:13:58 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1037: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:13:59 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:13:59 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:13:59 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:13:59.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:14:00 np0005604790 nova_compute[252672]: 2026-02-02 10:14:00.495 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:14:00 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:14:00 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:14:00 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:14:00.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:14:00 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1038: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:14:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:14:00 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:14:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:14:00 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:14:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:14:00 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:14:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:14:01 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:14:01 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:14:01 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:14:01 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:14:01.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:14:01 np0005604790 nova_compute[252672]: 2026-02-02 10:14:01.713 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:14:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 05:14:02 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 05:14:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:14:02 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:14:02 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:14:02 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:14:02.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:14:02 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1039: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:14:03 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:14:03 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:14:03 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:14:03.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:14:04 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:14:04 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:14:04 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:14:04.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:14:04 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1040: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:14:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:14:04] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Feb  2 05:14:04 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:14:04] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Feb  2 05:14:05 np0005604790 podman[273356]: 2026-02-02 10:14:05.357507389 +0000 UTC m=+0.073377963 container health_status e39122d9482da8df802204ab6c35fb7c982874580968140e6f06bdfc8eefae36 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, managed_by=edpm_ansible)
Feb  2 05:14:05 np0005604790 nova_compute[252672]: 2026-02-02 10:14:05.527 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:14:05 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:14:05 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:14:05 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:14:05.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:14:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:14:05 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:14:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:14:05 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:14:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:14:05 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:14:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:14:06 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:14:06 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:14:06 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:14:06 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:14:06.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:14:06 np0005604790 nova_compute[252672]: 2026-02-02 10:14:06.758 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:14:06 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1041: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:14:07 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:14:07.184Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb  2 05:14:07 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:14:07.184Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb  2 05:14:07 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:14:07.184Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:14:07 np0005604790 systemd-logind[793]: New session 56 of user zuul.
Feb  2 05:14:07 np0005604790 systemd[1]: Started Session 56 of User zuul.
Feb  2 05:14:07 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:14:07 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:14:07 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:14:07 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:14:07.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:14:08 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:14:08 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:14:08 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:14:08.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:14:08 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1042: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:14:09 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:14:09 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:14:09 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:14:09.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:14:09 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Feb  2 05:14:09 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:14:09 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Feb  2 05:14:09 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:14:09 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Feb  2 05:14:09 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:14:09 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Feb  2 05:14:09 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:14:09 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.26575 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:14:10 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.16905 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:14:10 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.26522 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:14:10 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Feb  2 05:14:10 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Feb  2 05:14:10 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Feb  2 05:14:10 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Feb  2 05:14:10 np0005604790 nova_compute[252672]: 2026-02-02 10:14:10.530 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:14:10 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 05:14:10 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 05:14:10 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 05:14:10 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb  2 05:14:10 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1043: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 525 B/s rd, 0 op/s
Feb  2 05:14:10 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 05:14:10 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:14:10 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Feb  2 05:14:10 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:14:10 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 05:14:10 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb  2 05:14:10 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 05:14:10 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb  2 05:14:10 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 05:14:10 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 05:14:10 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.26587 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:14:10 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.16914 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:14:10 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:14:10 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:14:10 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:14:10.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:14:10 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.26531 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:14:10 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:14:10 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:14:10 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:14:10 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:14:10 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Feb  2 05:14:10 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Feb  2 05:14:10 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb  2 05:14:10 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:14:10 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:14:10 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb  2 05:14:10 np0005604790 ceph-mon[74489]: log_channel(cluster) log [WRN] : Health check failed: 2 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Feb  2 05:14:11 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:14:10 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:14:11 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:14:10 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:14:11 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:14:10 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:14:11 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:14:11 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:14:11 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status"} v 0)
Feb  2 05:14:11 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1672189195' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Feb  2 05:14:11 np0005604790 podman[273790]: 2026-02-02 10:14:11.121044022 +0000 UTC m=+0.044184570 container create 1d333081380d4495be4c7cd87d2de4df80643049972a4b7d8088348f88dd101a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_boyd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 05:14:11 np0005604790 systemd[1]: Started libpod-conmon-1d333081380d4495be4c7cd87d2de4df80643049972a4b7d8088348f88dd101a.scope.
Feb  2 05:14:11 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:14:11 np0005604790 podman[273790]: 2026-02-02 10:14:11.101711254 +0000 UTC m=+0.024851832 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:14:11 np0005604790 podman[273790]: 2026-02-02 10:14:11.205345407 +0000 UTC m=+0.128485985 container init 1d333081380d4495be4c7cd87d2de4df80643049972a4b7d8088348f88dd101a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_boyd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb  2 05:14:11 np0005604790 podman[273790]: 2026-02-02 10:14:11.211749672 +0000 UTC m=+0.134890220 container start 1d333081380d4495be4c7cd87d2de4df80643049972a4b7d8088348f88dd101a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_boyd, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb  2 05:14:11 np0005604790 podman[273790]: 2026-02-02 10:14:11.214862143 +0000 UTC m=+0.138002691 container attach 1d333081380d4495be4c7cd87d2de4df80643049972a4b7d8088348f88dd101a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_boyd, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Feb  2 05:14:11 np0005604790 clever_boyd[273812]: 167 167
Feb  2 05:14:11 np0005604790 systemd[1]: libpod-1d333081380d4495be4c7cd87d2de4df80643049972a4b7d8088348f88dd101a.scope: Deactivated successfully.
Feb  2 05:14:11 np0005604790 podman[273790]: 2026-02-02 10:14:11.219300037 +0000 UTC m=+0.142440585 container died 1d333081380d4495be4c7cd87d2de4df80643049972a4b7d8088348f88dd101a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_boyd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb  2 05:14:11 np0005604790 systemd[1]: var-lib-containers-storage-overlay-ab7fb03238441f3d1de8033b99c617c71c1760400ce7ff846fbfb56b5d5bcd96-merged.mount: Deactivated successfully.
Feb  2 05:14:11 np0005604790 podman[273790]: 2026-02-02 10:14:11.275879767 +0000 UTC m=+0.199020355 container remove 1d333081380d4495be4c7cd87d2de4df80643049972a4b7d8088348f88dd101a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_boyd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 05:14:11 np0005604790 systemd[1]: libpod-conmon-1d333081380d4495be4c7cd87d2de4df80643049972a4b7d8088348f88dd101a.scope: Deactivated successfully.
Feb  2 05:14:11 np0005604790 podman[273856]: 2026-02-02 10:14:11.480007142 +0000 UTC m=+0.061723903 container create 24af818816d5578dc0a7da8b77364dd67a7667e5422c333dfe01f8fa8feeaeed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_brown, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Feb  2 05:14:11 np0005604790 systemd[1]: Started libpod-conmon-24af818816d5578dc0a7da8b77364dd67a7667e5422c333dfe01f8fa8feeaeed.scope.
Feb  2 05:14:11 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:14:11 np0005604790 podman[273856]: 2026-02-02 10:14:11.452908483 +0000 UTC m=+0.034625334 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:14:11 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6fe3ee1d33c91503caeaad2a0b8abf2d36353e4b09b13da8e008e40dc3e373e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 05:14:11 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6fe3ee1d33c91503caeaad2a0b8abf2d36353e4b09b13da8e008e40dc3e373e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 05:14:11 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6fe3ee1d33c91503caeaad2a0b8abf2d36353e4b09b13da8e008e40dc3e373e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 05:14:11 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6fe3ee1d33c91503caeaad2a0b8abf2d36353e4b09b13da8e008e40dc3e373e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 05:14:11 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6fe3ee1d33c91503caeaad2a0b8abf2d36353e4b09b13da8e008e40dc3e373e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 05:14:11 np0005604790 podman[273856]: 2026-02-02 10:14:11.60047624 +0000 UTC m=+0.182193001 container init 24af818816d5578dc0a7da8b77364dd67a7667e5422c333dfe01f8fa8feeaeed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_brown, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 05:14:11 np0005604790 podman[273856]: 2026-02-02 10:14:11.612185932 +0000 UTC m=+0.193902693 container start 24af818816d5578dc0a7da8b77364dd67a7667e5422c333dfe01f8fa8feeaeed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_brown, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 05:14:11 np0005604790 podman[273856]: 2026-02-02 10:14:11.615299182 +0000 UTC m=+0.197015943 container attach 24af818816d5578dc0a7da8b77364dd67a7667e5422c333dfe01f8fa8feeaeed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_brown, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Feb  2 05:14:11 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:14:11 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:14:11 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:14:11.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:14:11 np0005604790 nova_compute[252672]: 2026-02-02 10:14:11.759 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:14:11 np0005604790 ceph-mon[74489]: Health check failed: 2 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Feb  2 05:14:11 np0005604790 sweet_brown[273872]: --> passed data devices: 0 physical, 1 LVM
Feb  2 05:14:11 np0005604790 sweet_brown[273872]: --> All data devices are unavailable
Feb  2 05:14:11 np0005604790 systemd[1]: libpod-24af818816d5578dc0a7da8b77364dd67a7667e5422c333dfe01f8fa8feeaeed.scope: Deactivated successfully.
Feb  2 05:14:11 np0005604790 podman[273856]: 2026-02-02 10:14:11.981970581 +0000 UTC m=+0.563687372 container died 24af818816d5578dc0a7da8b77364dd67a7667e5422c333dfe01f8fa8feeaeed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 05:14:12 np0005604790 systemd[1]: var-lib-containers-storage-overlay-f6fe3ee1d33c91503caeaad2a0b8abf2d36353e4b09b13da8e008e40dc3e373e-merged.mount: Deactivated successfully.
Feb  2 05:14:12 np0005604790 podman[273856]: 2026-02-02 10:14:12.028101941 +0000 UTC m=+0.609818702 container remove 24af818816d5578dc0a7da8b77364dd67a7667e5422c333dfe01f8fa8feeaeed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_brown, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 05:14:12 np0005604790 systemd[1]: libpod-conmon-24af818816d5578dc0a7da8b77364dd67a7667e5422c333dfe01f8fa8feeaeed.scope: Deactivated successfully.
Feb  2 05:14:12 np0005604790 nova_compute[252672]: 2026-02-02 10:14:12.305 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:14:12 np0005604790 nova_compute[252672]: 2026-02-02 10:14:12.405 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 05:14:12 np0005604790 nova_compute[252672]: 2026-02-02 10:14:12.405 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 05:14:12 np0005604790 nova_compute[252672]: 2026-02-02 10:14:12.405 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 05:14:12 np0005604790 nova_compute[252672]: 2026-02-02 10:14:12.405 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb  2 05:14:12 np0005604790 nova_compute[252672]: 2026-02-02 10:14:12.406 252676 DEBUG oslo_concurrency.processutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb  2 05:14:12 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1044: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 525 B/s rd, 0 op/s
Feb  2 05:14:12 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:14:12 np0005604790 podman[274066]: 2026-02-02 10:14:12.592130809 +0000 UTC m=+0.054294121 container create c34e9c3e1a5e975699805cd65e7b445d6403abc4daafa76ff29d367e24dadb5d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 05:14:12 np0005604790 systemd[1]: Started libpod-conmon-c34e9c3e1a5e975699805cd65e7b445d6403abc4daafa76ff29d367e24dadb5d.scope.
Feb  2 05:14:12 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:14:12 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:14:12 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:14:12.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:14:12 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:14:12 np0005604790 podman[274066]: 2026-02-02 10:14:12.558849191 +0000 UTC m=+0.021012283 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:14:12 np0005604790 podman[274066]: 2026-02-02 10:14:12.684397029 +0000 UTC m=+0.146560201 container init c34e9c3e1a5e975699805cd65e7b445d6403abc4daafa76ff29d367e24dadb5d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_payne, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb  2 05:14:12 np0005604790 podman[274066]: 2026-02-02 10:14:12.690449235 +0000 UTC m=+0.152612317 container start c34e9c3e1a5e975699805cd65e7b445d6403abc4daafa76ff29d367e24dadb5d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_payne, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 05:14:12 np0005604790 admiring_payne[274086]: 167 167
Feb  2 05:14:12 np0005604790 systemd[1]: libpod-c34e9c3e1a5e975699805cd65e7b445d6403abc4daafa76ff29d367e24dadb5d.scope: Deactivated successfully.
Feb  2 05:14:12 np0005604790 podman[274066]: 2026-02-02 10:14:12.698329989 +0000 UTC m=+0.160493101 container attach c34e9c3e1a5e975699805cd65e7b445d6403abc4daafa76ff29d367e24dadb5d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_payne, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb  2 05:14:12 np0005604790 podman[274066]: 2026-02-02 10:14:12.698888743 +0000 UTC m=+0.161051885 container died c34e9c3e1a5e975699805cd65e7b445d6403abc4daafa76ff29d367e24dadb5d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_payne, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb  2 05:14:12 np0005604790 systemd[1]: var-lib-containers-storage-overlay-13fc654b154ab318dcce4aa152c85c9318353b11f1e4c587c4ef0086e00b720b-merged.mount: Deactivated successfully.
Feb  2 05:14:12 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 05:14:12 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2728669691' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb  2 05:14:12 np0005604790 nova_compute[252672]: 2026-02-02 10:14:12.871 252676 DEBUG oslo_concurrency.processutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 05:14:12 np0005604790 podman[274066]: 2026-02-02 10:14:12.904297042 +0000 UTC m=+0.366460164 container remove c34e9c3e1a5e975699805cd65e7b445d6403abc4daafa76ff29d367e24dadb5d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_payne, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Feb  2 05:14:12 np0005604790 systemd[1]: libpod-conmon-c34e9c3e1a5e975699805cd65e7b445d6403abc4daafa76ff29d367e24dadb5d.scope: Deactivated successfully.
Feb  2 05:14:13 np0005604790 nova_compute[252672]: 2026-02-02 10:14:13.041 252676 WARNING nova.virt.libvirt.driver [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb  2 05:14:13 np0005604790 nova_compute[252672]: 2026-02-02 10:14:13.043 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4378MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb  2 05:14:13 np0005604790 nova_compute[252672]: 2026-02-02 10:14:13.044 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 05:14:13 np0005604790 nova_compute[252672]: 2026-02-02 10:14:13.044 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 05:14:13 np0005604790 podman[274121]: 2026-02-02 10:14:13.104322312 +0000 UTC m=+0.059677631 container create 14a5fa8bed2ca9382f3bab0a3e34c929d697e44da5d239af758543c4ac064168 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_visvesvaraya, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 05:14:13 np0005604790 systemd[1]: Started libpod-conmon-14a5fa8bed2ca9382f3bab0a3e34c929d697e44da5d239af758543c4ac064168.scope.
Feb  2 05:14:13 np0005604790 podman[274121]: 2026-02-02 10:14:13.068826026 +0000 UTC m=+0.024181335 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:14:13 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:14:13 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f56678d43f23aa121f0e7cd41a080ad7e127da92219273e88b9d7154308ac8b2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 05:14:13 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f56678d43f23aa121f0e7cd41a080ad7e127da92219273e88b9d7154308ac8b2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 05:14:13 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f56678d43f23aa121f0e7cd41a080ad7e127da92219273e88b9d7154308ac8b2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 05:14:13 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f56678d43f23aa121f0e7cd41a080ad7e127da92219273e88b9d7154308ac8b2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 05:14:13 np0005604790 podman[274121]: 2026-02-02 10:14:13.18527828 +0000 UTC m=+0.140633569 container init 14a5fa8bed2ca9382f3bab0a3e34c929d697e44da5d239af758543c4ac064168 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_visvesvaraya, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 05:14:13 np0005604790 podman[274121]: 2026-02-02 10:14:13.200358549 +0000 UTC m=+0.155713868 container start 14a5fa8bed2ca9382f3bab0a3e34c929d697e44da5d239af758543c4ac064168 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_visvesvaraya, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Feb  2 05:14:13 np0005604790 podman[274121]: 2026-02-02 10:14:13.205018719 +0000 UTC m=+0.160374028 container attach 14a5fa8bed2ca9382f3bab0a3e34c929d697e44da5d239af758543c4ac064168 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_visvesvaraya, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb  2 05:14:13 np0005604790 nova_compute[252672]: 2026-02-02 10:14:13.307 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb  2 05:14:13 np0005604790 nova_compute[252672]: 2026-02-02 10:14:13.308 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb  2 05:14:13 np0005604790 nova_compute[252672]: 2026-02-02 10:14:13.335 252676 DEBUG oslo_concurrency.processutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb  2 05:14:13 np0005604790 nervous_visvesvaraya[274137]: {
Feb  2 05:14:13 np0005604790 nervous_visvesvaraya[274137]:    "1": [
Feb  2 05:14:13 np0005604790 nervous_visvesvaraya[274137]:        {
Feb  2 05:14:13 np0005604790 nervous_visvesvaraya[274137]:            "devices": [
Feb  2 05:14:13 np0005604790 nervous_visvesvaraya[274137]:                "/dev/loop3"
Feb  2 05:14:13 np0005604790 nervous_visvesvaraya[274137]:            ],
Feb  2 05:14:13 np0005604790 nervous_visvesvaraya[274137]:            "lv_name": "ceph_lv0",
Feb  2 05:14:13 np0005604790 nervous_visvesvaraya[274137]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 05:14:13 np0005604790 nervous_visvesvaraya[274137]:            "lv_size": "21470642176",
Feb  2 05:14:13 np0005604790 nervous_visvesvaraya[274137]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d241d473-9fcb-5f74-b163-f1ca4454e7f1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fabfc705-a3af-416c-81a4-3fd4d777fb5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 05:14:13 np0005604790 nervous_visvesvaraya[274137]:            "lv_uuid": "lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a",
Feb  2 05:14:13 np0005604790 nervous_visvesvaraya[274137]:            "name": "ceph_lv0",
Feb  2 05:14:13 np0005604790 nervous_visvesvaraya[274137]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 05:14:13 np0005604790 nervous_visvesvaraya[274137]:            "tags": {
Feb  2 05:14:13 np0005604790 nervous_visvesvaraya[274137]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 05:14:13 np0005604790 nervous_visvesvaraya[274137]:                "ceph.block_uuid": "lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a",
Feb  2 05:14:13 np0005604790 nervous_visvesvaraya[274137]:                "ceph.cephx_lockbox_secret": "",
Feb  2 05:14:13 np0005604790 nervous_visvesvaraya[274137]:                "ceph.cluster_fsid": "d241d473-9fcb-5f74-b163-f1ca4454e7f1",
Feb  2 05:14:13 np0005604790 nervous_visvesvaraya[274137]:                "ceph.cluster_name": "ceph",
Feb  2 05:14:13 np0005604790 nervous_visvesvaraya[274137]:                "ceph.crush_device_class": "",
Feb  2 05:14:13 np0005604790 nervous_visvesvaraya[274137]:                "ceph.encrypted": "0",
Feb  2 05:14:13 np0005604790 nervous_visvesvaraya[274137]:                "ceph.osd_fsid": "fabfc705-a3af-416c-81a4-3fd4d777fb5f",
Feb  2 05:14:13 np0005604790 nervous_visvesvaraya[274137]:                "ceph.osd_id": "1",
Feb  2 05:14:13 np0005604790 nervous_visvesvaraya[274137]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 05:14:13 np0005604790 nervous_visvesvaraya[274137]:                "ceph.type": "block",
Feb  2 05:14:13 np0005604790 nervous_visvesvaraya[274137]:                "ceph.vdo": "0",
Feb  2 05:14:13 np0005604790 nervous_visvesvaraya[274137]:                "ceph.with_tpm": "0"
Feb  2 05:14:13 np0005604790 nervous_visvesvaraya[274137]:            },
Feb  2 05:14:13 np0005604790 nervous_visvesvaraya[274137]:            "type": "block",
Feb  2 05:14:13 np0005604790 nervous_visvesvaraya[274137]:            "vg_name": "ceph_vg0"
Feb  2 05:14:13 np0005604790 nervous_visvesvaraya[274137]:        }
Feb  2 05:14:13 np0005604790 nervous_visvesvaraya[274137]:    ]
Feb  2 05:14:13 np0005604790 nervous_visvesvaraya[274137]: }
Feb  2 05:14:13 np0005604790 systemd[1]: libpod-14a5fa8bed2ca9382f3bab0a3e34c929d697e44da5d239af758543c4ac064168.scope: Deactivated successfully.
Feb  2 05:14:13 np0005604790 podman[274121]: 2026-02-02 10:14:13.494210219 +0000 UTC m=+0.449565508 container died 14a5fa8bed2ca9382f3bab0a3e34c929d697e44da5d239af758543c4ac064168 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_visvesvaraya, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 05:14:13 np0005604790 systemd[1]: var-lib-containers-storage-overlay-f56678d43f23aa121f0e7cd41a080ad7e127da92219273e88b9d7154308ac8b2-merged.mount: Deactivated successfully.
Feb  2 05:14:13 np0005604790 podman[274121]: 2026-02-02 10:14:13.589945089 +0000 UTC m=+0.545300398 container remove 14a5fa8bed2ca9382f3bab0a3e34c929d697e44da5d239af758543c4ac064168 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_visvesvaraya, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 05:14:13 np0005604790 systemd[1]: libpod-conmon-14a5fa8bed2ca9382f3bab0a3e34c929d697e44da5d239af758543c4ac064168.scope: Deactivated successfully.
Feb  2 05:14:13 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:14:13 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:14:13 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:14:13.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:14:13 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 05:14:13 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/612839699' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb  2 05:14:13 np0005604790 nova_compute[252672]: 2026-02-02 10:14:13.815 252676 DEBUG oslo_concurrency.processutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 05:14:13 np0005604790 nova_compute[252672]: 2026-02-02 10:14:13.824 252676 DEBUG nova.compute.provider_tree [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Inventory has not changed in ProviderTree for provider: 9e3db6fc-2145-4f13-bc7c-f7ae57d4e004 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb  2 05:14:13 np0005604790 nova_compute[252672]: 2026-02-02 10:14:13.842 252676 DEBUG nova.scheduler.client.report [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Inventory has not changed for provider 9e3db6fc-2145-4f13-bc7c-f7ae57d4e004 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb  2 05:14:13 np0005604790 nova_compute[252672]: 2026-02-02 10:14:13.844 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb  2 05:14:13 np0005604790 nova_compute[252672]: 2026-02-02 10:14:13.844 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.800s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 05:14:14 np0005604790 podman[274274]: 2026-02-02 10:14:14.154011319 +0000 UTC m=+0.052751152 container create 4f3ce39a9fe7e672e72bd4f768f2e408e27ffdaecbeb07eb0c9f1df8d971e35d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_dirac, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 05:14:14 np0005604790 systemd[1]: Started libpod-conmon-4f3ce39a9fe7e672e72bd4f768f2e408e27ffdaecbeb07eb0c9f1df8d971e35d.scope.
Feb  2 05:14:14 np0005604790 podman[274274]: 2026-02-02 10:14:14.12886955 +0000 UTC m=+0.027609383 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:14:14 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:14:14 np0005604790 podman[274274]: 2026-02-02 10:14:14.245331965 +0000 UTC m=+0.144071808 container init 4f3ce39a9fe7e672e72bd4f768f2e408e27ffdaecbeb07eb0c9f1df8d971e35d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Feb  2 05:14:14 np0005604790 podman[274274]: 2026-02-02 10:14:14.255872637 +0000 UTC m=+0.154612460 container start 4f3ce39a9fe7e672e72bd4f768f2e408e27ffdaecbeb07eb0c9f1df8d971e35d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_dirac, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Feb  2 05:14:14 np0005604790 elastic_dirac[274290]: 167 167
Feb  2 05:14:14 np0005604790 systemd[1]: libpod-4f3ce39a9fe7e672e72bd4f768f2e408e27ffdaecbeb07eb0c9f1df8d971e35d.scope: Deactivated successfully.
Feb  2 05:14:14 np0005604790 podman[274274]: 2026-02-02 10:14:14.261314827 +0000 UTC m=+0.160054660 container attach 4f3ce39a9fe7e672e72bd4f768f2e408e27ffdaecbeb07eb0c9f1df8d971e35d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_dirac, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 05:14:14 np0005604790 podman[274274]: 2026-02-02 10:14:14.262447296 +0000 UTC m=+0.161187129 container died 4f3ce39a9fe7e672e72bd4f768f2e408e27ffdaecbeb07eb0c9f1df8d971e35d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 05:14:14 np0005604790 systemd[1]: var-lib-containers-storage-overlay-c042239292d4ec5912aa031ac94db41301e44b1dc1f9a339e20633d5a79115a8-merged.mount: Deactivated successfully.
Feb  2 05:14:14 np0005604790 podman[274274]: 2026-02-02 10:14:14.301835612 +0000 UTC m=+0.200575405 container remove 4f3ce39a9fe7e672e72bd4f768f2e408e27ffdaecbeb07eb0c9f1df8d971e35d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_dirac, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1)
Feb  2 05:14:14 np0005604790 systemd[1]: libpod-conmon-4f3ce39a9fe7e672e72bd4f768f2e408e27ffdaecbeb07eb0c9f1df8d971e35d.scope: Deactivated successfully.
Feb  2 05:14:14 np0005604790 podman[274315]: 2026-02-02 10:14:14.474909427 +0000 UTC m=+0.062272228 container create 16b80d76547e2515e45b158bf2d3b2e48290fec24a01254b639f33ce071bab6d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_ritchie, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True)
Feb  2 05:14:14 np0005604790 systemd[1]: Started libpod-conmon-16b80d76547e2515e45b158bf2d3b2e48290fec24a01254b639f33ce071bab6d.scope.
Feb  2 05:14:14 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1045: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 788 B/s rd, 0 op/s
Feb  2 05:14:14 np0005604790 podman[274315]: 2026-02-02 10:14:14.448714091 +0000 UTC m=+0.036076932 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:14:14 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:14:14 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c36f5b9b84f8d03e006b75fff2b6d74c2903d31f6fbd9e26287317477e4da9bd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 05:14:14 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c36f5b9b84f8d03e006b75fff2b6d74c2903d31f6fbd9e26287317477e4da9bd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 05:14:14 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c36f5b9b84f8d03e006b75fff2b6d74c2903d31f6fbd9e26287317477e4da9bd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 05:14:14 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c36f5b9b84f8d03e006b75fff2b6d74c2903d31f6fbd9e26287317477e4da9bd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 05:14:14 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:14:14 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:14:14 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:14:14.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:14:14 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:14:14] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Feb  2 05:14:14 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:14:14] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Feb  2 05:14:14 np0005604790 podman[274315]: 2026-02-02 10:14:14.960627786 +0000 UTC m=+0.547990597 container init 16b80d76547e2515e45b158bf2d3b2e48290fec24a01254b639f33ce071bab6d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_ritchie, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 05:14:14 np0005604790 podman[274315]: 2026-02-02 10:14:14.970653755 +0000 UTC m=+0.558016546 container start 16b80d76547e2515e45b158bf2d3b2e48290fec24a01254b639f33ce071bab6d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 05:14:14 np0005604790 podman[274315]: 2026-02-02 10:14:14.974514604 +0000 UTC m=+0.561877405 container attach 16b80d76547e2515e45b158bf2d3b2e48290fec24a01254b639f33ce071bab6d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_ritchie, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Feb  2 05:14:14 np0005604790 podman[274330]: 2026-02-02 10:14:14.981378281 +0000 UTC m=+0.463429185 container health_status 29bea7eb4976451e3dfdb1e9a3aba30b10e64cbce131bf8d09548b4a224f8ee8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb  2 05:14:15 np0005604790 nova_compute[252672]: 2026-02-02 10:14:15.573 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:14:15 np0005604790 lvm[274440]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 05:14:15 np0005604790 lvm[274440]: VG ceph_vg0 finished
Feb  2 05:14:15 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:14:15 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:14:15 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:14:15.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:14:15 np0005604790 dreamy_ritchie[274338]: {}
Feb  2 05:14:15 np0005604790 systemd[1]: libpod-16b80d76547e2515e45b158bf2d3b2e48290fec24a01254b639f33ce071bab6d.scope: Deactivated successfully.
Feb  2 05:14:15 np0005604790 systemd[1]: libpod-16b80d76547e2515e45b158bf2d3b2e48290fec24a01254b639f33ce071bab6d.scope: Consumed 1.137s CPU time.
Feb  2 05:14:15 np0005604790 podman[274315]: 2026-02-02 10:14:15.710766146 +0000 UTC m=+1.298128947 container died 16b80d76547e2515e45b158bf2d3b2e48290fec24a01254b639f33ce071bab6d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_ritchie, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Feb  2 05:14:15 np0005604790 systemd[1]: var-lib-containers-storage-overlay-c36f5b9b84f8d03e006b75fff2b6d74c2903d31f6fbd9e26287317477e4da9bd-merged.mount: Deactivated successfully.
Feb  2 05:14:15 np0005604790 podman[274315]: 2026-02-02 10:14:15.778670947 +0000 UTC m=+1.366033738 container remove 16b80d76547e2515e45b158bf2d3b2e48290fec24a01254b639f33ce071bab6d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb  2 05:14:15 np0005604790 systemd[1]: libpod-conmon-16b80d76547e2515e45b158bf2d3b2e48290fec24a01254b639f33ce071bab6d.scope: Deactivated successfully.
Feb  2 05:14:15 np0005604790 nova_compute[252672]: 2026-02-02 10:14:15.821 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:14:15 np0005604790 nova_compute[252672]: 2026-02-02 10:14:15.822 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb  2 05:14:15 np0005604790 nova_compute[252672]: 2026-02-02 10:14:15.822 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb  2 05:14:15 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 05:14:15 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:14:15 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 05:14:15 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:14:15 np0005604790 nova_compute[252672]: 2026-02-02 10:14:15.890 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb  2 05:14:15 np0005604790 nova_compute[252672]: 2026-02-02 10:14:15.890 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:14:15 np0005604790 nova_compute[252672]: 2026-02-02 10:14:15.891 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:14:15 np0005604790 nova_compute[252672]: 2026-02-02 10:14:15.891 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:14:15 np0005604790 nova_compute[252672]: 2026-02-02 10:14:15.891 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb  2 05:14:15 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:14:15 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:14:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:14:15 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:14:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:14:15 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:14:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:14:15 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:14:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:14:16 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:14:16 np0005604790 nova_compute[252672]: 2026-02-02 10:14:16.282 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:14:16 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1046: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 525 B/s rd, 0 op/s
Feb  2 05:14:16 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:14:16 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:14:16 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:14:16.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:14:16 np0005604790 nova_compute[252672]: 2026-02-02 10:14:16.799 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:14:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] Optimize plan auto_2026-02-02_10:14:17
Feb  2 05:14:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 05:14:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] do_upmap
Feb  2 05:14:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.log', 'backups', 'cephfs.cephfs.meta', 'vms', '.nfs', 'volumes', 'default.rgw.meta', '.rgw.root', '.mgr', 'images', 'default.rgw.control']
Feb  2 05:14:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 05:14:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:14:17.186Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:14:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Feb  2 05:14:17 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:14:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 05:14:17 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 05:14:17 np0005604790 nova_compute[252672]: 2026-02-02 10:14:17.281 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:14:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:14:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:14:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:14:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:14:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:14:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:14:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 05:14:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 05:14:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 05:14:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 05:14:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 05:14:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:14:17 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:14:17 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:14:17 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:14:17.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:14:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 05:14:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:14:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 05:14:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:14:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb  2 05:14:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:14:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:14:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:14:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:14:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:14:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Feb  2 05:14:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:14:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Feb  2 05:14:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:14:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:14:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:14:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 05:14:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:14:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Feb  2 05:14:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:14:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:14:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:14:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 05:14:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:14:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb  2 05:14:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 05:14:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 05:14:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 05:14:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 05:14:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 05:14:18 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:14:18 np0005604790 ovs-vsctl[274529]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Feb  2 05:14:18 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1047: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 788 B/s rd, 0 op/s
Feb  2 05:14:18 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:14:18 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000025s ======
Feb  2 05:14:18 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:14:18.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Feb  2 05:14:18 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.26626 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:14:19 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.26558 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:14:19 np0005604790 virtqemud[252362]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Feb  2 05:14:19 np0005604790 virtqemud[252362]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Feb  2 05:14:19 np0005604790 virtqemud[252362]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Feb  2 05:14:19 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Feb  2 05:14:19 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Feb  2 05:14:19 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.26638 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:14:19 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Feb  2 05:14:19 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Feb  2 05:14:19 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.26579 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:14:19 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.26653 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:14:19 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:14:19 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:14:19 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:14:19.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:14:19 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.26594 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:14:19 np0005604790 ceph-mds[96761]: mds.cephfs.compute-0.clmmzw asok_command: cache status {prefix=cache status} (starting...)
Feb  2 05:14:19 np0005604790 ceph-mds[96761]: mds.cephfs.compute-0.clmmzw Can't run that command on an inactive MDS!
Feb  2 05:14:19 np0005604790 lvm[274858]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 05:14:19 np0005604790 lvm[274858]: VG ceph_vg0 finished
Feb  2 05:14:19 np0005604790 ceph-mds[96761]: mds.cephfs.compute-0.clmmzw asok_command: client ls {prefix=client ls} (starting...)
Feb  2 05:14:19 np0005604790 ceph-mds[96761]: mds.cephfs.compute-0.clmmzw Can't run that command on an inactive MDS!
Feb  2 05:14:20 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.26668 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:14:20 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.26621 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:14:20 np0005604790 nova_compute[252672]: 2026-02-02 10:14:20.278 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:14:20 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.16980 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:14:20 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1048: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 525 B/s rd, 0 op/s
Feb  2 05:14:20 np0005604790 ceph-mds[96761]: mds.cephfs.compute-0.clmmzw asok_command: damage ls {prefix=damage ls} (starting...)
Feb  2 05:14:20 np0005604790 ceph-mds[96761]: mds.cephfs.compute-0.clmmzw Can't run that command on an inactive MDS!
Feb  2 05:14:20 np0005604790 nova_compute[252672]: 2026-02-02 10:14:20.575 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:14:20 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Feb  2 05:14:20 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3367092476' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Feb  2 05:14:20 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:14:20 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:14:20 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:14:20.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:14:20 np0005604790 ceph-mds[96761]: mds.cephfs.compute-0.clmmzw asok_command: dump loads {prefix=dump loads} (starting...)
Feb  2 05:14:20 np0005604790 ceph-mds[96761]: mds.cephfs.compute-0.clmmzw Can't run that command on an inactive MDS!
Feb  2 05:14:20 np0005604790 ceph-mds[96761]: mds.cephfs.compute-0.clmmzw asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Feb  2 05:14:20 np0005604790 ceph-mds[96761]: mds.cephfs.compute-0.clmmzw Can't run that command on an inactive MDS!
Feb  2 05:14:20 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.26701 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:14:20 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.26663 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:14:20 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.17001 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:14:20 np0005604790 ceph-mds[96761]: mds.cephfs.compute-0.clmmzw asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Feb  2 05:14:20 np0005604790 ceph-mds[96761]: mds.cephfs.compute-0.clmmzw Can't run that command on an inactive MDS!
Feb  2 05:14:20 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 05:14:20 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1271815499' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 05:14:21 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:14:20 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:14:21 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:14:20 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:14:21 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:14:20 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:14:21 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:14:21 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:14:21 np0005604790 ceph-mds[96761]: mds.cephfs.compute-0.clmmzw asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Feb  2 05:14:21 np0005604790 ceph-mds[96761]: mds.cephfs.compute-0.clmmzw Can't run that command on an inactive MDS!
Feb  2 05:14:21 np0005604790 ceph-mds[96761]: mds.cephfs.compute-0.clmmzw asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Feb  2 05:14:21 np0005604790 ceph-mds[96761]: mds.cephfs.compute-0.clmmzw Can't run that command on an inactive MDS!
Feb  2 05:14:21 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.17019 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:14:21 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.26722 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:14:21 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.17028 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:14:21 np0005604790 nova_compute[252672]: 2026-02-02 10:14:21.281 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:14:21 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config log"} v 0)
Feb  2 05:14:21 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2174409702' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Feb  2 05:14:21 np0005604790 ceph-mds[96761]: mds.cephfs.compute-0.clmmzw asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Feb  2 05:14:21 np0005604790 ceph-mds[96761]: mds.cephfs.compute-0.clmmzw Can't run that command on an inactive MDS!
Feb  2 05:14:21 np0005604790 ceph-mds[96761]: mds.cephfs.compute-0.clmmzw asok_command: get subtrees {prefix=get subtrees} (starting...)
Feb  2 05:14:21 np0005604790 ceph-mds[96761]: mds.cephfs.compute-0.clmmzw Can't run that command on an inactive MDS!
Feb  2 05:14:21 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Feb  2 05:14:21 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Feb  2 05:14:21 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Feb  2 05:14:21 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Feb  2 05:14:21 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:14:21 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:14:21 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:14:21.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:14:21 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.17052 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:14:21 np0005604790 ceph-mds[96761]: mds.cephfs.compute-0.clmmzw asok_command: ops {prefix=ops} (starting...)
Feb  2 05:14:21 np0005604790 ceph-mds[96761]: mds.cephfs.compute-0.clmmzw Can't run that command on an inactive MDS!
Feb  2 05:14:21 np0005604790 nova_compute[252672]: 2026-02-02 10:14:21.800 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:14:21 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config-key dump"} v 0)
Feb  2 05:14:21 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2426249143' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Feb  2 05:14:22 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Feb  2 05:14:22 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2835134920' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Feb  2 05:14:22 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.17085 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:14:22 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.26800 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:14:22 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T10:14:22.398+0000 7f38b3f31640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Feb  2 05:14:22 np0005604790 ceph-mgr[74785]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Feb  2 05:14:22 np0005604790 ceph-mds[96761]: mds.cephfs.compute-0.clmmzw asok_command: session ls {prefix=session ls} (starting...)
Feb  2 05:14:22 np0005604790 ceph-mds[96761]: mds.cephfs.compute-0.clmmzw Can't run that command on an inactive MDS!
Feb  2 05:14:22 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.26765 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:14:22 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T10:14:22.514+0000 7f38b3f31640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Feb  2 05:14:22 np0005604790 ceph-mgr[74785]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Feb  2 05:14:22 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1049: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:14:22 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:14:22 np0005604790 ceph-mds[96761]: mds.cephfs.compute-0.clmmzw asok_command: status {prefix=status} (starting...)
Feb  2 05:14:22 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.17106 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:14:22 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Feb  2 05:14:22 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/592185051' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Feb  2 05:14:22 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:14:22 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:14:22 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:14:22.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:14:23 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Feb  2 05:14:23 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1350398264' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Feb  2 05:14:23 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Feb  2 05:14:23 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2712376753' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Feb  2 05:14:23 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.26851 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:14:23 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Feb  2 05:14:23 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4269553776' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Feb  2 05:14:23 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.26831 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:14:23 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Feb  2 05:14:23 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1147547086' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Feb  2 05:14:23 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:14:23 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:14:23 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:14:23.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:14:23 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.26875 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:14:23 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.26858 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:14:23 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.17169 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:14:23 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T10:14:23.945+0000 7f38b3f31640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Feb  2 05:14:23 np0005604790 ceph-mgr[74785]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Feb  2 05:14:23 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Feb  2 05:14:23 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1407366583' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Feb  2 05:14:24 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.26893 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:14:24 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.26882 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:14:24 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Feb  2 05:14:24 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2678781036' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Feb  2 05:14:24 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat"} v 0)
Feb  2 05:14:24 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2747034242' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Feb  2 05:14:24 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[104673]: logger=infra.usagestats t=2026-02-02T10:14:24.511979657Z level=info msg="Usage stats are ready to report"
Feb  2 05:14:24 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.26926 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:14:24 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1050: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:14:24 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:14:24 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:14:24 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:14:24.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:14:24 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.26897 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:14:24 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Feb  2 05:14:24 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1332516381' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Feb  2 05:14:24 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:14:24] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Feb  2 05:14:24 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:14:24] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Feb  2 05:14:24 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Feb  2 05:14:24 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2741009425' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Feb  2 05:14:24 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.26953 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:14:25 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.26912 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:14:25 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.17241 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:14:25 np0005604790 nova_compute[252672]: 2026-02-02 10:14:25.277 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:14:25 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Feb  2 05:14:25 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1525163689' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Feb  2 05:14:25 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.26974 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:14:25 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.26936 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:14:25 np0005604790 nova_compute[252672]: 2026-02-02 10:14:25.577 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:14:25 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.17265 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:14:25 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:14:25 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:14:25 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:14:25.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:14:25 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.26995 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:14:25 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Feb  2 05:14:25 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3508290189' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Feb  2 05:14:25 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.26954 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:14:25 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.17289 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 116 pg[9.16( v 44'1041 (0'0,44'1041] lb MIN local-lis/les=113/114 n=4 ec=57/38 lis/c=113/74 les/c/f=114/75/0 sis=115) [2] r=-1 lpr=115 DELETING pi=[74,115)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.034094 2 0.000334
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 116 pg[9.16( v 44'1041 (0'0,44'1041] lb MIN local-lis/les=113/114 n=4 ec=57/38 lis/c=113/74 les/c/f=114/75/0 sis=115) [2] r=-1 lpr=115 pi=[74,115)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.034329 0 0.000000
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 116 pg[9.16( v 44'1041 (0'0,44'1041] lb MIN local-lis/les=113/114 n=4 ec=57/38 lis/c=113/74 les/c/f=114/75/0 sis=115) [2] r=-1 lpr=115 pi=[74,115)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.086308 0 0.000000
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73302016 unmapped: 57344 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 116 heartbeat osd_stat(store_statfs(0x4fca4c000/0x0/0x4ffc00000, data 0x133264/0x1cd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73310208 unmapped: 49152 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 40960 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 876590 data_alloc: 218103808 data_used: 172032
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 116 handle_osd_map epochs [117,117], i have 116, src has [1,117]
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73334784 unmapped: 24576 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 117 handle_osd_map epochs [117,118], i have 117, src has [1,118]
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 0 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 118 heartbeat osd_stat(store_statfs(0x4fca48000/0x0/0x4ffc00000, data 0x137324/0x1d3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 118 handle_osd_map epochs [119,119], i have 118, src has [1,119]
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 118 handle_osd_map epochs [119,119], i have 119, src has [1,119]
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 119 pg[9.1a( v 44'1041 (0'0,44'1041] local-lis/les=84/85 n=4 ec=57/38 lis/c=84/84 les/c/f=85/85/0 sis=84) [1] r=0 lpr=84 crt=44'1041 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 48.898486 107 0.000788
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 119 pg[9.1a( v 44'1041 (0'0,44'1041] local-lis/les=84/85 n=4 ec=57/38 lis/c=84/84 les/c/f=85/85/0 sis=84) [1] r=0 lpr=84 crt=44'1041 mlcod 0'0 active mbc={}] exit Started/Primary/Active 48.908638 0 0.000000
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 119 pg[9.1a( v 44'1041 (0'0,44'1041] local-lis/les=84/85 n=4 ec=57/38 lis/c=84/84 les/c/f=85/85/0 sis=84) [1] r=0 lpr=84 crt=44'1041 mlcod 0'0 active mbc={}] exit Started/Primary 49.930474 0 0.000000
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 119 pg[9.1a( v 44'1041 (0'0,44'1041] local-lis/les=84/85 n=4 ec=57/38 lis/c=84/84 les/c/f=85/85/0 sis=84) [1] r=0 lpr=84 crt=44'1041 mlcod 0'0 active mbc={}] exit Started 49.930532 0 0.000000
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 119 pg[9.1a( v 44'1041 (0'0,44'1041] local-lis/les=84/85 n=4 ec=57/38 lis/c=84/84 les/c/f=85/85/0 sis=84) [1] r=0 lpr=84 crt=44'1041 mlcod 0'0 active mbc={}] enter Reset
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 119 pg[9.1a( v 44'1041 (0'0,44'1041] local-lis/les=84/85 n=4 ec=57/38 lis/c=84/84 les/c/f=85/85/0 sis=119 pruub=15.102936745s) [0] r=-1 lpr=119 pi=[84,119)/1 crt=44'1041 mlcod 0'0 active pruub 263.633850098s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 119 pg[9.1a( v 44'1041 (0'0,44'1041] local-lis/les=84/85 n=4 ec=57/38 lis/c=84/84 les/c/f=85/85/0 sis=119 pruub=15.102888107s) [0] r=-1 lpr=119 pi=[84,119)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY pruub 263.633850098s@ mbc={}] exit Reset 0.000101 1 0.000210
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 119 pg[9.1a( v 44'1041 (0'0,44'1041] local-lis/les=84/85 n=4 ec=57/38 lis/c=84/84 les/c/f=85/85/0 sis=119 pruub=15.102888107s) [0] r=-1 lpr=119 pi=[84,119)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY pruub 263.633850098s@ mbc={}] enter Started
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 119 pg[9.1a( v 44'1041 (0'0,44'1041] local-lis/les=84/85 n=4 ec=57/38 lis/c=84/84 les/c/f=85/85/0 sis=119 pruub=15.102888107s) [0] r=-1 lpr=119 pi=[84,119)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY pruub 263.633850098s@ mbc={}] enter Start
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 119 pg[9.1a( v 44'1041 (0'0,44'1041] local-lis/les=84/85 n=4 ec=57/38 lis/c=84/84 les/c/f=85/85/0 sis=119 pruub=15.102888107s) [0] r=-1 lpr=119 pi=[84,119)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY pruub 263.633850098s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 119 pg[9.1a( v 44'1041 (0'0,44'1041] local-lis/les=84/85 n=4 ec=57/38 lis/c=84/84 les/c/f=85/85/0 sis=119 pruub=15.102888107s) [0] r=-1 lpr=119 pi=[84,119)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY pruub 263.633850098s@ mbc={}] exit Start 0.000016 0 0.000000
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 119 pg[9.1a( v 44'1041 (0'0,44'1041] local-lis/les=84/85 n=4 ec=57/38 lis/c=84/84 les/c/f=85/85/0 sis=119 pruub=15.102888107s) [0] r=-1 lpr=119 pi=[84,119)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY pruub 263.633850098s@ mbc={}] enter Started/Stray
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73367552 unmapped: 1040384 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 119 handle_osd_map epochs [120,120], i have 119, src has [1,120]
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 120 pg[9.1a( v 44'1041 (0'0,44'1041] local-lis/les=84/85 n=4 ec=57/38 lis/c=84/84 les/c/f=85/85/0 sis=119) [0] r=-1 lpr=119 pi=[84,119)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.043505 3 0.000080
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 120 pg[9.1a( v 44'1041 (0'0,44'1041] local-lis/les=84/85 n=4 ec=57/38 lis/c=84/84 les/c/f=85/85/0 sis=119) [0] r=-1 lpr=119 pi=[84,119)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.043638 0 0.000000
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 120 pg[9.1a( v 44'1041 (0'0,44'1041] local-lis/les=84/85 n=4 ec=57/38 lis/c=84/84 les/c/f=85/85/0 sis=119) [0] r=-1 lpr=119 pi=[84,119)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 120 pg[9.1a( v 44'1041 (0'0,44'1041] local-lis/les=84/85 n=4 ec=57/38 lis/c=84/84 les/c/f=85/85/0 sis=120) [0]/[1] r=0 lpr=120 pi=[84,120)/1 crt=44'1041 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 120 pg[9.1a( v 44'1041 (0'0,44'1041] local-lis/les=84/85 n=4 ec=57/38 lis/c=84/84 les/c/f=85/85/0 sis=120) [0]/[1] r=0 lpr=120 pi=[84,120)/1 crt=44'1041 mlcod 0'0 remapped mbc={}] exit Reset 0.000108 1 0.000239
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 120 pg[9.1a( v 44'1041 (0'0,44'1041] local-lis/les=84/85 n=4 ec=57/38 lis/c=84/84 les/c/f=85/85/0 sis=120) [0]/[1] r=0 lpr=120 pi=[84,120)/1 crt=44'1041 mlcod 0'0 remapped mbc={}] enter Started
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 120 pg[9.1a( v 44'1041 (0'0,44'1041] local-lis/les=84/85 n=4 ec=57/38 lis/c=84/84 les/c/f=85/85/0 sis=120) [0]/[1] r=0 lpr=120 pi=[84,120)/1 crt=44'1041 mlcod 0'0 remapped mbc={}] enter Start
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 120 pg[9.1a( v 44'1041 (0'0,44'1041] local-lis/les=84/85 n=4 ec=57/38 lis/c=84/84 les/c/f=85/85/0 sis=120) [0]/[1] r=0 lpr=120 pi=[84,120)/1 crt=44'1041 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 120 pg[9.1a( v 44'1041 (0'0,44'1041] local-lis/les=84/85 n=4 ec=57/38 lis/c=84/84 les/c/f=85/85/0 sis=120) [0]/[1] r=0 lpr=120 pi=[84,120)/1 crt=44'1041 mlcod 0'0 remapped mbc={}] exit Start 0.000012 0 0.000000
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 120 pg[9.1a( v 44'1041 (0'0,44'1041] local-lis/les=84/85 n=4 ec=57/38 lis/c=84/84 les/c/f=85/85/0 sis=120) [0]/[1] r=0 lpr=120 pi=[84,120)/1 crt=44'1041 mlcod 0'0 remapped mbc={}] enter Started/Primary
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 120 pg[9.1a( v 44'1041 (0'0,44'1041] local-lis/les=84/85 n=4 ec=57/38 lis/c=84/84 les/c/f=85/85/0 sis=120) [0]/[1] r=0 lpr=120 pi=[84,120)/1 crt=44'1041 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 120 pg[9.1a( v 44'1041 (0'0,44'1041] local-lis/les=84/85 n=4 ec=57/38 lis/c=84/84 les/c/f=85/85/0 sis=120) [0]/[1] r=0 lpr=120 pi=[84,120)/1 crt=44'1041 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 120 pg[9.1a( v 44'1041 (0'0,44'1041] local-lis/les=84/85 n=4 ec=57/38 lis/c=84/84 les/c/f=85/85/0 sis=120) [0]/[1] r=0 lpr=120 pi=[84,120)/1 crt=44'1041 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000076 1 0.000085
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 120 pg[9.1a( v 44'1041 (0'0,44'1041] local-lis/les=84/85 n=4 ec=57/38 lis/c=84/84 les/c/f=85/85/0 sis=120) [0]/[1] r=0 lpr=120 pi=[84,120)/1 crt=44'1041 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 120 pg[9.1a( v 44'1041 (0'0,44'1041] local-lis/les=84/85 n=4 ec=57/38 lis/c=84/84 les/c/f=85/85/0 sis=120) [0]/[1] async=[0] r=0 lpr=120 pi=[84,120)/1 crt=44'1041 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000041 0 0.000000
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 120 pg[9.1a( v 44'1041 (0'0,44'1041] local-lis/les=84/85 n=4 ec=57/38 lis/c=84/84 les/c/f=85/85/0 sis=120) [0]/[1] async=[0] r=0 lpr=120 pi=[84,120)/1 crt=44'1041 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 120 pg[9.1a( v 44'1041 (0'0,44'1041] local-lis/les=84/85 n=4 ec=57/38 lis/c=84/84 les/c/f=85/85/0 sis=120) [0]/[1] async=[0] r=0 lpr=120 pi=[84,120)/1 crt=44'1041 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000017 0 0.000000
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 120 pg[9.1a( v 44'1041 (0'0,44'1041] local-lis/les=84/85 n=4 ec=57/38 lis/c=84/84 les/c/f=85/85/0 sis=120) [0]/[1] async=[0] r=0 lpr=120 pi=[84,120)/1 crt=44'1041 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 1015808 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 120 handle_osd_map epochs [120,121], i have 120, src has [1,121]
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.191261292s of 10.377916336s, submitted: 37
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 120 handle_osd_map epochs [121,121], i have 121, src has [1,121]
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 121 pg[9.1a( v 44'1041 (0'0,44'1041] local-lis/les=84/85 n=4 ec=57/38 lis/c=84/84 les/c/f=85/85/0 sis=120) [0]/[1] async=[0] r=0 lpr=120 pi=[84,120)/1 crt=44'1041 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.977580 4 0.000134
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 121 pg[9.1a( v 44'1041 (0'0,44'1041] local-lis/les=84/85 n=4 ec=57/38 lis/c=84/84 les/c/f=85/85/0 sis=120) [0]/[1] async=[0] r=0 lpr=120 pi=[84,120)/1 crt=44'1041 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.977872 0 0.000000
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 121 pg[9.1a( v 44'1041 (0'0,44'1041] local-lis/les=84/85 n=4 ec=57/38 lis/c=84/84 les/c/f=85/85/0 sis=120) [0]/[1] async=[0] r=0 lpr=120 pi=[84,120)/1 crt=44'1041 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 121 pg[9.1a( v 44'1041 (0'0,44'1041] local-lis/les=120/121 n=4 ec=57/38 lis/c=84/84 les/c/f=85/85/0 sis=120) [0]/[1] async=[0] r=0 lpr=120 pi=[84,120)/1 crt=44'1041 mlcod 0'0 activating+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Activating
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 121 pg[9.1a( v 44'1041 (0'0,44'1041] local-lis/les=120/121 n=4 ec=57/38 lis/c=84/84 les/c/f=85/85/0 sis=120) [0]/[1] async=[0] r=0 lpr=120 pi=[84,120)/1 crt=44'1041 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 121 pg[9.1a( v 44'1041 (0'0,44'1041] local-lis/les=120/121 n=4 ec=57/38 lis/c=120/84 les/c/f=121/85/0 sis=120) [0]/[1] async=[0] r=0 lpr=120 pi=[84,120)/1 crt=44'1041 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/Activating 0.008209 5 0.000504
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 121 pg[9.1a( v 44'1041 (0'0,44'1041] local-lis/les=120/121 n=4 ec=57/38 lis/c=120/84 les/c/f=121/85/0 sis=120) [0]/[1] async=[0] r=0 lpr=120 pi=[84,120)/1 crt=44'1041 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 121 pg[9.1a( v 44'1041 (0'0,44'1041] local-lis/les=120/121 n=4 ec=57/38 lis/c=120/84 les/c/f=121/85/0 sis=120) [0]/[1] async=[0] r=0 lpr=120 pi=[84,120)/1 crt=44'1041 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000156 1 0.000151
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 121 pg[9.1a( v 44'1041 (0'0,44'1041] local-lis/les=120/121 n=4 ec=57/38 lis/c=120/84 les/c/f=121/85/0 sis=120) [0]/[1] async=[0] r=0 lpr=120 pi=[84,120)/1 crt=44'1041 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 121 pg[9.1a( v 44'1041 (0'0,44'1041] local-lis/les=120/121 n=4 ec=57/38 lis/c=120/84 les/c/f=121/85/0 sis=120) [0]/[1] async=[0] r=0 lpr=120 pi=[84,120)/1 crt=44'1041 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000550 1 0.000081
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 121 pg[9.1a( v 44'1041 (0'0,44'1041] local-lis/les=120/121 n=4 ec=57/38 lis/c=120/84 les/c/f=121/85/0 sis=120) [0]/[1] async=[0] r=0 lpr=120 pi=[84,120)/1 crt=44'1041 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Recovering
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 121 pg[9.1a( v 44'1041 (0'0,44'1041] local-lis/les=120/121 n=4 ec=57/38 lis/c=120/84 les/c/f=121/85/0 sis=120) [0]/[1] async=[0] r=0 lpr=120 pi=[84,120)/1 crt=44'1041 mlcod 44'1041 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.044873 2 0.000098
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 121 pg[9.1a( v 44'1041 (0'0,44'1041] local-lis/les=120/121 n=4 ec=57/38 lis/c=120/84 les/c/f=121/85/0 sis=120) [0]/[1] async=[0] r=0 lpr=120 pi=[84,120)/1 crt=44'1041 mlcod 44'1041 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 121 handle_osd_map epochs [121,122], i have 121, src has [1,122]
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 122 pg[9.1a( v 44'1041 (0'0,44'1041] local-lis/les=120/121 n=4 ec=57/38 lis/c=120/84 les/c/f=121/85/0 sis=120) [0]/[1] async=[0] r=0 lpr=120 pi=[84,120)/1 crt=44'1041 mlcod 44'1041 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.025705 1 0.000146
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 122 pg[9.1a( v 44'1041 (0'0,44'1041] local-lis/les=120/121 n=4 ec=57/38 lis/c=120/84 les/c/f=121/85/0 sis=120) [0]/[1] async=[0] r=0 lpr=120 pi=[84,120)/1 crt=44'1041 mlcod 44'1041 active+remapped mbc={255={}}] exit Started/Primary/Active 0.079842 0 0.000000
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 122 pg[9.1a( v 44'1041 (0'0,44'1041] local-lis/les=120/121 n=4 ec=57/38 lis/c=120/84 les/c/f=121/85/0 sis=120) [0]/[1] async=[0] r=0 lpr=120 pi=[84,120)/1 crt=44'1041 mlcod 44'1041 active+remapped mbc={255={}}] exit Started/Primary 1.057745 0 0.000000
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 122 pg[9.1a( v 44'1041 (0'0,44'1041] local-lis/les=120/121 n=4 ec=57/38 lis/c=120/84 les/c/f=121/85/0 sis=120) [0]/[1] async=[0] r=0 lpr=120 pi=[84,120)/1 crt=44'1041 mlcod 44'1041 active+remapped mbc={255={}}] exit Started 1.057793 0 0.000000
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 122 pg[9.1a( v 44'1041 (0'0,44'1041] local-lis/les=120/121 n=4 ec=57/38 lis/c=120/84 les/c/f=121/85/0 sis=120) [0]/[1] async=[0] r=0 lpr=120 pi=[84,120)/1 crt=44'1041 mlcod 44'1041 active+remapped mbc={255={}}] enter Reset
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 122 pg[9.1a( v 44'1041 (0'0,44'1041] local-lis/les=120/121 n=4 ec=57/38 lis/c=120/84 les/c/f=121/85/0 sis=122 pruub=15.928081512s) [0] async=[0] r=-1 lpr=122 pi=[84,122)/1 crt=44'1041 mlcod 44'1041 active pruub 266.560791016s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 122 pg[9.1a( v 44'1041 (0'0,44'1041] local-lis/les=120/121 n=4 ec=57/38 lis/c=120/84 les/c/f=121/85/0 sis=122 pruub=15.927985191s) [0] r=-1 lpr=122 pi=[84,122)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY pruub 266.560791016s@ mbc={}] exit Reset 0.000183 1 0.000316
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 122 pg[9.1a( v 44'1041 (0'0,44'1041] local-lis/les=120/121 n=4 ec=57/38 lis/c=120/84 les/c/f=121/85/0 sis=122 pruub=15.927985191s) [0] r=-1 lpr=122 pi=[84,122)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY pruub 266.560791016s@ mbc={}] enter Started
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 122 pg[9.1a( v 44'1041 (0'0,44'1041] local-lis/les=120/121 n=4 ec=57/38 lis/c=120/84 les/c/f=121/85/0 sis=122 pruub=15.927985191s) [0] r=-1 lpr=122 pi=[84,122)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY pruub 266.560791016s@ mbc={}] enter Start
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 122 pg[9.1a( v 44'1041 (0'0,44'1041] local-lis/les=120/121 n=4 ec=57/38 lis/c=120/84 les/c/f=121/85/0 sis=122 pruub=15.927985191s) [0] r=-1 lpr=122 pi=[84,122)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY pruub 266.560791016s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 122 pg[9.1a( v 44'1041 (0'0,44'1041] local-lis/les=120/121 n=4 ec=57/38 lis/c=120/84 les/c/f=121/85/0 sis=122 pruub=15.927985191s) [0] r=-1 lpr=122 pi=[84,122)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY pruub 266.560791016s@ mbc={}] exit Start 0.000015 0 0.000000
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 122 pg[9.1a( v 44'1041 (0'0,44'1041] local-lis/les=120/121 n=4 ec=57/38 lis/c=120/84 les/c/f=121/85/0 sis=122 pruub=15.927985191s) [0] r=-1 lpr=122 pi=[84,122)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY pruub 266.560791016s@ mbc={}] enter Started/Stray
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 122 handle_osd_map epochs [122,122], i have 122, src has [1,122]
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fca40000/0x0/0x4ffc00000, data 0x13b422/0x1d9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 2039808 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 893582 data_alloc: 218103808 data_used: 184320
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 2039808 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 122 handle_osd_map epochs [123,123], i have 122, src has [1,123]
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 123 pg[9.1a( v 44'1041 (0'0,44'1041] local-lis/les=120/121 n=4 ec=57/38 lis/c=120/84 les/c/f=121/85/0 sis=122) [0] r=-1 lpr=122 pi=[84,122)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.769555 6 0.000268
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 123 pg[9.1a( v 44'1041 (0'0,44'1041] local-lis/les=120/121 n=4 ec=57/38 lis/c=120/84 les/c/f=121/85/0 sis=122) [0] r=-1 lpr=122 pi=[84,122)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 123 pg[9.1a( v 44'1041 (0'0,44'1041] local-lis/les=120/121 n=4 ec=57/38 lis/c=120/84 les/c/f=121/85/0 sis=122) [0] r=-1 lpr=122 pi=[84,122)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 123 pg[9.1a( v 44'1041 (0'0,44'1041] local-lis/les=120/121 n=4 ec=57/38 lis/c=120/84 les/c/f=121/85/0 sis=122) [0] r=-1 lpr=122 pi=[84,122)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001479 1 0.000129
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 123 pg[9.1a( v 44'1041 (0'0,44'1041] local-lis/les=120/121 n=4 ec=57/38 lis/c=120/84 les/c/f=121/85/0 sis=122) [0] r=-1 lpr=122 pi=[84,122)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 123 pg[9.1a( v 44'1041 (0'0,44'1041] lb MIN local-lis/les=120/121 n=4 ec=57/38 lis/c=120/84 les/c/f=121/85/0 sis=122) [0] r=-1 lpr=122 DELETING pi=[84,122)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.035249 3 0.000223
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 123 pg[9.1a( v 44'1041 (0'0,44'1041] lb MIN local-lis/les=120/121 n=4 ec=57/38 lis/c=120/84 les/c/f=121/85/0 sis=122) [0] r=-1 lpr=122 pi=[84,122)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.036815 0 0.000000
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 123 pg[9.1a( v 44'1041 (0'0,44'1041] lb MIN local-lis/les=120/121 n=4 ec=57/38 lis/c=120/84 les/c/f=121/85/0 sis=122) [0] r=-1 lpr=122 pi=[84,122)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.806476 0 0.000000
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 123 heartbeat osd_stat(store_statfs(0x4fca3b000/0x0/0x4ffc00000, data 0x13f39c/0x1df000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 2007040 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 1966080 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 123 handle_osd_map epochs [124,125], i have 123, src has [1,125]
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 1916928 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 125 handle_osd_map epochs [125,126], i have 125, src has [1,126]
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73547776 unmapped: 1908736 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 897209 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 126 heartbeat osd_stat(store_statfs(0x4fca30000/0x0/0x4ffc00000, data 0x14749d/0x1ea000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 126 handle_osd_map epochs [126,127], i have 126, src has [1,127]
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73555968 unmapped: 1900544 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 1892352 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 127 handle_osd_map epochs [128,128], i have 127, src has [1,128]
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 128 pg[9.1d( v 44'1041 (0'0,44'1041] local-lis/les=90/91 n=5 ec=57/38 lis/c=90/90 les/c/f=91/91/0 sis=90) [1] r=0 lpr=90 crt=44'1041 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 47.295979 112 0.000747
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 128 pg[9.1d( v 44'1041 (0'0,44'1041] local-lis/les=90/91 n=5 ec=57/38 lis/c=90/90 les/c/f=91/91/0 sis=90) [1] r=0 lpr=90 crt=44'1041 mlcod 0'0 active mbc={}] exit Started/Primary/Active 47.310654 0 0.000000
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 128 pg[9.1d( v 44'1041 (0'0,44'1041] local-lis/les=90/91 n=5 ec=57/38 lis/c=90/90 les/c/f=91/91/0 sis=90) [1] r=0 lpr=90 crt=44'1041 mlcod 0'0 active mbc={}] exit Started/Primary 48.325930 0 0.000000
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 128 pg[9.1d( v 44'1041 (0'0,44'1041] local-lis/les=90/91 n=5 ec=57/38 lis/c=90/90 les/c/f=91/91/0 sis=90) [1] r=0 lpr=90 crt=44'1041 mlcod 0'0 active mbc={}] exit Started 48.326036 0 0.000000
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 128 pg[9.1d( v 44'1041 (0'0,44'1041] local-lis/les=90/91 n=5 ec=57/38 lis/c=90/90 les/c/f=91/91/0 sis=90) [1] r=0 lpr=90 crt=44'1041 mlcod 0'0 active mbc={}] enter Reset
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 128 pg[9.1d( v 44'1041 (0'0,44'1041] local-lis/les=90/91 n=5 ec=57/38 lis/c=90/90 les/c/f=91/91/0 sis=128 pruub=8.703561783s) [2] r=-1 lpr=128 pi=[90,128)/1 crt=44'1041 mlcod 0'0 active pruub 266.976745605s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 128 pg[9.1d( v 44'1041 (0'0,44'1041] local-lis/les=90/91 n=5 ec=57/38 lis/c=90/90 les/c/f=91/91/0 sis=128 pruub=8.703369141s) [2] r=-1 lpr=128 pi=[90,128)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY pruub 266.976745605s@ mbc={}] exit Reset 0.000301 1 0.000544
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 128 pg[9.1d( v 44'1041 (0'0,44'1041] local-lis/les=90/91 n=5 ec=57/38 lis/c=90/90 les/c/f=91/91/0 sis=128 pruub=8.703369141s) [2] r=-1 lpr=128 pi=[90,128)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY pruub 266.976745605s@ mbc={}] enter Started
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 128 pg[9.1d( v 44'1041 (0'0,44'1041] local-lis/les=90/91 n=5 ec=57/38 lis/c=90/90 les/c/f=91/91/0 sis=128 pruub=8.703369141s) [2] r=-1 lpr=128 pi=[90,128)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY pruub 266.976745605s@ mbc={}] enter Start
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 128 pg[9.1d( v 44'1041 (0'0,44'1041] local-lis/les=90/91 n=5 ec=57/38 lis/c=90/90 les/c/f=91/91/0 sis=128 pruub=8.703369141s) [2] r=-1 lpr=128 pi=[90,128)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY pruub 266.976745605s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 128 pg[9.1d( v 44'1041 (0'0,44'1041] local-lis/les=90/91 n=5 ec=57/38 lis/c=90/90 les/c/f=91/91/0 sis=128 pruub=8.703369141s) [2] r=-1 lpr=128 pi=[90,128)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY pruub 266.976745605s@ mbc={}] exit Start 0.000066 0 0.000000
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 128 pg[9.1d( v 44'1041 (0'0,44'1041] local-lis/les=90/91 n=5 ec=57/38 lis/c=90/90 les/c/f=91/91/0 sis=128 pruub=8.703369141s) [2] r=-1 lpr=128 pi=[90,128)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY pruub 266.976745605s@ mbc={}] enter Started/Stray
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 128 handle_osd_map epochs [127,128], i have 128, src has [1,128]
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 128 heartbeat osd_stat(store_statfs(0x4fca2a000/0x0/0x4ffc00000, data 0x14b548/0x1f0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 128 handle_osd_map epochs [128,129], i have 128, src has [1,129]
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 129 pg[9.1d( v 44'1041 (0'0,44'1041] local-lis/les=90/91 n=5 ec=57/38 lis/c=90/90 les/c/f=91/91/0 sis=128) [2] r=-1 lpr=128 pi=[90,128)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.746706 3 0.000187
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 129 pg[9.1d( v 44'1041 (0'0,44'1041] local-lis/les=90/91 n=5 ec=57/38 lis/c=90/90 les/c/f=91/91/0 sis=128) [2] r=-1 lpr=128 pi=[90,128)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.746892 0 0.000000
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 129 pg[9.1d( v 44'1041 (0'0,44'1041] local-lis/les=90/91 n=5 ec=57/38 lis/c=90/90 les/c/f=91/91/0 sis=128) [2] r=-1 lpr=128 pi=[90,128)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 129 pg[9.1d( v 44'1041 (0'0,44'1041] local-lis/les=90/91 n=5 ec=57/38 lis/c=90/90 les/c/f=91/91/0 sis=129) [2]/[1] r=0 lpr=129 pi=[90,129)/1 crt=44'1041 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 129 pg[9.1d( v 44'1041 (0'0,44'1041] local-lis/les=90/91 n=5 ec=57/38 lis/c=90/90 les/c/f=91/91/0 sis=129) [2]/[1] r=0 lpr=129 pi=[90,129)/1 crt=44'1041 mlcod 0'0 remapped mbc={}] exit Reset 0.000217 1 0.000310
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 129 pg[9.1d( v 44'1041 (0'0,44'1041] local-lis/les=90/91 n=5 ec=57/38 lis/c=90/90 les/c/f=91/91/0 sis=129) [2]/[1] r=0 lpr=129 pi=[90,129)/1 crt=44'1041 mlcod 0'0 remapped mbc={}] enter Started
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 129 pg[9.1d( v 44'1041 (0'0,44'1041] local-lis/les=90/91 n=5 ec=57/38 lis/c=90/90 les/c/f=91/91/0 sis=129) [2]/[1] r=0 lpr=129 pi=[90,129)/1 crt=44'1041 mlcod 0'0 remapped mbc={}] enter Start
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 129 pg[9.1d( v 44'1041 (0'0,44'1041] local-lis/les=90/91 n=5 ec=57/38 lis/c=90/90 les/c/f=91/91/0 sis=129) [2]/[1] r=0 lpr=129 pi=[90,129)/1 crt=44'1041 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 129 pg[9.1d( v 44'1041 (0'0,44'1041] local-lis/les=90/91 n=5 ec=57/38 lis/c=90/90 les/c/f=91/91/0 sis=129) [2]/[1] r=0 lpr=129 pi=[90,129)/1 crt=44'1041 mlcod 0'0 remapped mbc={}] exit Start 0.000065 0 0.000000
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 129 pg[9.1d( v 44'1041 (0'0,44'1041] local-lis/les=90/91 n=5 ec=57/38 lis/c=90/90 les/c/f=91/91/0 sis=129) [2]/[1] r=0 lpr=129 pi=[90,129)/1 crt=44'1041 mlcod 0'0 remapped mbc={}] enter Started/Primary
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 129 pg[9.1d( v 44'1041 (0'0,44'1041] local-lis/les=90/91 n=5 ec=57/38 lis/c=90/90 les/c/f=91/91/0 sis=129) [2]/[1] r=0 lpr=129 pi=[90,129)/1 crt=44'1041 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 129 pg[9.1d( v 44'1041 (0'0,44'1041] local-lis/les=90/91 n=5 ec=57/38 lis/c=90/90 les/c/f=91/91/0 sis=129) [2]/[1] r=0 lpr=129 pi=[90,129)/1 crt=44'1041 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 129 pg[9.1d( v 44'1041 (0'0,44'1041] local-lis/les=90/91 n=5 ec=57/38 lis/c=90/90 les/c/f=91/91/0 sis=129) [2]/[1] r=0 lpr=129 pi=[90,129)/1 crt=44'1041 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.002103 2 0.000228
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 129 handle_osd_map epochs [129,129], i have 129, src has [1,129]
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 129 pg[9.1d( v 44'1041 (0'0,44'1041] local-lis/les=90/91 n=5 ec=57/38 lis/c=90/90 les/c/f=91/91/0 sis=129) [2]/[1] r=0 lpr=129 pi=[90,129)/1 crt=44'1041 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 129 pg[9.1d( v 44'1041 (0'0,44'1041] local-lis/les=90/91 n=5 ec=57/38 lis/c=90/90 les/c/f=91/91/0 sis=129) [2]/[1] async=[2] r=0 lpr=129 pi=[90,129)/1 crt=44'1041 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000061 0 0.000000
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 129 pg[9.1d( v 44'1041 (0'0,44'1041] local-lis/les=90/91 n=5 ec=57/38 lis/c=90/90 les/c/f=91/91/0 sis=129) [2]/[1] async=[2] r=0 lpr=129 pi=[90,129)/1 crt=44'1041 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 129 pg[9.1d( v 44'1041 (0'0,44'1041] local-lis/les=90/91 n=5 ec=57/38 lis/c=90/90 les/c/f=91/91/0 sis=129) [2]/[1] async=[2] r=0 lpr=129 pi=[90,129)/1 crt=44'1041 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000048 0 0.000000
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 129 pg[9.1d( v 44'1041 (0'0,44'1041] local-lis/les=90/91 n=5 ec=57/38 lis/c=90/90 les/c/f=91/91/0 sis=129) [2]/[1] async=[2] r=0 lpr=129 pi=[90,129)/1 crt=44'1041 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73588736 unmapped: 1867776 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 129 handle_osd_map epochs [129,130], i have 129, src has [1,130]
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 130 pg[9.1d( v 44'1041 (0'0,44'1041] local-lis/les=90/91 n=5 ec=57/38 lis/c=90/90 les/c/f=91/91/0 sis=129) [2]/[1] async=[2] r=0 lpr=129 pi=[90,129)/1 crt=44'1041 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.028473 3 0.000244
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 130 pg[9.1d( v 44'1041 (0'0,44'1041] local-lis/les=90/91 n=5 ec=57/38 lis/c=90/90 les/c/f=91/91/0 sis=129) [2]/[1] async=[2] r=0 lpr=129 pi=[90,129)/1 crt=44'1041 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.031048 0 0.000000
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 130 handle_osd_map epochs [130,130], i have 130, src has [1,130]
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 130 pg[9.1d( v 44'1041 (0'0,44'1041] local-lis/les=90/91 n=5 ec=57/38 lis/c=90/90 les/c/f=91/91/0 sis=129) [2]/[1] async=[2] r=0 lpr=129 pi=[90,129)/1 crt=44'1041 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 130 pg[9.1d( v 44'1041 (0'0,44'1041] local-lis/les=129/130 n=5 ec=57/38 lis/c=90/90 les/c/f=91/91/0 sis=129) [2]/[1] async=[2] r=0 lpr=129 pi=[90,129)/1 crt=44'1041 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 130 pg[9.1d( v 44'1041 (0'0,44'1041] local-lis/les=129/130 n=5 ec=57/38 lis/c=90/90 les/c/f=91/91/0 sis=129) [2]/[1] async=[2] r=0 lpr=129 pi=[90,129)/1 crt=44'1041 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 130 pg[9.1d( v 44'1041 (0'0,44'1041] local-lis/les=129/130 n=5 ec=57/38 lis/c=129/90 les/c/f=130/91/0 sis=129) [2]/[1] async=[2] r=0 lpr=129 pi=[90,129)/1 crt=44'1041 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.008047 5 0.001184
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 130 pg[9.1d( v 44'1041 (0'0,44'1041] local-lis/les=129/130 n=5 ec=57/38 lis/c=129/90 les/c/f=130/91/0 sis=129) [2]/[1] async=[2] r=0 lpr=129 pi=[90,129)/1 crt=44'1041 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 130 pg[9.1d( v 44'1041 (0'0,44'1041] local-lis/les=129/130 n=5 ec=57/38 lis/c=129/90 les/c/f=130/91/0 sis=129) [2]/[1] async=[2] r=0 lpr=129 pi=[90,129)/1 crt=44'1041 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000134 1 0.000110
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 130 pg[9.1d( v 44'1041 (0'0,44'1041] local-lis/les=129/130 n=5 ec=57/38 lis/c=129/90 les/c/f=130/91/0 sis=129) [2]/[1] async=[2] r=0 lpr=129 pi=[90,129)/1 crt=44'1041 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 130 pg[9.1d( v 44'1041 (0'0,44'1041] local-lis/les=129/130 n=5 ec=57/38 lis/c=129/90 les/c/f=130/91/0 sis=129) [2]/[1] async=[2] r=0 lpr=129 pi=[90,129)/1 crt=44'1041 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.001497 1 0.000096
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 130 pg[9.1d( v 44'1041 (0'0,44'1041] local-lis/les=129/130 n=5 ec=57/38 lis/c=129/90 les/c/f=130/91/0 sis=129) [2]/[1] async=[2] r=0 lpr=129 pi=[90,129)/1 crt=44'1041 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 130 pg[9.1d( v 44'1041 (0'0,44'1041] local-lis/les=129/130 n=5 ec=57/38 lis/c=129/90 les/c/f=130/91/0 sis=129) [2]/[1] async=[2] r=0 lpr=129 pi=[90,129)/1 crt=44'1041 mlcod 44'1041 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.047791 2 0.000123
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 130 pg[9.1d( v 44'1041 (0'0,44'1041] local-lis/les=129/130 n=5 ec=57/38 lis/c=129/90 les/c/f=130/91/0 sis=129) [2]/[1] async=[2] r=0 lpr=129 pi=[90,129)/1 crt=44'1041 mlcod 44'1041 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 1851392 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 130 handle_osd_map epochs [130,131], i have 130, src has [1,131]
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.843511581s of 10.085615158s, submitted: 70
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 130 handle_osd_map epochs [131,131], i have 131, src has [1,131]
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 131 pg[9.1d( v 44'1041 (0'0,44'1041] local-lis/les=129/130 n=5 ec=57/38 lis/c=129/90 les/c/f=130/91/0 sis=129) [2]/[1] async=[2] r=0 lpr=129 pi=[90,129)/1 crt=44'1041 mlcod 44'1041 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.529359 1 0.000180
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 131 pg[9.1d( v 44'1041 (0'0,44'1041] local-lis/les=129/130 n=5 ec=57/38 lis/c=129/90 les/c/f=130/91/0 sis=129) [2]/[1] async=[2] r=0 lpr=129 pi=[90,129)/1 crt=44'1041 mlcod 44'1041 active+remapped mbc={255={}}] exit Started/Primary/Active 0.587389 0 0.000000
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 131 pg[9.1d( v 44'1041 (0'0,44'1041] local-lis/les=129/130 n=5 ec=57/38 lis/c=129/90 les/c/f=130/91/0 sis=129) [2]/[1] async=[2] r=0 lpr=129 pi=[90,129)/1 crt=44'1041 mlcod 44'1041 active+remapped mbc={255={}}] exit Started/Primary 1.618829 0 0.000000
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 131 pg[9.1d( v 44'1041 (0'0,44'1041] local-lis/les=129/130 n=5 ec=57/38 lis/c=129/90 les/c/f=130/91/0 sis=129) [2]/[1] async=[2] r=0 lpr=129 pi=[90,129)/1 crt=44'1041 mlcod 44'1041 active+remapped mbc={255={}}] exit Started 1.619000 0 0.000000
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 131 pg[9.1d( v 44'1041 (0'0,44'1041] local-lis/les=129/130 n=5 ec=57/38 lis/c=129/90 les/c/f=130/91/0 sis=129) [2]/[1] async=[2] r=0 lpr=129 pi=[90,129)/1 crt=44'1041 mlcod 44'1041 active+remapped mbc={255={}}] enter Reset
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 131 pg[9.1d( v 44'1041 (0'0,44'1041] local-lis/les=129/130 n=5 ec=57/38 lis/c=129/90 les/c/f=130/91/0 sis=131 pruub=15.420552254s) [2] async=[2] r=-1 lpr=131 pi=[90,131)/1 crt=44'1041 mlcod 44'1041 active pruub 276.060241699s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 131 pg[9.1d( v 44'1041 (0'0,44'1041] local-lis/les=129/130 n=5 ec=57/38 lis/c=129/90 les/c/f=130/91/0 sis=131 pruub=15.420440674s) [2] r=-1 lpr=131 pi=[90,131)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY pruub 276.060241699s@ mbc={}] exit Reset 0.000195 1 0.000356
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 131 pg[9.1d( v 44'1041 (0'0,44'1041] local-lis/les=129/130 n=5 ec=57/38 lis/c=129/90 les/c/f=130/91/0 sis=131 pruub=15.420440674s) [2] r=-1 lpr=131 pi=[90,131)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY pruub 276.060241699s@ mbc={}] enter Started
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 131 pg[9.1d( v 44'1041 (0'0,44'1041] local-lis/les=129/130 n=5 ec=57/38 lis/c=129/90 les/c/f=130/91/0 sis=131 pruub=15.420440674s) [2] r=-1 lpr=131 pi=[90,131)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY pruub 276.060241699s@ mbc={}] enter Start
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 131 pg[9.1d( v 44'1041 (0'0,44'1041] local-lis/les=129/130 n=5 ec=57/38 lis/c=129/90 les/c/f=130/91/0 sis=131 pruub=15.420440674s) [2] r=-1 lpr=131 pi=[90,131)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY pruub 276.060241699s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 131 pg[9.1d( v 44'1041 (0'0,44'1041] local-lis/les=129/130 n=5 ec=57/38 lis/c=129/90 les/c/f=130/91/0 sis=131 pruub=15.420440674s) [2] r=-1 lpr=131 pi=[90,131)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY pruub 276.060241699s@ mbc={}] exit Start 0.000026 0 0.000000
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 131 pg[9.1d( v 44'1041 (0'0,44'1041] local-lis/les=129/130 n=5 ec=57/38 lis/c=129/90 les/c/f=130/91/0 sis=131 pruub=15.420440674s) [2] r=-1 lpr=131 pi=[90,131)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY pruub 276.060241699s@ mbc={}] enter Started/Stray
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 131 handle_osd_map epochs [131,131], i have 131, src has [1,131]
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 1835008 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911639 data_alloc: 218103808 data_used: 188416
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 131 handle_osd_map epochs [131,132], i have 131, src has [1,132]
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 132 pg[9.1d( v 44'1041 (0'0,44'1041] local-lis/les=129/130 n=5 ec=57/38 lis/c=129/90 les/c/f=130/91/0 sis=131) [2] r=-1 lpr=131 pi=[90,131)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.027098 7 0.000149
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 132 pg[9.1d( v 44'1041 (0'0,44'1041] local-lis/les=129/130 n=5 ec=57/38 lis/c=129/90 les/c/f=130/91/0 sis=131) [2] r=-1 lpr=131 pi=[90,131)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 132 pg[9.1d( v 44'1041 (0'0,44'1041] local-lis/les=129/130 n=5 ec=57/38 lis/c=129/90 les/c/f=130/91/0 sis=131) [2] r=-1 lpr=131 pi=[90,131)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 132 pg[9.1d( v 44'1041 (0'0,44'1041] local-lis/les=129/130 n=5 ec=57/38 lis/c=129/90 les/c/f=130/91/0 sis=131) [2] r=-1 lpr=131 pi=[90,131)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000166 1 0.000145
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 132 pg[9.1d( v 44'1041 (0'0,44'1041] local-lis/les=129/130 n=5 ec=57/38 lis/c=129/90 les/c/f=130/91/0 sis=131) [2] r=-1 lpr=131 pi=[90,131)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 132 pg[9.1d( v 44'1041 (0'0,44'1041] lb MIN local-lis/les=129/130 n=5 ec=57/38 lis/c=129/90 les/c/f=130/91/0 sis=131) [2] r=-1 lpr=131 DELETING pi=[90,131)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.041655 2 0.000572
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 132 pg[9.1d( v 44'1041 (0'0,44'1041] lb MIN local-lis/les=129/130 n=5 ec=57/38 lis/c=129/90 les/c/f=130/91/0 sis=131) [2] r=-1 lpr=131 pi=[90,131)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.041956 0 0.000000
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 132 pg[9.1d( v 44'1041 (0'0,44'1041] lb MIN local-lis/les=129/130 n=5 ec=57/38 lis/c=129/90 les/c/f=130/91/0 sis=131) [2] r=-1 lpr=131 pi=[90,131)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.069193 0 0.000000
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 1835008 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 1810432 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 132 handle_osd_map epochs [132,133], i have 132, src has [1,133]
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 133 pg[9.1e( v 44'1041 (0'0,44'1041] local-lis/les=74/75 n=5 ec=57/38 lis/c=74/74 les/c/f=75/75/0 sis=74) [1] r=0 lpr=74 crt=44'1041 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 79.641155 180 0.001125
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 133 pg[9.1e( v 44'1041 (0'0,44'1041] local-lis/les=74/75 n=5 ec=57/38 lis/c=74/74 les/c/f=75/75/0 sis=74) [1] r=0 lpr=74 crt=44'1041 mlcod 0'0 active mbc={}] exit Started/Primary/Active 79.659418 0 0.000000
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 133 pg[9.1e( v 44'1041 (0'0,44'1041] local-lis/les=74/75 n=5 ec=57/38 lis/c=74/74 les/c/f=75/75/0 sis=74) [1] r=0 lpr=74 crt=44'1041 mlcod 0'0 active mbc={}] exit Started/Primary 80.671755 0 0.000000
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 133 pg[9.1e( v 44'1041 (0'0,44'1041] local-lis/les=74/75 n=5 ec=57/38 lis/c=74/74 les/c/f=75/75/0 sis=74) [1] r=0 lpr=74 crt=44'1041 mlcod 0'0 active mbc={}] exit Started 80.671830 0 0.000000
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 133 pg[9.1e( v 44'1041 (0'0,44'1041] local-lis/les=74/75 n=5 ec=57/38 lis/c=74/74 les/c/f=75/75/0 sis=74) [1] r=0 lpr=74 crt=44'1041 mlcod 0'0 active mbc={}] enter Reset
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 133 pg[9.1e( v 44'1041 (0'0,44'1041] local-lis/les=74/75 n=5 ec=57/38 lis/c=74/74 les/c/f=75/75/0 sis=133 pruub=8.360290527s) [0] r=-1 lpr=133 pi=[74,133)/1 crt=44'1041 mlcod 0'0 active pruub 271.645385742s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 133 pg[9.1e( v 44'1041 (0'0,44'1041] local-lis/les=74/75 n=5 ec=57/38 lis/c=74/74 les/c/f=75/75/0 sis=133 pruub=8.360084534s) [0] r=-1 lpr=133 pi=[74,133)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY pruub 271.645385742s@ mbc={}] exit Reset 0.000280 1 0.000427
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 133 pg[9.1e( v 44'1041 (0'0,44'1041] local-lis/les=74/75 n=5 ec=57/38 lis/c=74/74 les/c/f=75/75/0 sis=133 pruub=8.360084534s) [0] r=-1 lpr=133 pi=[74,133)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY pruub 271.645385742s@ mbc={}] enter Started
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 133 pg[9.1e( v 44'1041 (0'0,44'1041] local-lis/les=74/75 n=5 ec=57/38 lis/c=74/74 les/c/f=75/75/0 sis=133 pruub=8.360084534s) [0] r=-1 lpr=133 pi=[74,133)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY pruub 271.645385742s@ mbc={}] enter Start
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 133 pg[9.1e( v 44'1041 (0'0,44'1041] local-lis/les=74/75 n=5 ec=57/38 lis/c=74/74 les/c/f=75/75/0 sis=133 pruub=8.360084534s) [0] r=-1 lpr=133 pi=[74,133)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY pruub 271.645385742s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 133 pg[9.1e( v 44'1041 (0'0,44'1041] local-lis/les=74/75 n=5 ec=57/38 lis/c=74/74 les/c/f=75/75/0 sis=133 pruub=8.360084534s) [0] r=-1 lpr=133 pi=[74,133)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY pruub 271.645385742s@ mbc={}] exit Start 0.000109 0 0.000000
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 133 pg[9.1e( v 44'1041 (0'0,44'1041] local-lis/les=74/75 n=5 ec=57/38 lis/c=74/74 les/c/f=75/75/0 sis=133 pruub=8.360084534s) [0] r=-1 lpr=133 pi=[74,133)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY pruub 271.645385742s@ mbc={}] enter Started/Stray
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 133 handle_osd_map epochs [133,133], i have 133, src has [1,133]
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73654272 unmapped: 1802240 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 133 handle_osd_map epochs [133,134], i have 133, src has [1,134]
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 134 pg[9.1e( v 44'1041 (0'0,44'1041] local-lis/les=74/75 n=5 ec=57/38 lis/c=74/74 les/c/f=75/75/0 sis=133) [0] r=-1 lpr=133 pi=[74,133)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.025118 3 0.000239
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 134 pg[9.1e( v 44'1041 (0'0,44'1041] local-lis/les=74/75 n=5 ec=57/38 lis/c=74/74 les/c/f=75/75/0 sis=133) [0] r=-1 lpr=133 pi=[74,133)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.025326 0 0.000000
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 134 pg[9.1e( v 44'1041 (0'0,44'1041] local-lis/les=74/75 n=5 ec=57/38 lis/c=74/74 les/c/f=75/75/0 sis=133) [0] r=-1 lpr=133 pi=[74,133)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 134 pg[9.1e( v 44'1041 (0'0,44'1041] local-lis/les=74/75 n=5 ec=57/38 lis/c=74/74 les/c/f=75/75/0 sis=134) [0]/[1] r=0 lpr=134 pi=[74,134)/1 crt=44'1041 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 134 pg[9.1e( v 44'1041 (0'0,44'1041] local-lis/les=74/75 n=5 ec=57/38 lis/c=74/74 les/c/f=75/75/0 sis=134) [0]/[1] r=0 lpr=134 pi=[74,134)/1 crt=44'1041 mlcod 0'0 remapped mbc={}] exit Reset 0.000496 1 0.000562
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 134 pg[9.1e( v 44'1041 (0'0,44'1041] local-lis/les=74/75 n=5 ec=57/38 lis/c=74/74 les/c/f=75/75/0 sis=134) [0]/[1] r=0 lpr=134 pi=[74,134)/1 crt=44'1041 mlcod 0'0 remapped mbc={}] enter Started
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 134 pg[9.1e( v 44'1041 (0'0,44'1041] local-lis/les=74/75 n=5 ec=57/38 lis/c=74/74 les/c/f=75/75/0 sis=134) [0]/[1] r=0 lpr=134 pi=[74,134)/1 crt=44'1041 mlcod 0'0 remapped mbc={}] enter Start
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 134 pg[9.1e( v 44'1041 (0'0,44'1041] local-lis/les=74/75 n=5 ec=57/38 lis/c=74/74 les/c/f=75/75/0 sis=134) [0]/[1] r=0 lpr=134 pi=[74,134)/1 crt=44'1041 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 134 pg[9.1e( v 44'1041 (0'0,44'1041] local-lis/les=74/75 n=5 ec=57/38 lis/c=74/74 les/c/f=75/75/0 sis=134) [0]/[1] r=0 lpr=134 pi=[74,134)/1 crt=44'1041 mlcod 0'0 remapped mbc={}] exit Start 0.000072 0 0.000000
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 134 pg[9.1e( v 44'1041 (0'0,44'1041] local-lis/les=74/75 n=5 ec=57/38 lis/c=74/74 les/c/f=75/75/0 sis=134) [0]/[1] r=0 lpr=134 pi=[74,134)/1 crt=44'1041 mlcod 0'0 remapped mbc={}] enter Started/Primary
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 134 pg[9.1e( v 44'1041 (0'0,44'1041] local-lis/les=74/75 n=5 ec=57/38 lis/c=74/74 les/c/f=75/75/0 sis=134) [0]/[1] r=0 lpr=134 pi=[74,134)/1 crt=44'1041 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 134 pg[9.1e( v 44'1041 (0'0,44'1041] local-lis/les=74/75 n=5 ec=57/38 lis/c=74/74 les/c/f=75/75/0 sis=134) [0]/[1] r=0 lpr=134 pi=[74,134)/1 crt=44'1041 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 134 pg[9.1e( v 44'1041 (0'0,44'1041] local-lis/les=74/75 n=5 ec=57/38 lis/c=74/74 les/c/f=75/75/0 sis=134) [0]/[1] r=0 lpr=134 pi=[74,134)/1 crt=44'1041 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.003841 2 0.000158
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 134 pg[9.1e( v 44'1041 (0'0,44'1041] local-lis/les=74/75 n=5 ec=57/38 lis/c=74/74 les/c/f=75/75/0 sis=134) [0]/[1] r=0 lpr=134 pi=[74,134)/1 crt=44'1041 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 134 handle_osd_map epochs [134,134], i have 134, src has [1,134]
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 134 pg[9.1e( v 44'1041 (0'0,44'1041] local-lis/les=74/75 n=5 ec=57/38 lis/c=74/74 les/c/f=75/75/0 sis=134) [0]/[1] async=[0] r=0 lpr=134 pi=[74,134)/1 crt=44'1041 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000064 0 0.000000
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 134 pg[9.1e( v 44'1041 (0'0,44'1041] local-lis/les=74/75 n=5 ec=57/38 lis/c=74/74 les/c/f=75/75/0 sis=134) [0]/[1] async=[0] r=0 lpr=134 pi=[74,134)/1 crt=44'1041 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 134 pg[9.1e( v 44'1041 (0'0,44'1041] local-lis/les=74/75 n=5 ec=57/38 lis/c=74/74 les/c/f=75/75/0 sis=134) [0]/[1] async=[0] r=0 lpr=134 pi=[74,134)/1 crt=44'1041 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000012 0 0.000000
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 134 pg[9.1e( v 44'1041 (0'0,44'1041] local-lis/les=74/75 n=5 ec=57/38 lis/c=74/74 les/c/f=75/75/0 sis=134) [0]/[1] async=[0] r=0 lpr=134 pi=[74,134)/1 crt=44'1041 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fca1e000/0x0/0x4ffc00000, data 0x155440/0x1fd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 1785856 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 134 handle_osd_map epochs [134,135], i have 134, src has [1,135]
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fca1a000/0x0/0x4ffc00000, data 0x157414/0x200000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 134 handle_osd_map epochs [134,135], i have 135, src has [1,135]
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 135 pg[9.1f( v 44'1041 (0'0,44'1041] local-lis/les=96/97 n=5 ec=57/38 lis/c=96/96 les/c/f=97/97/0 sis=96) [1] r=0 lpr=96 crt=44'1041 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 44.701691 115 0.000726
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 135 pg[9.1f( v 44'1041 (0'0,44'1041] local-lis/les=96/97 n=5 ec=57/38 lis/c=96/96 les/c/f=97/97/0 sis=96) [1] r=0 lpr=96 crt=44'1041 mlcod 0'0 active mbc={}] exit Started/Primary/Active 44.707199 0 0.000000
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 135 pg[9.1f( v 44'1041 (0'0,44'1041] local-lis/les=96/97 n=5 ec=57/38 lis/c=96/96 les/c/f=97/97/0 sis=96) [1] r=0 lpr=96 crt=44'1041 mlcod 0'0 active mbc={}] exit Started/Primary 45.102017 0 0.000000
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 135 pg[9.1f( v 44'1041 (0'0,44'1041] local-lis/les=96/97 n=5 ec=57/38 lis/c=96/96 les/c/f=97/97/0 sis=96) [1] r=0 lpr=96 crt=44'1041 mlcod 0'0 active mbc={}] exit Started 45.102111 0 0.000000
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 135 pg[9.1f( v 44'1041 (0'0,44'1041] local-lis/les=96/97 n=5 ec=57/38 lis/c=96/96 les/c/f=97/97/0 sis=96) [1] r=0 lpr=96 crt=44'1041 mlcod 0'0 active mbc={}] enter Reset
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 135 pg[9.1f( v 44'1041 (0'0,44'1041] local-lis/les=96/97 n=5 ec=57/38 lis/c=96/96 les/c/f=97/97/0 sis=135 pruub=11.299188614s) [0] r=-1 lpr=135 pi=[96,135)/1 crt=44'1041 mlcod 0'0 active pruub 276.622467041s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 135 pg[9.1e( v 44'1041 (0'0,44'1041] local-lis/les=74/75 n=5 ec=57/38 lis/c=74/74 les/c/f=75/75/0 sis=134) [0]/[1] async=[0] r=0 lpr=134 pi=[74,134)/1 crt=44'1041 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.007810 3 0.000176
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 135 pg[9.1e( v 44'1041 (0'0,44'1041] local-lis/les=74/75 n=5 ec=57/38 lis/c=74/74 les/c/f=75/75/0 sis=134) [0]/[1] async=[0] r=0 lpr=134 pi=[74,134)/1 crt=44'1041 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.012053 0 0.000000
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 135 pg[9.1e( v 44'1041 (0'0,44'1041] local-lis/les=74/75 n=5 ec=57/38 lis/c=74/74 les/c/f=75/75/0 sis=134) [0]/[1] async=[0] r=0 lpr=134 pi=[74,134)/1 crt=44'1041 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 135 pg[9.1f( v 44'1041 (0'0,44'1041] local-lis/les=96/97 n=5 ec=57/38 lis/c=96/96 les/c/f=97/97/0 sis=135 pruub=11.299092293s) [0] r=-1 lpr=135 pi=[96,135)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY pruub 276.622467041s@ mbc={}] exit Reset 0.000229 1 0.000387
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 135 pg[9.1f( v 44'1041 (0'0,44'1041] local-lis/les=96/97 n=5 ec=57/38 lis/c=96/96 les/c/f=97/97/0 sis=135 pruub=11.299092293s) [0] r=-1 lpr=135 pi=[96,135)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY pruub 276.622467041s@ mbc={}] enter Started
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 135 pg[9.1f( v 44'1041 (0'0,44'1041] local-lis/les=96/97 n=5 ec=57/38 lis/c=96/96 les/c/f=97/97/0 sis=135 pruub=11.299092293s) [0] r=-1 lpr=135 pi=[96,135)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY pruub 276.622467041s@ mbc={}] enter Start
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 135 pg[9.1f( v 44'1041 (0'0,44'1041] local-lis/les=96/97 n=5 ec=57/38 lis/c=96/96 les/c/f=97/97/0 sis=135 pruub=11.299092293s) [0] r=-1 lpr=135 pi=[96,135)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY pruub 276.622467041s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 135 pg[9.1f( v 44'1041 (0'0,44'1041] local-lis/les=96/97 n=5 ec=57/38 lis/c=96/96 les/c/f=97/97/0 sis=135 pruub=11.299092293s) [0] r=-1 lpr=135 pi=[96,135)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY pruub 276.622467041s@ mbc={}] exit Start 0.000022 0 0.000000
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 135 pg[9.1f( v 44'1041 (0'0,44'1041] local-lis/les=96/97 n=5 ec=57/38 lis/c=96/96 les/c/f=97/97/0 sis=135 pruub=11.299092293s) [0] r=-1 lpr=135 pi=[96,135)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY pruub 276.622467041s@ mbc={}] enter Started/Stray
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 135 pg[9.1e( v 44'1041 (0'0,44'1041] local-lis/les=134/135 n=5 ec=57/38 lis/c=74/74 les/c/f=75/75/0 sis=134) [0]/[1] async=[0] r=0 lpr=134 pi=[74,134)/1 crt=44'1041 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 135 handle_osd_map epochs [135,135], i have 135, src has [1,135]
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 135 pg[9.1e( v 44'1041 (0'0,44'1041] local-lis/les=134/135 n=5 ec=57/38 lis/c=74/74 les/c/f=75/75/0 sis=134) [0]/[1] async=[0] r=0 lpr=134 pi=[74,134)/1 crt=44'1041 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 135 pg[9.1e( v 44'1041 (0'0,44'1041] local-lis/les=134/135 n=5 ec=57/38 lis/c=134/74 les/c/f=135/75/0 sis=134) [0]/[1] async=[0] r=0 lpr=134 pi=[74,134)/1 crt=44'1041 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.024616 5 0.001090
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 135 pg[9.1e( v 44'1041 (0'0,44'1041] local-lis/les=134/135 n=5 ec=57/38 lis/c=134/74 les/c/f=135/75/0 sis=134) [0]/[1] async=[0] r=0 lpr=134 pi=[74,134)/1 crt=44'1041 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 135 pg[9.1e( v 44'1041 (0'0,44'1041] local-lis/les=134/135 n=5 ec=57/38 lis/c=134/74 les/c/f=135/75/0 sis=134) [0]/[1] async=[0] r=0 lpr=134 pi=[74,134)/1 crt=44'1041 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000217 1 0.000172
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 135 pg[9.1e( v 44'1041 (0'0,44'1041] local-lis/les=134/135 n=5 ec=57/38 lis/c=134/74 les/c/f=135/75/0 sis=134) [0]/[1] async=[0] r=0 lpr=134 pi=[74,134)/1 crt=44'1041 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 135 pg[9.1e( v 44'1041 (0'0,44'1041] local-lis/les=134/135 n=5 ec=57/38 lis/c=134/74 les/c/f=135/75/0 sis=134) [0]/[1] async=[0] r=0 lpr=134 pi=[74,134)/1 crt=44'1041 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000717 1 0.000107
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 135 pg[9.1e( v 44'1041 (0'0,44'1041] local-lis/les=134/135 n=5 ec=57/38 lis/c=134/74 les/c/f=135/75/0 sis=134) [0]/[1] async=[0] r=0 lpr=134 pi=[74,134)/1 crt=44'1041 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 135 pg[9.1e( v 44'1041 (0'0,44'1041] local-lis/les=134/135 n=5 ec=57/38 lis/c=134/74 les/c/f=135/75/0 sis=134) [0]/[1] async=[0] r=0 lpr=134 pi=[74,134)/1 crt=44'1041 mlcod 44'1041 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.047166 2 0.000062
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 135 pg[9.1e( v 44'1041 (0'0,44'1041] local-lis/les=134/135 n=5 ec=57/38 lis/c=134/74 les/c/f=135/75/0 sis=134) [0]/[1] async=[0] r=0 lpr=134 pi=[74,134)/1 crt=44'1041 mlcod 44'1041 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 135 handle_osd_map epochs [135,136], i have 135, src has [1,136]
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 135 handle_osd_map epochs [136,136], i have 136, src has [1,136]
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 136 pg[9.1f( v 44'1041 (0'0,44'1041] local-lis/les=96/97 n=5 ec=57/38 lis/c=96/96 les/c/f=97/97/0 sis=135) [0] r=-1 lpr=135 pi=[96,135)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.352983 3 0.000156
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 136 pg[9.1f( v 44'1041 (0'0,44'1041] local-lis/les=96/97 n=5 ec=57/38 lis/c=96/96 les/c/f=97/97/0 sis=135) [0] r=-1 lpr=135 pi=[96,135)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.353104 0 0.000000
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 136 pg[9.1f( v 44'1041 (0'0,44'1041] local-lis/les=96/97 n=5 ec=57/38 lis/c=96/96 les/c/f=97/97/0 sis=135) [0] r=-1 lpr=135 pi=[96,135)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 136 pg[9.1e( v 44'1041 (0'0,44'1041] local-lis/les=134/135 n=5 ec=57/38 lis/c=134/74 les/c/f=135/75/0 sis=134) [0]/[1] async=[0] r=0 lpr=134 pi=[74,134)/1 crt=44'1041 mlcod 44'1041 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.279881 1 0.000145
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 136 pg[9.1f( v 44'1041 (0'0,44'1041] local-lis/les=96/97 n=5 ec=57/38 lis/c=96/96 les/c/f=97/97/0 sis=136) [0]/[1] r=0 lpr=136 pi=[96,136)/1 crt=44'1041 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 136 pg[9.1f( v 44'1041 (0'0,44'1041] local-lis/les=96/97 n=5 ec=57/38 lis/c=96/96 les/c/f=97/97/0 sis=136) [0]/[1] r=0 lpr=136 pi=[96,136)/1 crt=44'1041 mlcod 0'0 remapped mbc={}] exit Reset 0.000128 1 0.000171
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 136 pg[9.1f( v 44'1041 (0'0,44'1041] local-lis/les=96/97 n=5 ec=57/38 lis/c=96/96 les/c/f=97/97/0 sis=136) [0]/[1] r=0 lpr=136 pi=[96,136)/1 crt=44'1041 mlcod 0'0 remapped mbc={}] enter Started
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 136 pg[9.1f( v 44'1041 (0'0,44'1041] local-lis/les=96/97 n=5 ec=57/38 lis/c=96/96 les/c/f=97/97/0 sis=136) [0]/[1] r=0 lpr=136 pi=[96,136)/1 crt=44'1041 mlcod 0'0 remapped mbc={}] enter Start
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 136 pg[9.1f( v 44'1041 (0'0,44'1041] local-lis/les=96/97 n=5 ec=57/38 lis/c=96/96 les/c/f=97/97/0 sis=136) [0]/[1] r=0 lpr=136 pi=[96,136)/1 crt=44'1041 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 136 pg[9.1f( v 44'1041 (0'0,44'1041] local-lis/les=96/97 n=5 ec=57/38 lis/c=96/96 les/c/f=97/97/0 sis=136) [0]/[1] r=0 lpr=136 pi=[96,136)/1 crt=44'1041 mlcod 0'0 remapped mbc={}] exit Start 0.000007 0 0.000000
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 136 pg[9.1f( v 44'1041 (0'0,44'1041] local-lis/les=96/97 n=5 ec=57/38 lis/c=96/96 les/c/f=97/97/0 sis=136) [0]/[1] r=0 lpr=136 pi=[96,136)/1 crt=44'1041 mlcod 0'0 remapped mbc={}] enter Started/Primary
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 136 pg[9.1f( v 44'1041 (0'0,44'1041] local-lis/les=96/97 n=5 ec=57/38 lis/c=96/96 les/c/f=97/97/0 sis=136) [0]/[1] r=0 lpr=136 pi=[96,136)/1 crt=44'1041 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 136 pg[9.1f( v 44'1041 (0'0,44'1041] local-lis/les=96/97 n=5 ec=57/38 lis/c=96/96 les/c/f=97/97/0 sis=136) [0]/[1] r=0 lpr=136 pi=[96,136)/1 crt=44'1041 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 136 pg[9.1e( v 44'1041 (0'0,44'1041] local-lis/les=134/135 n=5 ec=57/38 lis/c=134/74 les/c/f=135/75/0 sis=134) [0]/[1] async=[0] r=0 lpr=134 pi=[74,134)/1 crt=44'1041 mlcod 44'1041 active+remapped mbc={255={}}] exit Started/Primary/Active 0.353255 0 0.000000
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 136 pg[9.1e( v 44'1041 (0'0,44'1041] local-lis/les=134/135 n=5 ec=57/38 lis/c=134/74 les/c/f=135/75/0 sis=134) [0]/[1] async=[0] r=0 lpr=134 pi=[74,134)/1 crt=44'1041 mlcod 44'1041 active+remapped mbc={255={}}] exit Started/Primary 1.365478 0 0.000000
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 136 pg[9.1e( v 44'1041 (0'0,44'1041] local-lis/les=134/135 n=5 ec=57/38 lis/c=134/74 les/c/f=135/75/0 sis=134) [0]/[1] async=[0] r=0 lpr=134 pi=[74,134)/1 crt=44'1041 mlcod 44'1041 active+remapped mbc={255={}}] exit Started 1.365824 0 0.000000
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 136 pg[9.1e( v 44'1041 (0'0,44'1041] local-lis/les=134/135 n=5 ec=57/38 lis/c=134/74 les/c/f=135/75/0 sis=134) [0]/[1] async=[0] r=0 lpr=134 pi=[74,134)/1 crt=44'1041 mlcod 44'1041 active+remapped mbc={255={}}] enter Reset
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 136 pg[9.1e( v 44'1041 (0'0,44'1041] local-lis/les=134/135 n=5 ec=57/38 lis/c=134/74 les/c/f=135/75/0 sis=136 pruub=15.670845032s) [0] async=[0] r=-1 lpr=136 pi=[74,136)/1 crt=44'1041 mlcod 44'1041 active pruub 281.347961426s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 136 pg[9.1e( v 44'1041 (0'0,44'1041] local-lis/les=134/135 n=5 ec=57/38 lis/c=134/74 les/c/f=135/75/0 sis=136 pruub=15.670753479s) [0] r=-1 lpr=136 pi=[74,136)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY pruub 281.347961426s@ mbc={}] exit Reset 0.000159 1 0.000663
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 136 pg[9.1e( v 44'1041 (0'0,44'1041] local-lis/les=134/135 n=5 ec=57/38 lis/c=134/74 les/c/f=135/75/0 sis=136 pruub=15.670753479s) [0] r=-1 lpr=136 pi=[74,136)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY pruub 281.347961426s@ mbc={}] enter Started
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 136 pg[9.1e( v 44'1041 (0'0,44'1041] local-lis/les=134/135 n=5 ec=57/38 lis/c=134/74 les/c/f=135/75/0 sis=136 pruub=15.670753479s) [0] r=-1 lpr=136 pi=[74,136)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY pruub 281.347961426s@ mbc={}] enter Start
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 136 pg[9.1e( v 44'1041 (0'0,44'1041] local-lis/les=134/135 n=5 ec=57/38 lis/c=134/74 les/c/f=135/75/0 sis=136 pruub=15.670753479s) [0] r=-1 lpr=136 pi=[74,136)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY pruub 281.347961426s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 136 pg[9.1e( v 44'1041 (0'0,44'1041] local-lis/les=134/135 n=5 ec=57/38 lis/c=134/74 les/c/f=135/75/0 sis=136 pruub=15.670753479s) [0] r=-1 lpr=136 pi=[74,136)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY pruub 281.347961426s@ mbc={}] exit Start 0.000026 0 0.000000
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 136 pg[9.1e( v 44'1041 (0'0,44'1041] local-lis/les=134/135 n=5 ec=57/38 lis/c=134/74 les/c/f=135/75/0 sis=136 pruub=15.670753479s) [0] r=-1 lpr=136 pi=[74,136)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY pruub 281.347961426s@ mbc={}] enter Started/Stray
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 136 pg[9.1f( v 44'1041 (0'0,44'1041] local-lis/les=96/97 n=5 ec=57/38 lis/c=96/96 les/c/f=97/97/0 sis=136) [0]/[1] r=0 lpr=136 pi=[96,136)/1 crt=44'1041 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.016350 2 0.000048
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 136 pg[9.1f( v 44'1041 (0'0,44'1041] local-lis/les=96/97 n=5 ec=57/38 lis/c=96/96 les/c/f=97/97/0 sis=136) [0]/[1] r=0 lpr=136 pi=[96,136)/1 crt=44'1041 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 136 handle_osd_map epochs [136,136], i have 136, src has [1,136]
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 136 pg[9.1f( v 44'1041 (0'0,44'1041] local-lis/les=96/97 n=5 ec=57/38 lis/c=96/96 les/c/f=97/97/0 sis=136) [0]/[1] async=[0] r=0 lpr=136 pi=[96,136)/1 crt=44'1041 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000084 0 0.000000
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 136 pg[9.1f( v 44'1041 (0'0,44'1041] local-lis/les=96/97 n=5 ec=57/38 lis/c=96/96 les/c/f=97/97/0 sis=136) [0]/[1] async=[0] r=0 lpr=136 pi=[96,136)/1 crt=44'1041 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 136 pg[9.1f( v 44'1041 (0'0,44'1041] local-lis/les=96/97 n=5 ec=57/38 lis/c=96/96 les/c/f=97/97/0 sis=136) [0]/[1] async=[0] r=0 lpr=136 pi=[96,136)/1 crt=44'1041 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000025 0 0.000000
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 136 pg[9.1f( v 44'1041 (0'0,44'1041] local-lis/les=96/97 n=5 ec=57/38 lis/c=96/96 les/c/f=97/97/0 sis=136) [0]/[1] async=[0] r=0 lpr=136 pi=[96,136)/1 crt=44'1041 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73785344 unmapped: 1671168 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 916297 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 136 handle_osd_map epochs [136,137], i have 136, src has [1,137]
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 137 handle_osd_map epochs [137,137], i have 137, src has [1,137]
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 137 pg[9.1f( v 44'1041 (0'0,44'1041] local-lis/les=96/97 n=5 ec=57/38 lis/c=96/96 les/c/f=97/97/0 sis=136) [0]/[1] async=[0] r=0 lpr=136 pi=[96,136)/1 crt=44'1041 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.968467 3 0.000255
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 137 pg[9.1f( v 44'1041 (0'0,44'1041] local-lis/les=96/97 n=5 ec=57/38 lis/c=96/96 les/c/f=97/97/0 sis=136) [0]/[1] async=[0] r=0 lpr=136 pi=[96,136)/1 crt=44'1041 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.985077 0 0.000000
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 137 pg[9.1f( v 44'1041 (0'0,44'1041] local-lis/les=96/97 n=5 ec=57/38 lis/c=96/96 les/c/f=97/97/0 sis=136) [0]/[1] async=[0] r=0 lpr=136 pi=[96,136)/1 crt=44'1041 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 137 pg[9.1f( v 44'1041 (0'0,44'1041] local-lis/les=136/137 n=5 ec=57/38 lis/c=96/96 les/c/f=97/97/0 sis=136) [0]/[1] async=[0] r=0 lpr=136 pi=[96,136)/1 crt=44'1041 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 137 handle_osd_map epochs [137,137], i have 137, src has [1,137]
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 137 pg[9.1f( v 44'1041 (0'0,44'1041] local-lis/les=136/137 n=5 ec=57/38 lis/c=96/96 les/c/f=97/97/0 sis=136) [0]/[1] async=[0] r=0 lpr=136 pi=[96,136)/1 crt=44'1041 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 137 pg[9.1f( v 44'1041 (0'0,44'1041] local-lis/les=136/137 n=5 ec=57/38 lis/c=136/96 les/c/f=137/97/0 sis=136) [0]/[1] async=[0] r=0 lpr=136 pi=[96,136)/1 crt=44'1041 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.007615 5 0.000946
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 137 pg[9.1f( v 44'1041 (0'0,44'1041] local-lis/les=136/137 n=5 ec=57/38 lis/c=136/96 les/c/f=137/97/0 sis=136) [0]/[1] async=[0] r=0 lpr=136 pi=[96,136)/1 crt=44'1041 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 137 pg[9.1f( v 44'1041 (0'0,44'1041] local-lis/les=136/137 n=5 ec=57/38 lis/c=136/96 les/c/f=137/97/0 sis=136) [0]/[1] async=[0] r=0 lpr=136 pi=[96,136)/1 crt=44'1041 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000152 1 0.000087
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 137 pg[9.1f( v 44'1041 (0'0,44'1041] local-lis/les=136/137 n=5 ec=57/38 lis/c=136/96 les/c/f=137/97/0 sis=136) [0]/[1] async=[0] r=0 lpr=136 pi=[96,136)/1 crt=44'1041 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 137 pg[9.1f( v 44'1041 (0'0,44'1041] local-lis/les=136/137 n=5 ec=57/38 lis/c=136/96 les/c/f=137/97/0 sis=136) [0]/[1] async=[0] r=0 lpr=136 pi=[96,136)/1 crt=44'1041 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000603 1 0.000038
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 137 pg[9.1f( v 44'1041 (0'0,44'1041] local-lis/les=136/137 n=5 ec=57/38 lis/c=136/96 les/c/f=137/97/0 sis=136) [0]/[1] async=[0] r=0 lpr=136 pi=[96,136)/1 crt=44'1041 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 137 pg[9.1e( v 44'1041 (0'0,44'1041] local-lis/les=134/135 n=5 ec=57/38 lis/c=134/74 les/c/f=135/75/0 sis=136) [0] r=-1 lpr=136 pi=[74,136)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.995970 7 0.000193
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 137 pg[9.1e( v 44'1041 (0'0,44'1041] local-lis/les=134/135 n=5 ec=57/38 lis/c=134/74 les/c/f=135/75/0 sis=136) [0] r=-1 lpr=136 pi=[74,136)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 137 pg[9.1e( v 44'1041 (0'0,44'1041] local-lis/les=134/135 n=5 ec=57/38 lis/c=134/74 les/c/f=135/75/0 sis=136) [0] r=-1 lpr=136 pi=[74,136)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 137 pg[9.1f( v 44'1041 (0'0,44'1041] local-lis/les=136/137 n=5 ec=57/38 lis/c=136/96 les/c/f=137/97/0 sis=136) [0]/[1] async=[0] r=0 lpr=136 pi=[96,136)/1 crt=44'1041 mlcod 44'1041 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.042509 2 0.000053
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 137 pg[9.1f( v 44'1041 (0'0,44'1041] local-lis/les=136/137 n=5 ec=57/38 lis/c=136/96 les/c/f=137/97/0 sis=136) [0]/[1] async=[0] r=0 lpr=136 pi=[96,136)/1 crt=44'1041 mlcod 44'1041 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 137 pg[9.1e( v 44'1041 (0'0,44'1041] local-lis/les=134/135 n=5 ec=57/38 lis/c=134/74 les/c/f=135/75/0 sis=136) [0] r=-1 lpr=136 pi=[74,136)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.040400 1 0.000039
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 137 pg[9.1e( v 44'1041 (0'0,44'1041] local-lis/les=134/135 n=5 ec=57/38 lis/c=134/74 les/c/f=135/75/0 sis=136) [0] r=-1 lpr=136 pi=[74,136)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 137 pg[9.1e( v 44'1041 (0'0,44'1041] lb MIN local-lis/les=134/135 n=5 ec=57/38 lis/c=134/74 les/c/f=135/75/0 sis=136) [0] r=-1 lpr=136 DELETING pi=[74,136)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.044663 2 0.000225
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 137 pg[9.1e( v 44'1041 (0'0,44'1041] lb MIN local-lis/les=134/135 n=5 ec=57/38 lis/c=134/74 les/c/f=135/75/0 sis=136) [0] r=-1 lpr=136 pi=[74,136)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.085108 0 0.000000
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 137 pg[9.1e( v 44'1041 (0'0,44'1041] lb MIN local-lis/les=134/135 n=5 ec=57/38 lis/c=134/74 les/c/f=135/75/0 sis=136) [0] r=-1 lpr=136 pi=[74,136)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.081147 0 0.000000
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73826304 unmapped: 1630208 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 137 handle_osd_map epochs [137,138], i have 137, src has [1,138]
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 137 handle_osd_map epochs [138,138], i have 138, src has [1,138]
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 138 pg[9.1f( v 44'1041 (0'0,44'1041] local-lis/les=136/137 n=5 ec=57/38 lis/c=136/96 les/c/f=137/97/0 sis=136) [0]/[1] async=[0] r=0 lpr=136 pi=[96,136)/1 crt=44'1041 mlcod 44'1041 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.971595 1 0.000229
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 138 pg[9.1f( v 44'1041 (0'0,44'1041] local-lis/les=136/137 n=5 ec=57/38 lis/c=136/96 les/c/f=137/97/0 sis=136) [0]/[1] async=[0] r=0 lpr=136 pi=[96,136)/1 crt=44'1041 mlcod 44'1041 active+remapped mbc={255={}}] exit Started/Primary/Active 1.023367 0 0.000000
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 138 pg[9.1f( v 44'1041 (0'0,44'1041] local-lis/les=136/137 n=5 ec=57/38 lis/c=136/96 les/c/f=137/97/0 sis=136) [0]/[1] async=[0] r=0 lpr=136 pi=[96,136)/1 crt=44'1041 mlcod 44'1041 active+remapped mbc={255={}}] exit Started/Primary 2.008476 0 0.000000
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 138 pg[9.1f( v 44'1041 (0'0,44'1041] local-lis/les=136/137 n=5 ec=57/38 lis/c=136/96 les/c/f=137/97/0 sis=136) [0]/[1] async=[0] r=0 lpr=136 pi=[96,136)/1 crt=44'1041 mlcod 44'1041 active+remapped mbc={255={}}] exit Started 2.008508 0 0.000000
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 138 pg[9.1f( v 44'1041 (0'0,44'1041] local-lis/les=136/137 n=5 ec=57/38 lis/c=136/96 les/c/f=137/97/0 sis=136) [0]/[1] async=[0] r=0 lpr=136 pi=[96,136)/1 crt=44'1041 mlcod 44'1041 active+remapped mbc={255={}}] enter Reset
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 138 pg[9.1f( v 44'1041 (0'0,44'1041] local-lis/les=136/137 n=5 ec=57/38 lis/c=136/96 les/c/f=137/97/0 sis=138 pruub=14.984783173s) [0] async=[0] r=-1 lpr=138 pi=[96,138)/1 crt=44'1041 mlcod 44'1041 active pruub 282.670013428s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 138 pg[9.1f( v 44'1041 (0'0,44'1041] local-lis/les=136/137 n=5 ec=57/38 lis/c=136/96 les/c/f=137/97/0 sis=138 pruub=14.984680176s) [0] r=-1 lpr=138 pi=[96,138)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY pruub 282.670013428s@ mbc={}] exit Reset 0.000144 1 0.000209
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 138 pg[9.1f( v 44'1041 (0'0,44'1041] local-lis/les=136/137 n=5 ec=57/38 lis/c=136/96 les/c/f=137/97/0 sis=138 pruub=14.984680176s) [0] r=-1 lpr=138 pi=[96,138)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY pruub 282.670013428s@ mbc={}] enter Started
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 138 pg[9.1f( v 44'1041 (0'0,44'1041] local-lis/les=136/137 n=5 ec=57/38 lis/c=136/96 les/c/f=137/97/0 sis=138 pruub=14.984680176s) [0] r=-1 lpr=138 pi=[96,138)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY pruub 282.670013428s@ mbc={}] enter Start
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 138 pg[9.1f( v 44'1041 (0'0,44'1041] local-lis/les=136/137 n=5 ec=57/38 lis/c=136/96 les/c/f=137/97/0 sis=138 pruub=14.984680176s) [0] r=-1 lpr=138 pi=[96,138)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY pruub 282.670013428s@ mbc={}] state<Start>: transitioning to Stray
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 138 pg[9.1f( v 44'1041 (0'0,44'1041] local-lis/les=136/137 n=5 ec=57/38 lis/c=136/96 les/c/f=137/97/0 sis=138 pruub=14.984680176s) [0] r=-1 lpr=138 pi=[96,138)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY pruub 282.670013428s@ mbc={}] exit Start 0.000011 0 0.000000
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 138 pg[9.1f( v 44'1041 (0'0,44'1041] local-lis/les=136/137 n=5 ec=57/38 lis/c=136/96 les/c/f=137/97/0 sis=138 pruub=14.984680176s) [0] r=-1 lpr=138 pi=[96,138)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY pruub 282.670013428s@ mbc={}] enter Started/Stray
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 138 handle_osd_map epochs [138,138], i have 138, src has [1,138]
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73834496 unmapped: 1622016 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 138 handle_osd_map epochs [138,139], i have 138, src has [1,139]
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 139 pg[9.1f( v 44'1041 (0'0,44'1041] local-lis/les=136/137 n=5 ec=57/38 lis/c=136/96 les/c/f=137/97/0 sis=138) [0] r=-1 lpr=138 pi=[96,138)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.136635 7 0.000214
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 139 pg[9.1f( v 44'1041 (0'0,44'1041] local-lis/les=136/137 n=5 ec=57/38 lis/c=136/96 les/c/f=137/97/0 sis=138) [0] r=-1 lpr=138 pi=[96,138)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 139 pg[9.1f( v 44'1041 (0'0,44'1041] local-lis/les=136/137 n=5 ec=57/38 lis/c=136/96 les/c/f=137/97/0 sis=138) [0] r=-1 lpr=138 pi=[96,138)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 139 pg[9.1f( v 44'1041 (0'0,44'1041] local-lis/les=136/137 n=5 ec=57/38 lis/c=136/96 les/c/f=137/97/0 sis=138) [0] r=-1 lpr=138 pi=[96,138)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000197 1 0.000163
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 139 pg[9.1f( v 44'1041 (0'0,44'1041] local-lis/les=136/137 n=5 ec=57/38 lis/c=136/96 les/c/f=137/97/0 sis=138) [0] r=-1 lpr=138 pi=[96,138)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 139 pg[9.1f( v 44'1041 (0'0,44'1041] lb MIN local-lis/les=136/137 n=5 ec=57/38 lis/c=136/96 les/c/f=137/97/0 sis=138) [0] r=-1 lpr=138 DELETING pi=[96,138)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.078816 2 0.000254
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 139 pg[9.1f( v 44'1041 (0'0,44'1041] lb MIN local-lis/les=136/137 n=5 ec=57/38 lis/c=136/96 les/c/f=137/97/0 sis=138) [0] r=-1 lpr=138 pi=[96,138)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.079106 0 0.000000
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 pg_epoch: 139 pg[9.1f( v 44'1041 (0'0,44'1041] lb MIN local-lis/les=136/137 n=5 ec=57/38 lis/c=136/96 les/c/f=137/97/0 sis=138) [0] r=-1 lpr=138 pi=[96,138)/1 crt=44'1041 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.215857 0 0.000000
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 1851392 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0d000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0d000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 1843200 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 1843200 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 904532 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.526349068s of 10.849187851s, submitted: 78
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0d000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 1835008 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 1818624 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 1818624 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 1810432 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 1810432 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 906716 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:25 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 1810432 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73678848 unmapped: 1777664 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: mgrc handle_mgr_map Got map version 30
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/1282799344,v1:192.168.122.100:6801/1282799344]
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 1818624 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 1818624 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73654272 unmapped: 1802240 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 906125 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73654272 unmapped: 1802240 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73654272 unmapped: 1802240 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73654272 unmapped: 1802240 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73654272 unmapped: 1802240 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73654272 unmapped: 1802240 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 906125 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73654272 unmapped: 1802240 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73654272 unmapped: 1802240 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73662464 unmapped: 1794048 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73662464 unmapped: 1794048 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 1785856 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 906125 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 1785856 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73678848 unmapped: 1777664 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 1769472 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 1769472 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73703424 unmapped: 1753088 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 906125 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73703424 unmapped: 1753088 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73703424 unmapped: 1753088 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73711616 unmapped: 1744896 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73711616 unmapped: 1744896 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73728000 unmapped: 1728512 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 906125 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 1720320 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 1720320 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 1712128 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 ms_handle_reset con 0x564f17237c00 session 0x564f15762d20
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 1712128 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 1695744 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 906125 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 1695744 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73768960 unmapped: 1687552 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73777152 unmapped: 1679360 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73777152 unmapped: 1679360 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73777152 unmapped: 1679360 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 906125 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 1662976 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 1662976 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 1654784 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 1654784 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 1654784 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 906125 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73826304 unmapped: 1630208 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73826304 unmapped: 1630208 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73826304 unmapped: 1630208 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73834496 unmapped: 1622016 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73834496 unmapped: 1622016 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 906125 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:14:25 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73850880 unmapped: 1605632 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:14:25 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:14:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:14:25 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73850880 unmapped: 1605632 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73850880 unmapped: 1605632 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 53.732524872s of 53.753044128s, submitted: 3
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 1597440 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 1597440 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 905534 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 1597440 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73867264 unmapped: 1589248 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73867264 unmapped: 1589248 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 1581056 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 1581056 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 904943 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1572864 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1572864 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1572864 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 1564672 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 1564672 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 904943 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 1556480 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73908224 unmapped: 1548288 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73908224 unmapped: 1548288 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 1540096 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 1540096 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 904943 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 1540096 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 1531904 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 1531904 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 ms_handle_reset con 0x564f16bf9800 session 0x564f17304000
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 1531904 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73940992 unmapped: 1515520 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 904943 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 1507328 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73957376 unmapped: 1499136 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73957376 unmapped: 1499136 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:14:26 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73957376 unmapped: 1499136 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 1490944 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 904943 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 ms_handle_reset con 0x564f14586000 session 0x564f14565e00
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 ms_handle_reset con 0x564f14b9d400 session 0x564f157b2780
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 1490944 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 1490944 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73973760 unmapped: 1482752 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73973760 unmapped: 1482752 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 1466368 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 904943 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 ms_handle_reset con 0x564f157b5800 session 0x564f14a33680
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 1466368 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 1458176 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 1458176 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 1458176 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74006528 unmapped: 1449984 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 904943 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74006528 unmapped: 1449984 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74006528 unmapped: 1449984 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74014720 unmapped: 1441792 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74014720 unmapped: 1441792 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 1433600 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 904943 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 1409024 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 1409024 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 44.328922272s of 44.338092804s, submitted: 2
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 1392640 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 1392640 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74072064 unmapped: 1384448 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 906455 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74072064 unmapped: 1384448 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74080256 unmapped: 1376256 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74080256 unmapped: 1376256 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74080256 unmapped: 1376256 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 1368064 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 906455 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 1368064 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 1359872 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 1359872 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 1359872 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.075464249s of 12.079653740s, submitted: 1
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 1351680 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 905864 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 1351680 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74113024 unmapped: 1343488 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74113024 unmapped: 1343488 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74113024 unmapped: 1343488 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74129408 unmapped: 1327104 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 905864 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74129408 unmapped: 1327104 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74129408 unmapped: 1327104 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 1318912 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 1318912 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74145792 unmapped: 1310720 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 905864 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74153984 unmapped: 1302528 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74153984 unmapped: 1302528 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74162176 unmapped: 1294336 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74162176 unmapped: 1294336 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74162176 unmapped: 1294336 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 905864 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 1286144 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 1286144 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 1286144 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74178560 unmapped: 1277952 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74178560 unmapped: 1277952 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 905864 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74186752 unmapped: 1269760 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74186752 unmapped: 1269760 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74194944 unmapped: 1261568 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74194944 unmapped: 1261568 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74194944 unmapped: 1261568 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 905864 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 1245184 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 1245184 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 1236992 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 1236992 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 1236992 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 905864 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 1220608 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74244096 unmapped: 1212416 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74244096 unmapped: 1212416 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74244096 unmapped: 1212416 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 1196032 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 905864 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 ms_handle_reset con 0x564f167ea400 session 0x564f177603c0
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 1196032 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1187840 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1187840 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1179648 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1179648 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 905864 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 1171456 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 1171456 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 1171456 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 1163264 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 1163264 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 905864 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 1163264 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74301440 unmapped: 1155072 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74301440 unmapped: 1155072 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74301440 unmapped: 1155072 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 1146880 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 905864 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 1146880 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74317824 unmapped: 1138688 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 52.676567078s of 52.684104919s, submitted: 2
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74317824 unmapped: 1138688 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74317824 unmapped: 1138688 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 1130496 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907376 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 1122304 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 1122304 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 1114112 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 1114112 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 1114112 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907376 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 1105920 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 1105920 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 1097728 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 1097728 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 1089536 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907376 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 1089536 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 1089536 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 1089536 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 1081344 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 1081344 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907376 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74383360 unmapped: 1073152 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74383360 unmapped: 1073152 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 1064960 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 1064960 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 1064960 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907376 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 1048576 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 1048576 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74416128 unmapped: 1040384 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74416128 unmapped: 1040384 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74416128 unmapped: 1040384 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907376 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74424320 unmapped: 1032192 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74424320 unmapped: 1032192 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74424320 unmapped: 1032192 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 1024000 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 1024000 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907376 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 1024000 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 1015808 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 1015808 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74448896 unmapped: 1007616 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74448896 unmapped: 1007616 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907376 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74448896 unmapped: 1007616 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 999424 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 999424 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 999424 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74465280 unmapped: 991232 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907376 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74465280 unmapped: 991232 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74473472 unmapped: 983040 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74473472 unmapped: 983040 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74473472 unmapped: 983040 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74481664 unmapped: 974848 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907376 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74489856 unmapped: 966656 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74489856 unmapped: 966656 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74498048 unmapped: 958464 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74498048 unmapped: 958464 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74498048 unmapped: 958464 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907376 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 950272 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 950272 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74514432 unmapped: 942080 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74514432 unmapped: 942080 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 933888 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907376 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 933888 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74530816 unmapped: 925696 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74530816 unmapped: 925696 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74530816 unmapped: 925696 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74539008 unmapped: 917504 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907376 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74539008 unmapped: 917504 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74539008 unmapped: 917504 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74547200 unmapped: 909312 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74547200 unmapped: 909312 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74547200 unmapped: 909312 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907376 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 892928 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 892928 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74571776 unmapped: 884736 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74571776 unmapped: 884736 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74579968 unmapped: 876544 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907376 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74579968 unmapped: 876544 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74588160 unmapped: 868352 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74588160 unmapped: 868352 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74588160 unmapped: 868352 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74596352 unmapped: 860160 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907376 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74596352 unmapped: 860160 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74604544 unmapped: 851968 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74604544 unmapped: 851968 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74604544 unmapped: 851968 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74604544 unmapped: 851968 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907376 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74612736 unmapped: 843776 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74612736 unmapped: 843776 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74620928 unmapped: 835584 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74620928 unmapped: 835584 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74620928 unmapped: 835584 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907376 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74629120 unmapped: 827392 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74629120 unmapped: 827392 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74637312 unmapped: 819200 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74637312 unmapped: 819200 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74637312 unmapped: 819200 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907376 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 802816 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 802816 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 794624 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 794624 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 794624 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907376 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 786432 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 786432 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 786432 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74678272 unmapped: 778240 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74678272 unmapped: 778240 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907376 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 770048 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 770048 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 770048 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 761856 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 761856 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907376 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 761856 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74702848 unmapped: 753664 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74702848 unmapped: 753664 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 745472 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74719232 unmapped: 737280 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 907376 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74719232 unmapped: 737280 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74727424 unmapped: 729088 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74727424 unmapped: 729088 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 116.974098206s of 116.977508545s, submitted: 1
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74727424 unmapped: 729088 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 720896 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 906785 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 720896 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74743808 unmapped: 712704 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74743808 unmapped: 712704 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74743808 unmapped: 712704 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 704512 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 906785 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 704512 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74760192 unmapped: 696320 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74760192 unmapped: 696320 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74760192 unmapped: 696320 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74768384 unmapped: 688128 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 906785 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74768384 unmapped: 688128 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74768384 unmapped: 688128 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74776576 unmapped: 679936 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74776576 unmapped: 679936 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74776576 unmapped: 679936 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 906785 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74784768 unmapped: 671744 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74784768 unmapped: 671744 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74792960 unmapped: 663552 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74792960 unmapped: 663552 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74809344 unmapped: 647168 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 906785 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74809344 unmapped: 647168 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74809344 unmapped: 647168 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74809344 unmapped: 647168 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 ms_handle_reset con 0x564f1682c000 session 0x564f16a9d2c0
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74817536 unmapped: 638976 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74817536 unmapped: 638976 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 906785 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74825728 unmapped: 630784 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74825728 unmapped: 630784 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74825728 unmapped: 630784 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74833920 unmapped: 622592 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74833920 unmapped: 622592 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 906785 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74833920 unmapped: 622592 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74842112 unmapped: 614400 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74842112 unmapped: 614400 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74850304 unmapped: 606208 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74850304 unmapped: 606208 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 906785 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74850304 unmapped: 606208 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74858496 unmapped: 598016 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74858496 unmapped: 598016 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74866688 unmapped: 589824 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74866688 unmapped: 589824 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 906785 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74866688 unmapped: 589824 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74874880 unmapped: 581632 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74874880 unmapped: 581632 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74883072 unmapped: 573440 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74883072 unmapped: 573440 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 906785 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74883072 unmapped: 573440 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74883072 unmapped: 573440 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 600.1 total, 600.0 interval
Cumulative writes: 6346 writes, 26K keys, 6346 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
Cumulative WAL: 6346 writes, 1060 syncs, 5.99 writes per sync, written: 0.02 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 6346 writes, 26K keys, 6346 commit groups, 1.0 writes per commit group, ingest: 19.62 MB, 0.03 MB/s
Interval WAL: 6346 writes, 1060 syncs, 5.99 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x564f13165350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x564f13165350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74948608 unmapped: 507904 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74948608 unmapped: 507904 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74964992 unmapped: 491520 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 906785 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74964992 unmapped: 491520 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 483328 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 483328 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 483328 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 475136 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 906785 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 475136 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74989568 unmapped: 466944 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74989568 unmapped: 466944 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 458752 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 75014144 unmapped: 442368 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 906785 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 75014144 unmapped: 442368 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 434176 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 434176 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 434176 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 75030528 unmapped: 425984 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 906785 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 75030528 unmapped: 425984 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 75030528 unmapped: 425984 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 75038720 unmapped: 417792 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 75038720 unmapped: 417792 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 71.088645935s of 71.093086243s, submitted: 1
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 75038720 unmapped: 417792 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 908297 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 75046912 unmapped: 409600 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 75046912 unmapped: 409600 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 75055104 unmapped: 401408 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 75055104 unmapped: 401408 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 75063296 unmapped: 393216 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 908297 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 385024 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 385024 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 75079680 unmapped: 376832 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 75079680 unmapped: 376832 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 75087872 unmapped: 368640 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 908297 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 360448 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 360448 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 75104256 unmapped: 352256 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 75104256 unmapped: 352256 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 75104256 unmapped: 352256 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 908297 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 75112448 unmapped: 344064 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 75112448 unmapped: 344064 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 75112448 unmapped: 344064 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 75120640 unmapped: 335872 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 75120640 unmapped: 335872 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 908297 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 75128832 unmapped: 327680 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 75128832 unmapped: 327680 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 75137024 unmapped: 319488 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 75137024 unmapped: 319488 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 75137024 unmapped: 319488 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 908297 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 75145216 unmapped: 311296 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 75145216 unmapped: 311296 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 75145216 unmapped: 311296 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 75153408 unmapped: 303104 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 75161600 unmapped: 294912 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 908297 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 ms_handle_reset con 0x564f14ba0400 session 0x564f168c3680
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 75169792 unmapped: 286720 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 75169792 unmapped: 286720 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 75169792 unmapped: 286720 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 75177984 unmapped: 278528 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 75177984 unmapped: 278528 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 908297 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 75186176 unmapped: 270336 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 75186176 unmapped: 270336 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 75186176 unmapped: 270336 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 75194368 unmapped: 262144 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 75194368 unmapped: 262144 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 908297 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 75194368 unmapped: 262144 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 75202560 unmapped: 253952 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 75202560 unmapped: 253952 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 75202560 unmapped: 253952 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 75210752 unmapped: 245760 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 908297 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 75210752 unmapped: 245760 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 75218944 unmapped: 237568 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 75218944 unmapped: 237568 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 75227136 unmapped: 229376 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 75227136 unmapped: 229376 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 908297 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 51.575641632s of 51.579601288s, submitted: 1
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 75243520 unmapped: 212992 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 1171456 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77750272 unmapped: 851968 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77750272 unmapped: 851968 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77750272 unmapped: 851968 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 908297 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77750272 unmapped: 851968 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77758464 unmapped: 843776 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77766656 unmapped: 835584 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77766656 unmapped: 835584 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77774848 unmapped: 827392 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 909218 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77774848 unmapped: 827392 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77774848 unmapped: 827392 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77774848 unmapped: 827392 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77774848 unmapped: 827392 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.349395752s of 13.540295601s, submitted: 245
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77774848 unmapped: 827392 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 909218 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77774848 unmapped: 827392 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77774848 unmapped: 827392 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77783040 unmapped: 819200 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77783040 unmapped: 819200 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77799424 unmapped: 802816 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 909218 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77799424 unmapped: 802816 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77799424 unmapped: 802816 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77807616 unmapped: 794624 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77807616 unmapped: 794624 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77815808 unmapped: 786432 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 909218 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77832192 unmapped: 770048 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77840384 unmapped: 761856 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77840384 unmapped: 761856 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77840384 unmapped: 761856 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 909218 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77848576 unmapped: 753664 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77848576 unmapped: 753664 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77856768 unmapped: 745472 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 ms_handle_reset con 0x564f14587800 session 0x564f16cbfa40
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77856768 unmapped: 745472 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77864960 unmapped: 737280 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 909218 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77864960 unmapped: 737280 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77864960 unmapped: 737280 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77864960 unmapped: 737280 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 729088 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 729088 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 909218 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77881344 unmapped: 720896 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 712704 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 704512 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 ms_handle_reset con 0x564f167e5c00 session 0x564f17484000
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 704512 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77905920 unmapped: 696320 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 909218 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77905920 unmapped: 696320 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77905920 unmapped: 696320 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77905920 unmapped: 696320 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77914112 unmapped: 688128 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 34.855998993s of 34.859409332s, submitted: 1
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77930496 unmapped: 671744 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 910730 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77930496 unmapped: 671744 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77930496 unmapped: 671744 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77930496 unmapped: 671744 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77930496 unmapped: 671744 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77930496 unmapped: 671744 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 910730 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77930496 unmapped: 671744 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77930496 unmapped: 671744 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77938688 unmapped: 663552 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77938688 unmapped: 663552 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.040864944s of 10.045613289s, submitted: 1
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77938688 unmapped: 663552 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911651 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77938688 unmapped: 663552 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77938688 unmapped: 663552 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77938688 unmapped: 663552 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77938688 unmapped: 663552 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77938688 unmapped: 663552 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911651 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77938688 unmapped: 663552 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77938688 unmapped: 663552 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77938688 unmapped: 663552 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77938688 unmapped: 663552 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77938688 unmapped: 663552 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911060 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77938688 unmapped: 663552 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77938688 unmapped: 663552 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77938688 unmapped: 663552 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77946880 unmapped: 655360 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77946880 unmapped: 655360 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911060 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77946880 unmapped: 655360 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77946880 unmapped: 655360 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77946880 unmapped: 655360 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77946880 unmapped: 655360 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77946880 unmapped: 655360 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 ms_handle_reset con 0x564f1687a400 session 0x564f17058f00
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911060 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77946880 unmapped: 655360 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77946880 unmapped: 655360 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77946880 unmapped: 655360 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77955072 unmapped: 647168 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77955072 unmapped: 647168 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911060 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77955072 unmapped: 647168 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77955072 unmapped: 647168 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77955072 unmapped: 647168 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77955072 unmapped: 647168 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77955072 unmapped: 647168 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911060 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77955072 unmapped: 647168 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77955072 unmapped: 647168 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77955072 unmapped: 647168 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77955072 unmapped: 647168 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77955072 unmapped: 647168 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911060 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77955072 unmapped: 647168 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77955072 unmapped: 647168 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 37.314334869s of 37.328449249s, submitted: 3
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77971456 unmapped: 630784 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77971456 unmapped: 630784 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77971456 unmapped: 630784 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912572 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77971456 unmapped: 630784 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77971456 unmapped: 630784 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77971456 unmapped: 630784 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77971456 unmapped: 630784 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77971456 unmapped: 630784 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911981 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77971456 unmapped: 630784 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77971456 unmapped: 630784 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77971456 unmapped: 630784 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77979648 unmapped: 622592 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.101675987s of 12.109487534s, submitted: 2
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77987840 unmapped: 614400 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911390 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77987840 unmapped: 614400 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77987840 unmapped: 614400 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77987840 unmapped: 614400 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77987840 unmapped: 614400 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77987840 unmapped: 614400 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911390 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77987840 unmapped: 614400 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77987840 unmapped: 614400 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77987840 unmapped: 614400 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77987840 unmapped: 614400 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 ms_handle_reset con 0x564f14b9d400 session 0x564f168832c0
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77987840 unmapped: 614400 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911390 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77987840 unmapped: 614400 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77987840 unmapped: 614400 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77987840 unmapped: 614400 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77987840 unmapped: 614400 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77987840 unmapped: 614400 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911390 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77987840 unmapped: 614400 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77987840 unmapped: 614400 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77987840 unmapped: 614400 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77987840 unmapped: 614400 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77987840 unmapped: 614400 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911390 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 606208 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 606208 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 606208 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 606208 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 606208 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911390 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 606208 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 606208 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 606208 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 606208 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 606208 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911390 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 606208 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 606208 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 606208 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 606208 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 606208 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911390 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 606208 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 606208 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 606208 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 606208 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 598016 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911390 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 598016 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 598016 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 598016 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 598016 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 598016 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911390 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 589824 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 589824 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 589824 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 589824 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 589824 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911390 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 589824 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 589824 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 589824 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 589824 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 589824 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911390 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 589824 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 589824 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 589824 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 589824 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 589824 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911390 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 589824 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 589824 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 589824 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 589824 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 589824 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911390 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 589824 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 589824 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78020608 unmapped: 581632 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78020608 unmapped: 581632 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78020608 unmapped: 581632 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911390 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78020608 unmapped: 581632 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78020608 unmapped: 581632 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78020608 unmapped: 581632 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78020608 unmapped: 581632 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78020608 unmapped: 581632 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911390 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78020608 unmapped: 581632 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78020608 unmapped: 581632 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78020608 unmapped: 581632 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78020608 unmapped: 581632 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78020608 unmapped: 581632 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911390 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78020608 unmapped: 581632 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78020608 unmapped: 581632 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78020608 unmapped: 581632 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78020608 unmapped: 581632 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77938688 unmapped: 663552 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911390 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77938688 unmapped: 663552 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77938688 unmapped: 663552 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77938688 unmapped: 663552 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77938688 unmapped: 663552 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77955072 unmapped: 647168 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911390 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77955072 unmapped: 647168 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77955072 unmapped: 647168 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77955072 unmapped: 647168 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77955072 unmapped: 647168 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77955072 unmapped: 647168 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911390 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77955072 unmapped: 647168 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 97.760246277s of 97.782371521s, submitted: 2
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77971456 unmapped: 630784 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77971456 unmapped: 630784 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77971456 unmapped: 630784 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77979648 unmapped: 622592 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912902 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77979648 unmapped: 622592 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77979648 unmapped: 622592 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77979648 unmapped: 622592 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77979648 unmapped: 622592 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77979648 unmapped: 622592 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912902 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77979648 unmapped: 622592 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77979648 unmapped: 622592 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77979648 unmapped: 622592 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77979648 unmapped: 622592 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 606208 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 ms_handle_reset con 0x564f16bfac00 session 0x564f16908780
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912902 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 606208 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 ms_handle_reset con 0x564f1687b800 session 0x564f16cb43c0
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 606208 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 606208 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 606208 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 606208 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912902 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 606208 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 606208 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 606208 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 606208 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 606208 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912902 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 606208 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 606208 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 606208 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 606208 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 27.736358643s of 27.743860245s, submitted: 1
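[editor's note] The periodic _kv_sync_thread utilization reports make the commit cost explicit: subtracting idle time from wall time gives how long the sync thread actually worked. Computed for three of the reports in this stretch of the log:

# (idle seconds, total seconds, submitted transactions) copied from the log.
samples = [(27.736358643, 27.743860245, 1),
           (10.323648453, 10.332687378, 2),
           (72.601402283, 72.615432739, 3)]

for idle, total, submitted in samples:
    busy = total - idle
    print(f"busy {busy * 1e3:6.2f} ms of {total:6.2f} s "
          f"({busy / total:.4%}) for {submitted} commit(s)")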
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 598016 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 914414 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 598016 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 598016 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78020608 unmapped: 581632 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78020608 unmapped: 581632 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78036992 unmapped: 565248 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915926 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78036992 unmapped: 565248 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78036992 unmapped: 565248 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78036992 unmapped: 565248 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78036992 unmapped: 565248 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.323648453s of 10.332687378s, submitted: 2
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78036992 unmapped: 565248 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915335 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78036992 unmapped: 565248 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78036992 unmapped: 565248 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78036992 unmapped: 565248 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78036992 unmapped: 565248 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78036992 unmapped: 565248 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915335 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78036992 unmapped: 565248 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78036992 unmapped: 565248 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78045184 unmapped: 557056 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78045184 unmapped: 557056 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78045184 unmapped: 557056 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915335 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78045184 unmapped: 557056 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78045184 unmapped: 557056 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78045184 unmapped: 557056 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78045184 unmapped: 557056 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78069760 unmapped: 532480 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915335 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78069760 unmapped: 532480 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78069760 unmapped: 532480 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78069760 unmapped: 532480 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78069760 unmapped: 532480 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 ms_handle_reset con 0x564f14a8f800 session 0x564f17304f00
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78069760 unmapped: 532480 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915335 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78069760 unmapped: 532480 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78069760 unmapped: 532480 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78069760 unmapped: 532480 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78069760 unmapped: 532480 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78069760 unmapped: 532480 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915335 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78069760 unmapped: 532480 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78069760 unmapped: 532480 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78069760 unmapped: 532480 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78069760 unmapped: 532480 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78077952 unmapped: 524288 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915335 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78077952 unmapped: 524288 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78077952 unmapped: 524288 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78077952 unmapped: 524288 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 33.660980225s of 33.682113647s, submitted: 1
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 79126528 unmapped: 524288 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 79142912 unmapped: 507904 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 916847 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 79142912 unmapped: 507904 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 79142912 unmapped: 507904 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 79142912 unmapped: 507904 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 79142912 unmapped: 507904 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 1556480 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 916256 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 1556480 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 1556480 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 1556480 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 1556480 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 1548288 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915665 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 1540096 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 1540096 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 1540096 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 1540096 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 1540096 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915665 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 1540096 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 1540096 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 1540096 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 1540096 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78127104 unmapped: 1523712 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915665 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78127104 unmapped: 1523712 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78127104 unmapped: 1523712 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78127104 unmapped: 1523712 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78127104 unmapped: 1523712 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78127104 unmapped: 1523712 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915665 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78127104 unmapped: 1523712 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78127104 unmapped: 1523712 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1515520 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1515520 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1515520 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915665 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1515520 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1515520 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1515520 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1515520 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1515520 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915665 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1515520 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1515520 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1515520 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1515520 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 1499136 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915665 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 1499136 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 1499136 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 1499136 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 1499136 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 1499136 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915665 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 1499136 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 1499136 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 1499136 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 1490944 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 1490944 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915665 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 1490944 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 1490944 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 1490944 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 ms_handle_reset con 0x564f167e8000 session 0x564f16a9cd20
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 1490944 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 1490944 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915665 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 1490944 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 1490944 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 1490944 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 1474560 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 1474560 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 ms_handle_reset con 0x564f1682e400 session 0x564f16770d20
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915665 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 1474560 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 1474560 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 1474560 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 1474560 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 1474560 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915665 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 1474560 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 1474560 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 1474560 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 1474560 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 1474560 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 72.601402283s of 72.615432739s, submitted: 3
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917177 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 1474560 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 1474560 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 1474560 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 1474560 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 1474560 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 916586 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 1474560 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 1474560 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 1474560 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78192640 unmapped: 1458176 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78192640 unmapped: 1458176 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 916586 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78192640 unmapped: 1458176 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78192640 unmapped: 1458176 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78192640 unmapped: 1458176 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78192640 unmapped: 1458176 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78192640 unmapped: 1458176 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 916586 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78192640 unmapped: 1458176 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78192640 unmapped: 1458176 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 ms_handle_reset con 0x564f14b9d000 session 0x564f16a82960
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78192640 unmapped: 1458176 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78200832 unmapped: 1449984 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78200832 unmapped: 1449984 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 916586 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78200832 unmapped: 1449984 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78200832 unmapped: 1449984 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78200832 unmapped: 1449984 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78200832 unmapped: 1449984 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78200832 unmapped: 1449984 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 916586 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78200832 unmapped: 1449984 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78200832 unmapped: 1449984 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78200832 unmapped: 1449984 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78217216 unmapped: 1433600 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78217216 unmapped: 1433600 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 916586 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78217216 unmapped: 1433600 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 31.518285751s of 31.530691147s, submitted: 2
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78217216 unmapped: 1433600 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78217216 unmapped: 1433600 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78217216 unmapped: 1433600 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78217216 unmapped: 1433600 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918098 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78217216 unmapped: 1433600 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78217216 unmapped: 1433600 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78225408 unmapped: 1425408 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78225408 unmapped: 1425408 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78225408 unmapped: 1425408 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917507 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78225408 unmapped: 1425408 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78225408 unmapped: 1425408 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78225408 unmapped: 1425408 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78225408 unmapped: 1425408 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78225408 unmapped: 1425408 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917507 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78225408 unmapped: 1425408 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78225408 unmapped: 1425408 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78225408 unmapped: 1425408 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 1409024 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 1409024 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917507 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 1409024 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 1409024 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 1409024 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 1400832 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 1400832 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917507 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 1400832 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 1400832 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 1400832 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 1400832 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 1400832 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917507 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 1400832 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 1400832 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 1400832 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 1400832 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 32.605617523s of 32.614505768s, submitted: 2
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 ms_handle_reset con 0x564f16830800 session 0x564f16a94f00
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 1400832 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 916916 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 1400832 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 1392640 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 1392640 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 1376256 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 1376256 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 916916 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 1376256 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 1376256 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 1376256 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 1376256 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 1376256 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 916916 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 1376256 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 1376256 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 1376256 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 1376256 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 1376256 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 916916 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 1376256 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 1376256 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 1376256 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 1376256 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 1376256 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 916916 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 1376256 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 1376256 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 1376256 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 1359872 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 1359872 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 916916 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 1359872 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 1359872 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 1359872 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 1359872 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 1359872 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 916916 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 1359872 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 1359872 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 1359872 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 1359872 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 1359872 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 916916 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 1359872 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 1359872 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 1359872 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 1359872 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 1359872 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 916916 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 1359872 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 1359872 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 1359872 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 1359872 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 1359872 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 916916 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 1359872 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 1359872 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 1359872 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 1359872 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 916916 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 55.637229919s of 55.641765594s, submitted: 1
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918428 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918428 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918428 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918428 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918428 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918428 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918428 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918428 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 ms_handle_reset con 0x564f14b9dc00 session 0x564f167710e0
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918428 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918428 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918428 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 51.884658813s of 51.889488220s, submitted: 1
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 6867 writes, 27K keys, 6867 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 6867 writes, 1311 syncs, 5.24 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 521 writes, 817 keys, 521 commit groups, 1.0 writes per commit group, ingest: 0.26 MB, 0.00 MB/s#012Interval WAL: 521 writes, 251 syncs, 2.08 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x564f13165350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x564f13165350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921452 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920861 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920861 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 1343488 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 1343488 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 1343488 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1335296 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1335296 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920861 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1335296 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.083230972s of 20.095113754s, submitted: 3
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1335296 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1335296 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1335296 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1335296 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 922373 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1335296 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1335296 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1335296 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1335296 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1335296 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921191 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1335296 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1335296 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1335296 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1335296 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1327104 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921191 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1327104 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1327104 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1327104 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1327104 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1327104 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921191 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1327104 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1327104 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1327104 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1327104 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1327104 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921191 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1327104 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1327104 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1327104 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1327104 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1327104 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921191 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1327104 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1327104 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1327104 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1327104 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1327104 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921191 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1327104 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1327104 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1327104 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1327104 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1327104 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921191 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1327104 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1327104 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1327104 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1327104 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1327104 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921191 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1327104 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1327104 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78340096 unmapped: 1310720 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78340096 unmapped: 1310720 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78340096 unmapped: 1310720 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921191 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78340096 unmapped: 1310720 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78340096 unmapped: 1310720 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78340096 unmapped: 1310720 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78340096 unmapped: 1310720 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78340096 unmapped: 1310720 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921191 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78340096 unmapped: 1310720 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78340096 unmapped: 1310720 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78340096 unmapped: 1310720 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78340096 unmapped: 1310720 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 58.688781738s of 58.700057983s, submitted: 3
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78405632 unmapped: 1245184 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921191 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78454784 unmapped: 1196032 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1032192 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1015808 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1015808 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1015808 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 922703 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1015808 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1015808 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 1007616 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 1007616 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 1007616 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 922112 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 1007616 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 1007616 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 1007616 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 1007616 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 1007616 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread fragmentation_score=0.000024 took=0.000080s
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 922112 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 991232 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 991232 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 991232 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 983040 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 983040 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 922112 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 983040 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 983040 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 983040 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 974848 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 974848 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 922112 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 974848 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 974848 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 974848 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 974848 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 974848 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 922112 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 974848 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 966656 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 966656 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 958464 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 958464 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 922112 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 958464 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 958464 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 958464 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 958464 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 958464 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 922112 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 950272 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 950272 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 950272 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 950272 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 950272 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 922112 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 950272 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 950272 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 950272 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 ms_handle_reset con 0x564f169ee800 session 0x564f17761680
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 950272 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 950272 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 922112 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 950272 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 950272 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 950272 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 950272 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 950272 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 922112 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 950272 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 950272 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 950272 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 950272 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 950272 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 922112 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 950272 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 950272 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 950272 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 950272 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 950272 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 922112 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78708736 unmapped: 942080 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 933888 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 933888 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 933888 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 933888 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 922112 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 933888 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 71.293060303s of 72.358764648s, submitted: 253
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 933888 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 933888 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 933888 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 933888 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921521 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 933888 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 933888 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 139 handle_osd_map epochs [139,140], i have 139, src has [1,140]
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78725120 unmapped: 925696 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0a000/0x0/0x4ffc00000, data 0x1634ac/0x211000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 140 handle_osd_map epochs [141,141], i have 140, src has [1,141]
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 884736 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 141 handle_osd_map epochs [141,142], i have 141, src has [1,142]
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 142 ms_handle_reset con 0x564f16bf7400 session 0x564f17760d20
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 851968 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993058 data_alloc: 218103808 data_used: 180224
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 88227840 unmapped: 8208384 heap: 96436224 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 142 handle_osd_map epochs [142,143], i have 142, src has [1,143]
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.836822510s of 10.181390762s, submitted: 49
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 143 ms_handle_reset con 0x564f1682d800 session 0x564f177612c0
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16392192 heap: 96436224 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 80060416 unmapped: 16375808 heap: 96436224 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 80060416 unmapped: 16375808 heap: 96436224 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fbd8e000/0x0/0x4ffc00000, data 0xdd9852/0xe8d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 80060416 unmapped: 16375808 heap: 96436224 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026508 data_alloc: 218103808 data_used: 184320
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 80060416 unmapped: 16375808 heap: 96436224 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fbd8e000/0x0/0x4ffc00000, data 0xdd9852/0xe8d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 143 handle_osd_map epochs [144,144], i have 143, src has [1,144]
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 143 handle_osd_map epochs [144,144], i have 144, src has [1,144]
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 16351232 heap: 96436224 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 16351232 heap: 96436224 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 16351232 heap: 96436224 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 16351232 heap: 96436224 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029082 data_alloc: 218103808 data_used: 184320
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 16351232 heap: 96436224 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 16351232 heap: 96436224 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fbd8b000/0x0/0x4ffc00000, data 0xddb824/0xe90000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 16351232 heap: 96436224 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 16351232 heap: 96436224 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 16351232 heap: 96436224 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029082 data_alloc: 218103808 data_used: 184320
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 16351232 heap: 96436224 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 16351232 heap: 96436224 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 16351232 heap: 96436224 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fbd8b000/0x0/0x4ffc00000, data 0xddb824/0xe90000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 16351232 heap: 96436224 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 16351232 heap: 96436224 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029082 data_alloc: 218103808 data_used: 184320
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 16351232 heap: 96436224 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fbd8b000/0x0/0x4ffc00000, data 0xddb824/0xe90000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 16351232 heap: 96436224 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fbd8b000/0x0/0x4ffc00000, data 0xddb824/0xe90000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 16351232 heap: 96436224 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 16343040 heap: 96436224 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 16343040 heap: 96436224 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029082 data_alloc: 218103808 data_used: 184320
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 16343040 heap: 96436224 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fbd8b000/0x0/0x4ffc00000, data 0xddb824/0xe90000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 16343040 heap: 96436224 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 16343040 heap: 96436224 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fbd8b000/0x0/0x4ffc00000, data 0xddb824/0xe90000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 80101376 unmapped: 16334848 heap: 96436224 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 80101376 unmapped: 16334848 heap: 96436224 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029082 data_alloc: 218103808 data_used: 184320
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 80101376 unmapped: 16334848 heap: 96436224 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 80101376 unmapped: 16334848 heap: 96436224 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fbd8b000/0x0/0x4ffc00000, data 0xddb824/0xe90000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 80101376 unmapped: 16334848 heap: 96436224 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 80101376 unmapped: 16334848 heap: 96436224 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fbd8b000/0x0/0x4ffc00000, data 0xddb824/0xe90000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 80101376 unmapped: 16334848 heap: 96436224 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029082 data_alloc: 218103808 data_used: 184320
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 80101376 unmapped: 16334848 heap: 96436224 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 80101376 unmapped: 16334848 heap: 96436224 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fbd8b000/0x0/0x4ffc00000, data 0xddb824/0xe90000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 80101376 unmapped: 16334848 heap: 96436224 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 80101376 unmapped: 16334848 heap: 96436224 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 80101376 unmapped: 16334848 heap: 96436224 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 38.726776123s of 38.770790100s, submitted: 28
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fbd8b000/0x0/0x4ffc00000, data 0xddb824/0xe90000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1030594 data_alloc: 218103808 data_used: 184320
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 15286272 heap: 96436224 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 15286272 heap: 96436224 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 144 ms_handle_reset con 0x564f167ecc00 session 0x564f167712c0
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 144 ms_handle_reset con 0x564f16bf5800 session 0x564f16770000
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 144 ms_handle_reset con 0x564f167ec800 session 0x564f157b3a40
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 81141760 unmapped: 15294464 heap: 96436224 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 144 ms_handle_reset con 0x564f167ec800 session 0x564f157b2b40
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 144 ms_handle_reset con 0x564f167ecc00 session 0x564f16883a40
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 81166336 unmapped: 15269888 heap: 96436224 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 144 ms_handle_reset con 0x564f16bf5800 session 0x564f14565a40
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 81166336 unmapped: 15269888 heap: 96436224 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 144 handle_osd_map epochs [144,145], i have 144, src has [1,145]
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1035392 data_alloc: 218103808 data_used: 188416
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 145 heartbeat osd_stat(store_statfs(0x4fbd87000/0x0/0x4ffc00000, data 0xddd918/0xe94000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 81174528 unmapped: 15261696 heap: 96436224 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 145 handle_osd_map epochs [146,146], i have 145, src has [1,146]
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 146 ms_handle_reset con 0x564f16bf7400 session 0x564f14564960
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 82567168 unmapped: 17547264 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 146 ms_handle_reset con 0x564f167ec000 session 0x564f16908000
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 146 ms_handle_reset con 0x564f167ec000 session 0x564f16909680
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 146 ms_handle_reset con 0x564f167ec800 session 0x564f13f0a000
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 146 ms_handle_reset con 0x564f167ecc00 session 0x564f17305680
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fbc70000/0x0/0x4ffc00000, data 0xef3a63/0xfab000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 82575360 unmapped: 17539072 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 146 ms_handle_reset con 0x564f14b9e400 session 0x564f165b54a0
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 82575360 unmapped: 17539072 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 82575360 unmapped: 17539072 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 146 ms_handle_reset con 0x564f14b9c400 session 0x564f165b4960
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1094874 data_alloc: 218103808 data_used: 188416
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 82575360 unmapped: 17539072 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 146 handle_osd_map epochs [146,147], i have 146, src has [1,147]
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.541642189s of 10.736811638s, submitted: 30
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fb5a6000/0x0/0x4ffc00000, data 0x15bda63/0x1675000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f14b9e400 session 0x564f177434a0
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 82591744 unmapped: 17522688 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167ec000 session 0x564f17742d20
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 82894848 unmapped: 17219584 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 82944000 unmapped: 17170432 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 89538560 unmapped: 10575872 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150903 data_alloc: 218103808 data_used: 7323648
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 89538560 unmapped: 10575872 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fb57f000/0x0/0x4ffc00000, data 0x15e3a45/0x169d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 89538560 unmapped: 10575872 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 89538560 unmapped: 10575872 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 89538560 unmapped: 10575872 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 89538560 unmapped: 10575872 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150903 data_alloc: 218103808 data_used: 7323648
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 89538560 unmapped: 10575872 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 89538560 unmapped: 10575872 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fb57f000/0x0/0x4ffc00000, data 0x15e3a45/0x169d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 89538560 unmapped: 10575872 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 89538560 unmapped: 10575872 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.760478020s of 13.797927856s, submitted: 20
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 90898432 unmapped: 9216000 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177579 data_alloc: 218103808 data_used: 7467008
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91553792 unmapped: 8560640 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fb298000/0x0/0x4ffc00000, data 0x18caa45/0x1984000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fb28b000/0x0/0x4ffc00000, data 0x18d6a45/0x1990000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91725824 unmapped: 8388608 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91725824 unmapped: 8388608 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91725824 unmapped: 8388608 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91725824 unmapped: 8388608 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fb28b000/0x0/0x4ffc00000, data 0x18d6a45/0x1990000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183745 data_alloc: 218103808 data_used: 7462912
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91742208 unmapped: 8372224 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91742208 unmapped: 8372224 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91742208 unmapped: 8372224 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91742208 unmapped: 8372224 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fb28b000/0x0/0x4ffc00000, data 0x18d6a45/0x1990000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91742208 unmapped: 8372224 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1185257 data_alloc: 218103808 data_used: 7462912
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91742208 unmapped: 8372224 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91742208 unmapped: 8372224 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91521024 unmapped: 8593408 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.059681892s of 13.153066635s, submitted: 20
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91521024 unmapped: 8593408 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fb28c000/0x0/0x4ffc00000, data 0x18d6a45/0x1990000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91529216 unmapped: 8585216 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fb28c000/0x0/0x4ffc00000, data 0x18d6a45/0x1990000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183138 data_alloc: 218103808 data_used: 7462912
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91480064 unmapped: 8634368 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91480064 unmapped: 8634368 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91480064 unmapped: 8634368 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91480064 unmapped: 8634368 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fb28c000/0x0/0x4ffc00000, data 0x18d6a45/0x1990000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91480064 unmapped: 8634368 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183138 data_alloc: 218103808 data_used: 7462912
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91480064 unmapped: 8634368 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f1687b000 session 0x564f157625a0
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f14586000 session 0x564f15762780
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91496448 unmapped: 8617984 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fb28c000/0x0/0x4ffc00000, data 0x18d6a45/0x1990000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91496448 unmapped: 8617984 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.025923729s of 10.087536812s, submitted: 9
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167e9400 session 0x564f1576a1e0
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167e9400 session 0x564f1576a000
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f14586000 session 0x564f1576a780
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f14b9e400 session 0x564f168832c0
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167ec000 session 0x564f1576b2c0
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91521024 unmapped: 8593408 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fb20c000/0x0/0x4ffc00000, data 0x1956a45/0x1a10000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91521024 unmapped: 8593408 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1190221 data_alloc: 218103808 data_used: 7462912
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91521024 unmapped: 8593408 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91521024 unmapped: 8593408 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91521024 unmapped: 8593408 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91521024 unmapped: 8593408 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fb20c000/0x0/0x4ffc00000, data 0x1956a45/0x1a10000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91529216 unmapped: 8585216 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1190221 data_alloc: 218103808 data_used: 7462912
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91529216 unmapped: 8585216 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f16bfbc00 session 0x564f153de5a0
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91545600 unmapped: 8568832 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91365376 unmapped: 8749056 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fb20b000/0x0/0x4ffc00000, data 0x1956a68/0x1a11000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91586560 unmapped: 8527872 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91586560 unmapped: 8527872 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195806 data_alloc: 218103808 data_used: 7987200
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91586560 unmapped: 8527872 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91594752 unmapped: 8519680 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91594752 unmapped: 8519680 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91594752 unmapped: 8519680 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fb20b000/0x0/0x4ffc00000, data 0x1956a68/0x1a11000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91594752 unmapped: 8519680 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195806 data_alloc: 218103808 data_used: 7987200
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91594752 unmapped: 8519680 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167e8c00 session 0x564f17742960
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91594752 unmapped: 8519680 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.765953064s of 19.861989975s, submitted: 13
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91594752 unmapped: 8519680 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91963392 unmapped: 8151040 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91963392 unmapped: 8151040 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4faf73000/0x0/0x4ffc00000, data 0x1beea68/0x1ca9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216448 data_alloc: 218103808 data_used: 7991296
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91963392 unmapped: 8151040 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4faf73000/0x0/0x4ffc00000, data 0x1beea68/0x1ca9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91971584 unmapped: 8142848 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 92037120 unmapped: 8077312 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f16bf5000 session 0x564f16a94d20
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 92037120 unmapped: 8077312 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 92037120 unmapped: 8077312 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216448 data_alloc: 218103808 data_used: 7991296
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 92037120 unmapped: 8077312 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4faf73000/0x0/0x4ffc00000, data 0x1beea68/0x1ca9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 92037120 unmapped: 8077312 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f16833400 session 0x564f153def00
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f157b4400 session 0x564f14a10b40
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4faf73000/0x0/0x4ffc00000, data 0x1beea68/0x1ca9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.828751564s of 10.000874519s, submitted: 36
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91856896 unmapped: 8257536 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167e2c00 session 0x564f172410e0
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91889664 unmapped: 8224768 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fb28b000/0x0/0x4ffc00000, data 0x18d6a45/0x1990000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91889664 unmapped: 8224768 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1191022 data_alloc: 218103808 data_used: 7462912
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fb28b000/0x0/0x4ffc00000, data 0x18d6a45/0x1990000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91897856 unmapped: 8216576 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91897856 unmapped: 8216576 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91897856 unmapped: 8216576 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fb28b000/0x0/0x4ffc00000, data 0x18d6a45/0x1990000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91897856 unmapped: 8216576 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91897856 unmapped: 8216576 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f169ef400 session 0x564f16cb5a40
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f1682f000 session 0x564f172412c0
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192142 data_alloc: 218103808 data_used: 7462912
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91774976 unmapped: 8339456 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167e9c00 session 0x564f17241680
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 87531520 unmapped: 12582912 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 87531520 unmapped: 12582912 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 87531520 unmapped: 12582912 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fb972000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.902652740s of 12.135910988s, submitted: 46
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 87531520 unmapped: 12582912 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1061527 data_alloc: 218103808 data_used: 192512
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 87531520 unmapped: 12582912 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fb972000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 87531520 unmapped: 12582912 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 87531520 unmapped: 12582912 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 87531520 unmapped: 12582912 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 87531520 unmapped: 12582912 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1061527 data_alloc: 218103808 data_used: 192512
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 87539712 unmapped: 12574720 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fb972000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 87539712 unmapped: 12574720 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fb972000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 87539712 unmapped: 12574720 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 87539712 unmapped: 12574720 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fb972000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 87539712 unmapped: 12574720 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1060345 data_alloc: 218103808 data_used: 192512
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 87539712 unmapped: 12574720 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 87539712 unmapped: 12574720 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fb972000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 87539712 unmapped: 12574720 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 87539712 unmapped: 12574720 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 87539712 unmapped: 12574720 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.323100090s of 15.335790634s, submitted: 3
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f16bf5000 session 0x564f153de000
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f16bf6000 session 0x564f153f2f00
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f16bf6000 session 0x564f16a830e0
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167e9c00 session 0x564f16cb4780
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167e8800 session 0x564f1765f4a0
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1087666 data_alloc: 218103808 data_used: 192512
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 87564288 unmapped: 12550144 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 87564288 unmapped: 12550144 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 87564288 unmapped: 12550144 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fb5d8000/0x0/0x4ffc00000, data 0x117ba35/0x1234000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 87564288 unmapped: 12550144 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 87564288 unmapped: 12550144 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1087666 data_alloc: 218103808 data_used: 192512
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 87564288 unmapped: 12550144 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 87564288 unmapped: 12550144 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 87564288 unmapped: 12550144 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f16831c00 session 0x564f13f0be00
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f157b5800 session 0x564f13f0a3c0
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 87564288 unmapped: 12550144 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fb5d8000/0x0/0x4ffc00000, data 0x117ba35/0x1234000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f157b5800 session 0x564f13f0a000
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167e8800 session 0x564f16908000
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fb5d8000/0x0/0x4ffc00000, data 0x117ba35/0x1234000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 87564288 unmapped: 12550144 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1087798 data_alloc: 218103808 data_used: 192512
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 87859200 unmapped: 12255232 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 11132928 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 11132928 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 11132928 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fb5d8000/0x0/0x4ffc00000, data 0x117ba35/0x1234000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 11132928 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1113182 data_alloc: 218103808 data_used: 3969024
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 11132928 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 11132928 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 11132928 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fb5d8000/0x0/0x4ffc00000, data 0x117ba35/0x1234000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 11132928 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 11132928 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1113182 data_alloc: 218103808 data_used: 3969024
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.635030746s of 20.722810745s, submitted: 13
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91791360 unmapped: 8323072 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 90652672 unmapped: 9461760 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 93044736 unmapped: 7069696 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 93044736 unmapped: 7069696 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9eef000/0x0/0x4ffc00000, data 0x16c4a35/0x177d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 93044736 unmapped: 7069696 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161968 data_alloc: 218103808 data_used: 4472832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9eef000/0x0/0x4ffc00000, data 0x16c4a35/0x177d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 93044736 unmapped: 7069696 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 93110272 unmapped: 7004160 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 93110272 unmapped: 7004160 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 93151232 unmapped: 6963200 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 93151232 unmapped: 6963200 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160417 data_alloc: 218103808 data_used: 4472832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 93151232 unmapped: 6963200 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9eec000/0x0/0x4ffc00000, data 0x16c7a35/0x1780000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 93151232 unmapped: 6963200 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 93151232 unmapped: 6963200 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 93151232 unmapped: 6963200 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 93151232 unmapped: 6963200 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9eec000/0x0/0x4ffc00000, data 0x16c7a35/0x1780000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160417 data_alloc: 218103808 data_used: 4472832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 93151232 unmapped: 6963200 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9eec000/0x0/0x4ffc00000, data 0x16c7a35/0x1780000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 93167616 unmapped: 6946816 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 93167616 unmapped: 6946816 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 93167616 unmapped: 6946816 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 93167616 unmapped: 6946816 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160417 data_alloc: 218103808 data_used: 4472832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9eec000/0x0/0x4ffc00000, data 0x16c7a35/0x1780000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 93167616 unmapped: 6946816 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 93167616 unmapped: 6946816 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 93184000 unmapped: 6930432 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 93184000 unmapped: 6930432 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 93184000 unmapped: 6930432 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161025 data_alloc: 218103808 data_used: 4534272
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 93184000 unmapped: 6930432 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9eec000/0x0/0x4ffc00000, data 0x16c7a35/0x1780000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 93184000 unmapped: 6930432 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 26.114570618s of 26.307558060s, submitted: 56
Feb  2 05:14:26 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.27022 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9eec000/0x0/0x4ffc00000, data 0x16c7a35/0x1780000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 93241344 unmapped: 6873088 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167e9c00 session 0x564f14564960
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f16831c00 session 0x564f16a82960
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 9617408 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167edc00 session 0x564f173043c0
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa7d2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 9617408 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1067155 data_alloc: 218103808 data_used: 192512
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 9617408 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa7d2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa7d2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 9617408 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa7d2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 9617408 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 9617408 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa7d2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 9617408 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1067155 data_alloc: 218103808 data_used: 192512
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 9617408 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 9617408 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 9617408 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 9617408 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 9617408 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa7d2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1067155 data_alloc: 218103808 data_used: 192512
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa7d2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 9617408 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 9617408 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 9617408 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa7d2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 9617408 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa7d2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 9617408 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1067155 data_alloc: 218103808 data_used: 192512
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 9617408 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 9617408 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 9617408 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa7d2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 9617408 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 9617408 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1067155 data_alloc: 218103808 data_used: 192512
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 9617408 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 9617408 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa7d2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 9617408 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 9617408 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 9617408 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa7d2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1067155 data_alloc: 218103808 data_used: 192512
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 9617408 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 29.828050613s of 29.944892883s, submitted: 30
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f14b9fc00 session 0x564f1576ba40
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 90505216 unmapped: 9609216 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f14ba5400 session 0x564f16771860
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167e3000 session 0x564f16882000
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 90300416 unmapped: 19259392 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f16833c00 session 0x564f16eabe00
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f1682e800 session 0x564f16908780
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f14b9fc00 session 0x564f15433a40
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f14ba5400 session 0x564f168e41e0
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 90300416 unmapped: 19259392 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 90300416 unmapped: 19259392 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9f7d000/0x0/0x4ffc00000, data 0x1636a35/0x16ef000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1129453 data_alloc: 218103808 data_used: 192512
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 90300416 unmapped: 19259392 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167e3000 session 0x564f16a9c3c0
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9f7d000/0x0/0x4ffc00000, data 0x1636a35/0x16ef000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 90300416 unmapped: 19259392 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9f7d000/0x0/0x4ffc00000, data 0x1636a35/0x16ef000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 90300416 unmapped: 19259392 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167ec000 session 0x564f16cb50e0
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167ea800 session 0x564f16a82f00
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f14b9fc00 session 0x564f16a82b40
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f14ba5400 session 0x564f16eae000
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 90316800 unmapped: 19243008 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 90316800 unmapped: 19243008 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1131267 data_alloc: 218103808 data_used: 192512
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 90324992 unmapped: 19234816 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 13615104 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9f7c000/0x0/0x4ffc00000, data 0x1636a45/0x16f0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 13582336 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 13582336 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 13582336 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1190091 data_alloc: 218103808 data_used: 8843264
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 13582336 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9f7c000/0x0/0x4ffc00000, data 0x1636a45/0x16f0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 96010240 unmapped: 13549568 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 96010240 unmapped: 13549568 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 96010240 unmapped: 13549568 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 96010240 unmapped: 13549568 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1190091 data_alloc: 218103808 data_used: 8843264
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 96010240 unmapped: 13549568 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.334577560s of 19.459070206s, submitted: 16
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9f7c000/0x0/0x4ffc00000, data 0x1636a45/0x16f0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [0,1,2])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 101457920 unmapped: 8101888 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102375424 unmapped: 7184384 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102375424 unmapped: 7184384 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102375424 unmapped: 7184384 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9594000/0x0/0x4ffc00000, data 0x201ea45/0x20d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1278225 data_alloc: 234881024 data_used: 9875456
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102416384 unmapped: 7143424 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102416384 unmapped: 7143424 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102416384 unmapped: 7143424 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9594000/0x0/0x4ffc00000, data 0x201ea45/0x20d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 101457920 unmapped: 8101888 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 101457920 unmapped: 8101888 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1271937 data_alloc: 234881024 data_used: 9879552
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 101457920 unmapped: 8101888 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 101457920 unmapped: 8101888 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 101457920 unmapped: 8101888 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 101457920 unmapped: 8101888 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9591000/0x0/0x4ffc00000, data 0x2021a45/0x20db000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.393581390s of 13.718131065s, submitted: 87
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102514688 unmapped: 7045120 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1271234 data_alloc: 234881024 data_used: 9879552
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102514688 unmapped: 7045120 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102514688 unmapped: 7045120 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102514688 unmapped: 7045120 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9590000/0x0/0x4ffc00000, data 0x2022a45/0x20dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102514688 unmapped: 7045120 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9590000/0x0/0x4ffc00000, data 0x2022a45/0x20dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102514688 unmapped: 7045120 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9590000/0x0/0x4ffc00000, data 0x2022a45/0x20dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167ea400 session 0x564f16eaeb40
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f14a5b400 session 0x564f16eaf0e0
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1314240 data_alloc: 234881024 data_used: 9879552
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f164d9400 session 0x564f16eaf2c0
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f14a5b400 session 0x564f16eaf4a0
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f14b9fc00 session 0x564f16eaf680
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 12156928 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 12156928 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 12156928 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 12156928 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 12156928 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f8fbc000/0x0/0x4ffc00000, data 0x25f6a45/0x26b0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1314240 data_alloc: 234881024 data_used: 9879552
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 12156928 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f8fbc000/0x0/0x4ffc00000, data 0x25f6a45/0x26b0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167e6400 session 0x564f16eafa40
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 12156928 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 12156928 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102809600 unmapped: 12042240 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108175360 unmapped: 6676480 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f8fbc000/0x0/0x4ffc00000, data 0x25f6a45/0x26b0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1355280 data_alloc: 234881024 data_used: 15953920
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108183552 unmapped: 6668288 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f8fbc000/0x0/0x4ffc00000, data 0x25f6a45/0x26b0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108199936 unmapped: 6651904 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f8fbc000/0x0/0x4ffc00000, data 0x25f6a45/0x26b0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108232704 unmapped: 6619136 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.223644257s of 18.297273636s, submitted: 6
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108298240 unmapped: 6553600 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108298240 unmapped: 6553600 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f8fba000/0x0/0x4ffc00000, data 0x25f7a45/0x26b1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1355920 data_alloc: 234881024 data_used: 15953920
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108298240 unmapped: 6553600 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108298240 unmapped: 6553600 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108298240 unmapped: 6553600 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f8fba000/0x0/0x4ffc00000, data 0x25f7a45/0x26b1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108306432 unmapped: 6545408 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 111452160 unmapped: 6119424 heap: 117571584 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1439154 data_alloc: 234881024 data_used: 16728064
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 111616000 unmapped: 5955584 heap: 117571584 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f8577000/0x0/0x4ffc00000, data 0x303ba45/0x30f5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 110821376 unmapped: 6750208 heap: 117571584 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 110821376 unmapped: 6750208 heap: 117571584 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 110821376 unmapped: 6750208 heap: 117571584 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 110821376 unmapped: 6750208 heap: 117571584 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f16830c00 session 0x564f16cb50e0
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167e8800 session 0x564f177612c0
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1443438 data_alloc: 234881024 data_used: 16879616
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.683537483s of 13.015701294s, submitted: 49
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 109699072 unmapped: 7872512 heap: 117571584 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f14a5b400 session 0x564f172401e0
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f856f000/0x0/0x4ffc00000, data 0x3043a45/0x30fd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 104267776 unmapped: 13303808 heap: 117571584 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 104267776 unmapped: 13303808 heap: 117571584 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 104267776 unmapped: 13303808 heap: 117571584 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f958f000/0x0/0x4ffc00000, data 0x2023a45/0x20dd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 104267776 unmapped: 13303808 heap: 117571584 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1279090 data_alloc: 234881024 data_used: 9879552
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 104267776 unmapped: 13303808 heap: 117571584 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 104267776 unmapped: 13303808 heap: 117571584 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f958f000/0x0/0x4ffc00000, data 0x2023a45/0x20dd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 104267776 unmapped: 13303808 heap: 117571584 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f958f000/0x0/0x4ffc00000, data 0x2023a45/0x20dd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 104267776 unmapped: 13303808 heap: 117571584 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167e3000 session 0x564f1576be00
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167ec000 session 0x564f157b34a0
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 104251392 unmapped: 13320192 heap: 117571584 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f1687a000 session 0x564f16a82d20
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1086693 data_alloc: 218103808 data_used: 192512
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 98533376 unmapped: 19038208 heap: 117571584 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f958f000/0x0/0x4ffc00000, data 0x2023a45/0x20dd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 98533376 unmapped: 19038208 heap: 117571584 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 98533376 unmapped: 19038208 heap: 117571584 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 98533376 unmapped: 19038208 heap: 117571584 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa7d2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 98533376 unmapped: 19038208 heap: 117571584 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1086693 data_alloc: 218103808 data_used: 192512
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 98533376 unmapped: 19038208 heap: 117571584 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 98533376 unmapped: 19038208 heap: 117571584 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 98533376 unmapped: 19038208 heap: 117571584 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa7d2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 98533376 unmapped: 19038208 heap: 117571584 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 98533376 unmapped: 19038208 heap: 117571584 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1086693 data_alloc: 218103808 data_used: 192512
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 98533376 unmapped: 19038208 heap: 117571584 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 98533376 unmapped: 19038208 heap: 117571584 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa7d2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 98533376 unmapped: 19038208 heap: 117571584 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa7d2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 98533376 unmapped: 19038208 heap: 117571584 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 98533376 unmapped: 19038208 heap: 117571584 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1086693 data_alloc: 218103808 data_used: 192512
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 98533376 unmapped: 19038208 heap: 117571584 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa7d2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 98533376 unmapped: 19038208 heap: 117571584 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 98533376 unmapped: 19038208 heap: 117571584 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa7d2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 98533376 unmapped: 19038208 heap: 117571584 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f16530000 session 0x564f16a9d2c0
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f14a5b400 session 0x564f16a9c5a0
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167e3000 session 0x564f16a9de00
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167ec000 session 0x564f16a9d860
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 98533376 unmapped: 19038208 heap: 117571584 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 29.182113647s of 29.294746399s, submitted: 35
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f1687a000 session 0x564f16a9c000
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f1682f800 session 0x564f14c30b40
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f14a5b400 session 0x564f14a30780
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167e3000 session 0x564f16cb3a40
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167ec000 session 0x564f16cb2000
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1173406 data_alloc: 218103808 data_used: 192512
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 97992704 unmapped: 33226752 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 97992704 unmapped: 33226752 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 97992704 unmapped: 33226752 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 97992704 unmapped: 33226752 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9cc7000/0x0/0x4ffc00000, data 0x18eba97/0x19a5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 97992704 unmapped: 33226752 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1173406 data_alloc: 218103808 data_used: 192512
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f14b9d400 session 0x564f16cb3860
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 97615872 unmapped: 33603584 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 97632256 unmapped: 33587200 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 98697216 unmapped: 32522240 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9ca2000/0x0/0x4ffc00000, data 0x190faba/0x19ca000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102064128 unmapped: 29155328 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102064128 unmapped: 29155328 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1254751 data_alloc: 234881024 data_used: 11681792
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102064128 unmapped: 29155328 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102064128 unmapped: 29155328 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9ca2000/0x0/0x4ffc00000, data 0x190faba/0x19ca000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102096896 unmapped: 29122560 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102096896 unmapped: 29122560 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102096896 unmapped: 29122560 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1254751 data_alloc: 234881024 data_used: 11681792
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102096896 unmapped: 29122560 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102096896 unmapped: 29122560 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.471843719s of 17.623472214s, submitted: 36
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107601920 unmapped: 23617536 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9ca2000/0x0/0x4ffc00000, data 0x190faba/0x19ca000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108765184 unmapped: 22454272 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 106602496 unmapped: 24616960 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f95f8000/0x0/0x4ffc00000, data 0x1fb3aba/0x206e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1322987 data_alloc: 234881024 data_used: 12836864
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 106602496 unmapped: 24616960 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 106692608 unmapped: 24526848 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f95f8000/0x0/0x4ffc00000, data 0x1fb3aba/0x206e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 106692608 unmapped: 24526848 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 106692608 unmapped: 24526848 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 106692608 unmapped: 24526848 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1323003 data_alloc: 234881024 data_used: 12836864
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 106692608 unmapped: 24526848 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 106700800 unmapped: 24518656 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f95f8000/0x0/0x4ffc00000, data 0x1fb3aba/0x206e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 106733568 unmapped: 24485888 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 106733568 unmapped: 24485888 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 106733568 unmapped: 24485888 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1323307 data_alloc: 234881024 data_used: 12845056
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 106733568 unmapped: 24485888 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 106733568 unmapped: 24485888 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 106733568 unmapped: 24485888 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f95f8000/0x0/0x4ffc00000, data 0x1fb3aba/0x206e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 106741760 unmapped: 24477696 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 106741760 unmapped: 24477696 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f95f8000/0x0/0x4ffc00000, data 0x1fb3aba/0x206e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1323307 data_alloc: 234881024 data_used: 12845056
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 106749952 unmapped: 24469504 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 106749952 unmapped: 24469504 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f95f8000/0x0/0x4ffc00000, data 0x1fb3aba/0x206e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 106782720 unmapped: 24436736 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167e7400 session 0x564f14bdfa40
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f95f8000/0x0/0x4ffc00000, data 0x1fb3aba/0x206e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 24657920 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 24657920 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f95f8000/0x0/0x4ffc00000, data 0x1fb3aba/0x206e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1323915 data_alloc: 234881024 data_used: 12861440
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 24657920 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 24657920 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f95f8000/0x0/0x4ffc00000, data 0x1fb3aba/0x206e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 106569728 unmapped: 24649728 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 106569728 unmapped: 24649728 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f95f8000/0x0/0x4ffc00000, data 0x1fb3aba/0x206e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 106577920 unmapped: 24641536 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f95f8000/0x0/0x4ffc00000, data 0x1fb3aba/0x206e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1323915 data_alloc: 234881024 data_used: 12861440
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 106577920 unmapped: 24641536 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 106577920 unmapped: 24641536 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 106577920 unmapped: 24641536 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f16831400 session 0x564f14a31a40
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167e4000 session 0x564f157b2b40
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f16bf6000 session 0x564f157b3e00
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167e0800 session 0x564f157b3a40
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 30.930034637s of 31.205894470s, submitted: 99
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f16833000 session 0x564f16cb5a40
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167e0800 session 0x564f16cb5c20
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167e4000 session 0x564f17760960
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107749376 unmapped: 23470080 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f16831400 session 0x564f17761c20
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f16bf6000 session 0x564f17241c20
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9273000/0x0/0x4ffc00000, data 0x233cb2c/0x23f9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107749376 unmapped: 23470080 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1356101 data_alloc: 234881024 data_used: 12869632
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107749376 unmapped: 23470080 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107749376 unmapped: 23470080 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9271000/0x0/0x4ffc00000, data 0x233db2c/0x23fa000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107749376 unmapped: 23470080 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107749376 unmapped: 23470080 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107757568 unmapped: 23461888 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9271000/0x0/0x4ffc00000, data 0x233db2c/0x23fa000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1357613 data_alloc: 234881024 data_used: 12869632
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107757568 unmapped: 23461888 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9271000/0x0/0x4ffc00000, data 0x233db2c/0x23fa000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107757568 unmapped: 23461888 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f16bf9c00 session 0x564f14a06780
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108085248 unmapped: 23134208 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108093440 unmapped: 23126016 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 110731264 unmapped: 20488192 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1384918 data_alloc: 234881024 data_used: 16629760
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 110772224 unmapped: 20447232 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 110772224 unmapped: 20447232 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f924e000/0x0/0x4ffc00000, data 0x2361b2c/0x241e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 110772224 unmapped: 20447232 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 110772224 unmapped: 20447232 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f924e000/0x0/0x4ffc00000, data 0x2361b2c/0x241e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 110772224 unmapped: 20447232 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1384918 data_alloc: 234881024 data_used: 16629760
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 110772224 unmapped: 20447232 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 110772224 unmapped: 20447232 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 110772224 unmapped: 20447232 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f924e000/0x0/0x4ffc00000, data 0x2361b2c/0x241e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 110796800 unmapped: 20422656 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.881280899s of 21.064805984s, submitted: 43
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 112123904 unmapped: 19095552 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f924e000/0x0/0x4ffc00000, data 0x2361b2c/0x241e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1406306 data_alloc: 234881024 data_used: 16834560
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 112386048 unmapped: 18833408 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 114098176 unmapped: 17121280 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 114098176 unmapped: 17121280 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 114098176 unmapped: 17121280 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f909c000/0x0/0x4ffc00000, data 0x24fab2c/0x25b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 114098176 unmapped: 17121280 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1412016 data_alloc: 234881024 data_used: 16744448
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 114163712 unmapped: 17055744 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f909c000/0x0/0x4ffc00000, data 0x24fab2c/0x25b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 114163712 unmapped: 17055744 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 114163712 unmapped: 17055744 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 114163712 unmapped: 17055744 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.176107407s of 10.434784889s, submitted: 55
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 114212864 unmapped: 17006592 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1409960 data_alloc: 234881024 data_used: 16732160
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 114229248 unmapped: 16990208 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 114229248 unmapped: 16990208 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f90b5000/0x0/0x4ffc00000, data 0x24fab2c/0x25b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 16973824 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 16973824 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 16973824 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f90b5000/0x0/0x4ffc00000, data 0x24fab2c/0x25b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1409960 data_alloc: 234881024 data_used: 16732160
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f90b5000/0x0/0x4ffc00000, data 0x24fab2c/0x25b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 16973824 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 16973824 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f90b5000/0x0/0x4ffc00000, data 0x24fab2c/0x25b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 16973824 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 16973824 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 16973824 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1409960 data_alloc: 234881024 data_used: 16732160
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 114253824 unmapped: 16965632 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 114253824 unmapped: 16965632 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 114253824 unmapped: 16965632 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f90b5000/0x0/0x4ffc00000, data 0x24fab2c/0x25b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 114253824 unmapped: 16965632 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 114253824 unmapped: 16965632 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.345724106s of 15.359895706s, submitted: 14
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1407944 data_alloc: 234881024 data_used: 16732160
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f90b5000/0x0/0x4ffc00000, data 0x24fab2c/0x25b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 16957440 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 16949248 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f90b5000/0x0/0x4ffc00000, data 0x24fab2c/0x25b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 16949248 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 16949248 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 16949248 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1407944 data_alloc: 234881024 data_used: 16732160
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 16949248 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f90b5000/0x0/0x4ffc00000, data 0x24fab2c/0x25b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 16949248 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 16949248 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 16949248 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 16949248 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1407944 data_alloc: 234881024 data_used: 16732160
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 16949248 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 114278400 unmapped: 16941056 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f90b5000/0x0/0x4ffc00000, data 0x24fab2c/0x25b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 114278400 unmapped: 16941056 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.921648979s of 12.932921410s, submitted: 2
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f16bf9c00 session 0x564f16cb50e0
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167e0800 session 0x564f153243c0
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f14b9fc00 session 0x564f170bb860
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 111804416 unmapped: 19415040 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f959a000/0x0/0x4ffc00000, data 0x1fb4aba/0x206f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 111804416 unmapped: 19415040 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333008 data_alloc: 234881024 data_used: 12910592
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f959a000/0x0/0x4ffc00000, data 0x1fb4aba/0x206f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 111804416 unmapped: 19415040 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 111804416 unmapped: 19415040 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 111804416 unmapped: 19415040 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f95fd000/0x0/0x4ffc00000, data 0x1fb4aba/0x206f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 111804416 unmapped: 19415040 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f95fd000/0x0/0x4ffc00000, data 0x1fb4aba/0x206f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 111804416 unmapped: 19415040 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f95fd000/0x0/0x4ffc00000, data 0x1fb4aba/0x206f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333052 data_alloc: 234881024 data_used: 12910592
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 111804416 unmapped: 19415040 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 111804416 unmapped: 19415040 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 111804416 unmapped: 19415040 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.651408195s of 10.805692673s, submitted: 44
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f17237800 session 0x564f16cb2960
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167e2400 session 0x564f14bdfe00
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 111804416 unmapped: 19415040 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f95fd000/0x0/0x4ffc00000, data 0x1fb4aba/0x206f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f14b9fc00 session 0x564f14c301e0
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 104030208 unmapped: 27189248 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1109408 data_alloc: 218103808 data_used: 192512
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 104030208 unmapped: 27189248 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 104030208 unmapped: 27189248 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa458000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 104030208 unmapped: 27189248 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 104030208 unmapped: 27189248 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa458000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa458000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 104030208 unmapped: 27189248 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1109408 data_alloc: 218103808 data_used: 192512
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 104030208 unmapped: 27189248 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 104030208 unmapped: 27189248 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1800.1 total, 600.0 interval
Cumulative writes: 8845 writes, 33K keys, 8845 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
Cumulative WAL: 8845 writes, 2156 syncs, 4.10 writes per sync, written: 0.03 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1978 writes, 6320 keys, 1978 commit groups, 1.0 writes per commit group, ingest: 6.41 MB, 0.01 MB/s
Interval WAL: 1978 writes, 845 syncs, 2.34 writes per sync, written: 0.01 GB, 0.01 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 104030208 unmapped: 27189248 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 104030208 unmapped: 27189248 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 104030208 unmapped: 27189248 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1109408 data_alloc: 218103808 data_used: 192512
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa458000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 104030208 unmapped: 27189248 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 104030208 unmapped: 27189248 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 104030208 unmapped: 27189248 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 104030208 unmapped: 27189248 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa458000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 104030208 unmapped: 27189248 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa458000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1109408 data_alloc: 218103808 data_used: 192512
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 104030208 unmapped: 27189248 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 104030208 unmapped: 27189248 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f17236400 session 0x564f14a065a0
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167e1000 session 0x564f16cb43c0
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f16830000 session 0x564f17305860
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f16bfb000 session 0x564f14a10780
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.039520264s of 19.160972595s, submitted: 37
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f14b9fc00 session 0x564f1659fa40
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167e1000 session 0x564f1576a1e0
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f16830000 session 0x564f1576b680
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f17236400 session 0x564f165b5c20
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 103940096 unmapped: 27279360 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f16bfa400 session 0x564f17304000
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 103940096 unmapped: 27279360 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa553000/0x0/0x4ffc00000, data 0x105eaa7/0x1119000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 103940096 unmapped: 27279360 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136117 data_alloc: 218103808 data_used: 192512
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 103940096 unmapped: 27279360 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa553000/0x0/0x4ffc00000, data 0x105eaa7/0x1119000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 103940096 unmapped: 27279360 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 103940096 unmapped: 27279360 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 103940096 unmapped: 27279360 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f16831c00 session 0x564f173043c0
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa553000/0x0/0x4ffc00000, data 0x105eaa7/0x1119000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 103940096 unmapped: 27279360 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136117 data_alloc: 218103808 data_used: 192512
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 103342080 unmapped: 27877376 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 103342080 unmapped: 27877376 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f169ef000 session 0x564f16eaf0e0
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f169eec00 session 0x564f15762780
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.980074883s of 10.067814827s, submitted: 28
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 101548032 unmapped: 29671424 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f14b9d800 session 0x564f16a83860
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 101548032 unmapped: 29671424 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 101548032 unmapped: 29671424 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1113795 data_alloc: 218103808 data_used: 192512
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa7d1000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 101548032 unmapped: 29671424 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa7d1000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa7d1000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 101548032 unmapped: 29671424 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 101548032 unmapped: 29671424 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 101548032 unmapped: 29671424 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 101548032 unmapped: 29671424 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1113795 data_alloc: 218103808 data_used: 192512
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f157b4400 session 0x564f168e5860
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 101548032 unmapped: 29671424 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f14b9d800 session 0x564f16a82960
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f16831c00 session 0x564f16a83a40
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f16531000 session 0x564f16a832c0
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f169eec00 session 0x564f13f0ab40
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 101548032 unmapped: 29671424 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f169ef000 session 0x564f17485c20
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f169ef000 session 0x564f17058960
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f14b9d800 session 0x564f17058000
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f16531000 session 0x564f168c25a0
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f16831c00 session 0x564f16908000
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa7d1000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102391808 unmapped: 28827648 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102391808 unmapped: 28827648 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102391808 unmapped: 28827648 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177657 data_alloc: 218103808 data_used: 192512
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9fdc000/0x0/0x4ffc00000, data 0x15d7a35/0x1690000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.231132507s of 13.376890182s, submitted: 36
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102391808 unmapped: 28827648 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f16bf2800 session 0x564f169083c0
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102400000 unmapped: 28819456 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102416384 unmapped: 28803072 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 104357888 unmapped: 26861568 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 104357888 unmapped: 26861568 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226630 data_alloc: 218103808 data_used: 7315456
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f14b9d800 session 0x564f1576ba40
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f16531000 session 0x564f14c30780
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9fdc000/0x0/0x4ffc00000, data 0x15d7a35/0x1690000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 104357888 unmapped: 26861568 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f14b9ec00 session 0x564f13f0a3c0
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102580224 unmapped: 28639232 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa7d2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102580224 unmapped: 28639232 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102580224 unmapped: 28639232 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa7d2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102580224 unmapped: 28639232 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1119456 data_alloc: 218103808 data_used: 192512
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102580224 unmapped: 28639232 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102580224 unmapped: 28639232 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa7d2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102580224 unmapped: 28639232 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102580224 unmapped: 28639232 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102580224 unmapped: 28639232 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1119456 data_alloc: 218103808 data_used: 192512
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102580224 unmapped: 28639232 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa7d2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102580224 unmapped: 28639232 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102580224 unmapped: 28639232 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102580224 unmapped: 28639232 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa7d2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102580224 unmapped: 28639232 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1119456 data_alloc: 218103808 data_used: 192512
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102580224 unmapped: 28639232 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa7d2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102580224 unmapped: 28639232 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102580224 unmapped: 28639232 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102580224 unmapped: 28639232 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102580224 unmapped: 28639232 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1119456 data_alloc: 218103808 data_used: 192512
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102580224 unmapped: 28639232 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102580224 unmapped: 28639232 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa7d2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102580224 unmapped: 28639232 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa7d2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102580224 unmapped: 28639232 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa7d2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa7d2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102580224 unmapped: 28639232 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1119456 data_alloc: 218103808 data_used: 192512
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f14587800 session 0x564f145641e0
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f14b9c000 session 0x564f16eaef00
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f14587800 session 0x564f1576bc20
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f14b9d800 session 0x564f1659e000
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 29.669031143s of 29.789909363s, submitted: 34
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f14b9ec00 session 0x564f14a06000
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f16531000 session 0x564f14a070e0
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f14a8fc00 session 0x564f16eaeb40
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f14587800 session 0x564f14a06b40
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102653952 unmapped: 28565504 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f14b9d800 session 0x564f165b41e0
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102653952 unmapped: 28565504 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa4d1000/0x0/0x4ffc00000, data 0x10e1a97/0x119b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102653952 unmapped: 28565504 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa4d1000/0x0/0x4ffc00000, data 0x10e1a97/0x119b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102653952 unmapped: 28565504 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102653952 unmapped: 28565504 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1143766 data_alloc: 218103808 data_used: 192512
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102653952 unmapped: 28565504 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167e9400 session 0x564f14bdef00
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa4d1000/0x0/0x4ffc00000, data 0x10e1a97/0x119b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102711296 unmapped: 28508160 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102645760 unmapped: 28573696 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102645760 unmapped: 28573696 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa4d0000/0x0/0x4ffc00000, data 0x10e1aba/0x119c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102645760 unmapped: 28573696 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166047 data_alloc: 218103808 data_used: 3338240
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102645760 unmapped: 28573696 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa4d0000/0x0/0x4ffc00000, data 0x10e1aba/0x119c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.037717819s of 11.168728828s, submitted: 36
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102670336 unmapped: 28549120 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102785024 unmapped: 28434432 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa4d0000/0x0/0x4ffc00000, data 0x10e1aba/0x119c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [0,0,1])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102981632 unmapped: 28237824 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa4d0000/0x0/0x4ffc00000, data 0x10e1aba/0x119c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102612992 unmapped: 28606464 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1165999 data_alloc: 218103808 data_used: 3342336
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102612992 unmapped: 28606464 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102612992 unmapped: 28606464 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa4d0000/0x0/0x4ffc00000, data 0x10e1aba/0x119c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102883328 unmapped: 28336128 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102883328 unmapped: 28336128 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa41e000/0x0/0x4ffc00000, data 0x1193aba/0x124e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102899712 unmapped: 28319744 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177889 data_alloc: 218103808 data_used: 3342336
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa418000/0x0/0x4ffc00000, data 0x1199aba/0x1254000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102899712 unmapped: 28319744 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102899712 unmapped: 28319744 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102899712 unmapped: 28319744 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa418000/0x0/0x4ffc00000, data 0x1199aba/0x1254000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102899712 unmapped: 28319744 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102899712 unmapped: 28319744 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177889 data_alloc: 218103808 data_used: 3342336
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102899712 unmapped: 28319744 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102899712 unmapped: 28319744 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa418000/0x0/0x4ffc00000, data 0x1199aba/0x1254000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102907904 unmapped: 28311552 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102907904 unmapped: 28311552 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102907904 unmapped: 28311552 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177889 data_alloc: 218103808 data_used: 3342336
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102907904 unmapped: 28311552 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.502199173s of 19.644466400s, submitted: 280
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167e3800 session 0x564f15762000
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167e6c00 session 0x564f153250e0
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102326272 unmapped: 28893184 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f14587800 session 0x564f14a31680
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102326272 unmapped: 28893184 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa7cb000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102326272 unmapped: 28893184 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102326272 unmapped: 28893184 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1126667 data_alloc: 218103808 data_used: 192512
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102326272 unmapped: 28893184 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102326272 unmapped: 28893184 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102326272 unmapped: 28893184 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa7cb000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102326272 unmapped: 28893184 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102326272 unmapped: 28893184 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1126667 data_alloc: 218103808 data_used: 192512
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa7cb000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102326272 unmapped: 28893184 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa7cb000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102326272 unmapped: 28893184 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102326272 unmapped: 28893184 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102326272 unmapped: 28893184 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102334464 unmapped: 28884992 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1126667 data_alloc: 218103808 data_used: 192512
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102334464 unmapped: 28884992 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa7cb000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102334464 unmapped: 28884992 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102334464 unmapped: 28884992 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102334464 unmapped: 28884992 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102334464 unmapped: 28884992 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa7cb000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1126667 data_alloc: 218103808 data_used: 192512
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f17236c00 session 0x564f17058960
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f16bf3c00 session 0x564f16eaf0e0
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f170bec00 session 0x564f15762780
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f170be400 session 0x564f16a9de00
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.125001907s of 19.304050446s, submitted: 47
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f170be400 session 0x564f1659e000
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f14587800 session 0x564f16a82780
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f16bf3c00 session 0x564f16908f00
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f170bec00 session 0x564f16eaaf00
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f17236c00 session 0x564f16a83680
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102637568 unmapped: 28581888 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167e7000 session 0x564f169090e0
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102637568 unmapped: 28581888 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102637568 unmapped: 28581888 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f14587800 session 0x564f153df860
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102637568 unmapped: 28581888 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f16bf3c00 session 0x564f14c30d20
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102637568 unmapped: 28581888 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1187707 data_alloc: 218103808 data_used: 192512
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167ec400 session 0x564f153241e0
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f16bf3800 session 0x564f1576a780
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa031000/0x0/0x4ffc00000, data 0x157fab6/0x163b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102637568 unmapped: 28581888 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102629376 unmapped: 28590080 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 104218624 unmapped: 27000832 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa031000/0x0/0x4ffc00000, data 0x157fab6/0x163b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 104218624 unmapped: 27000832 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 104218624 unmapped: 27000832 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1243984 data_alloc: 218103808 data_used: 8093696
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 104218624 unmapped: 27000832 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa031000/0x0/0x4ffc00000, data 0x157fab6/0x163b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 104218624 unmapped: 27000832 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa031000/0x0/0x4ffc00000, data 0x157fab6/0x163b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 104218624 unmapped: 27000832 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 104218624 unmapped: 27000832 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 104218624 unmapped: 27000832 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1243984 data_alloc: 218103808 data_used: 8093696
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 104218624 unmapped: 27000832 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa031000/0x0/0x4ffc00000, data 0x157fab6/0x163b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 104218624 unmapped: 27000832 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.325160980s of 16.559110641s, submitted: 45
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f17c06c00 session 0x564f16a82b40
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f14587800 session 0x564f16a943c0
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f16bf9c00 session 0x564f16a94d20
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167ec400 session 0x564f16a95680
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f16bf3800 session 0x564f16a954a0
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 106938368 unmapped: 31637504 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108183552 unmapped: 30392320 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108183552 unmapped: 30392320 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1354235 data_alloc: 218103808 data_used: 8085504
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108183552 unmapped: 30392320 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108183552 unmapped: 30392320 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f931e000/0x0/0x4ffc00000, data 0x2289b18/0x2346000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108183552 unmapped: 30392320 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108183552 unmapped: 30392320 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f16830800 session 0x564f1576a780
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f14b9d400 session 0x564f153250e0
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108224512 unmapped: 30351360 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1354251 data_alloc: 218103808 data_used: 8085504
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f1687b000 session 0x564f153241e0
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167ed000 session 0x564f14c30d20
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107577344 unmapped: 30998528 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107921408 unmapped: 30654464 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9325000/0x0/0x4ffc00000, data 0x2289b28/0x2347000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 115916800 unmapped: 22659072 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 115916800 unmapped: 22659072 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 115916800 unmapped: 22659072 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1419566 data_alloc: 234881024 data_used: 18124800
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9325000/0x0/0x4ffc00000, data 0x2289b28/0x2347000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 115916800 unmapped: 22659072 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 115916800 unmapped: 22659072 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 115916800 unmapped: 22659072 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9325000/0x0/0x4ffc00000, data 0x2289b28/0x2347000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 115916800 unmapped: 22659072 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 115916800 unmapped: 22659072 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1419566 data_alloc: 234881024 data_used: 18124800
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 115916800 unmapped: 22659072 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.397748947s of 19.836757660s, submitted: 85
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 119029760 unmapped: 19546112 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f8d0b000/0x0/0x4ffc00000, data 0x28a3b28/0x2961000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 119668736 unmapped: 18907136 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 119848960 unmapped: 18726912 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 120020992 unmapped: 18554880 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1471376 data_alloc: 234881024 data_used: 18452480
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f164d8800 session 0x564f177603c0
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 120020992 unmapped: 18554880 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 120029184 unmapped: 18546688 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f8cea000/0x0/0x4ffc00000, data 0x28c4b28/0x2982000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 120061952 unmapped: 18513920 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 120070144 unmapped: 18505728 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 120070144 unmapped: 18505728 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1468656 data_alloc: 234881024 data_used: 18456576
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f8cea000/0x0/0x4ffc00000, data 0x28c4b28/0x2982000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 119611392 unmapped: 18964480 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 119611392 unmapped: 18964480 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 119611392 unmapped: 18964480 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 119611392 unmapped: 18964480 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.684499741s of 12.516972542s, submitted: 75
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 119660544 unmapped: 18915328 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1468960 data_alloc: 234881024 data_used: 18456576
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 119660544 unmapped: 18915328 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f8ce5000/0x0/0x4ffc00000, data 0x28c8b28/0x2986000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 119660544 unmapped: 18915328 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f8ce5000/0x0/0x4ffc00000, data 0x28c8b28/0x2986000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 119693312 unmapped: 18882560 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 119693312 unmapped: 18882560 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 119693312 unmapped: 18882560 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1468960 data_alloc: 234881024 data_used: 18456576
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 119693312 unmapped: 18882560 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 119693312 unmapped: 18882560 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 119701504 unmapped: 18874368 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167e4c00 session 0x564f16908f00
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f16830400 session 0x564f172412c0
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f8ce5000/0x0/0x4ffc00000, data 0x28c8b28/0x2986000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f14b9d400 session 0x564f14565e00
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 113475584 unmapped: 25100288 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 113475584 unmapped: 25100288 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1287747 data_alloc: 218103808 data_used: 8089600
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 113475584 unmapped: 25100288 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9a8c000/0x0/0x4ffc00000, data 0x18f8ab6/0x19b4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 113475584 unmapped: 25100288 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.688279152s of 13.011224747s, submitted: 41
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f14b9e000 session 0x564f173045a0
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167e6400 session 0x564f170ba3c0
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107339776 unmapped: 31236096 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f14b9cc00 session 0x564f16908000
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107388928 unmapped: 31186944 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107388928 unmapped: 31186944 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149791 data_alloc: 218103808 data_used: 192512
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107388928 unmapped: 31186944 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9fbc000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107388928 unmapped: 31186944 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107388928 unmapped: 31186944 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9fbc000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9fbc000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107388928 unmapped: 31186944 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107388928 unmapped: 31186944 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149791 data_alloc: 218103808 data_used: 192512
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107388928 unmapped: 31186944 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107388928 unmapped: 31186944 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107388928 unmapped: 31186944 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: mgrc ms_handle_reset ms_handle_reset con 0x564f157b4000
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1282799344
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1282799344,v1:192.168.122.100:6801/1282799344]
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: mgrc handle_mgr_configure stats_period=5
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107470848 unmapped: 31105024 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9fbc000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167eb400 session 0x564f14a06960
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f14ba1800 session 0x564f1576ab40
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149791 data_alloc: 218103808 data_used: 192512
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107470848 unmapped: 31105024 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107470848 unmapped: 31105024 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107470848 unmapped: 31105024 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107470848 unmapped: 31105024 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107470848 unmapped: 31105024 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149791 data_alloc: 218103808 data_used: 192512
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107470848 unmapped: 31105024 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9fbc000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107470848 unmapped: 31105024 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107470848 unmapped: 31105024 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107470848 unmapped: 31105024 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9fbc000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9fbc000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107470848 unmapped: 31105024 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149791 data_alloc: 218103808 data_used: 192512
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107470848 unmapped: 31105024 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107470848 unmapped: 31105024 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107470848 unmapped: 31105024 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9fbc000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107470848 unmapped: 31105024 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9fbc000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107470848 unmapped: 31105024 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9fbc000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149791 data_alloc: 218103808 data_used: 192512
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107470848 unmapped: 31105024 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107470848 unmapped: 31105024 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f1687a400 session 0x564f15433860
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167e7c00 session 0x564f15433680
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167e7400 session 0x564f15433a40
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f16830800 session 0x564f154330e0
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 29.805261612s of 30.015821457s, submitted: 61
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167e1800 session 0x564f154334a0
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107479040 unmapped: 31096832 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167e1800 session 0x564f15432000
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167e7400 session 0x564f154332c0
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9fbc000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167e7c00 session 0x564f15433c20
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f16830800 session 0x564f16a94000
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107479040 unmapped: 31096832 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107479040 unmapped: 31096832 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1151591 data_alloc: 218103808 data_used: 192512
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107479040 unmapped: 31096832 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9fbb000/0x0/0x4ffc00000, data 0xde1a44/0xe9b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107479040 unmapped: 31096832 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f14b9d800 session 0x564f15762000
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9fbb000/0x0/0x4ffc00000, data 0xde1a44/0xe9b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107487232 unmapped: 31088640 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f14b9d800 session 0x564f1659e000
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107470848 unmapped: 31105024 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167e1800 session 0x564f16a83680
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167e7400 session 0x564f16eaaf00
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107470848 unmapped: 31105024 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9fbb000/0x0/0x4ffc00000, data 0xde1a44/0xe9b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1151591 data_alloc: 218103808 data_used: 192512
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107470848 unmapped: 31105024 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9fbb000/0x0/0x4ffc00000, data 0xde1a44/0xe9b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9fbb000/0x0/0x4ffc00000, data 0xde1a44/0xe9b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107470848 unmapped: 31105024 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107470848 unmapped: 31105024 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107470848 unmapped: 31105024 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107470848 unmapped: 31105024 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1151591 data_alloc: 218103808 data_used: 192512
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107470848 unmapped: 31105024 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9fbb000/0x0/0x4ffc00000, data 0xde1a44/0xe9b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107470848 unmapped: 31105024 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107470848 unmapped: 31105024 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107470848 unmapped: 31105024 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9fbb000/0x0/0x4ffc00000, data 0xde1a44/0xe9b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107470848 unmapped: 31105024 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1151591 data_alloc: 218103808 data_used: 192512
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107470848 unmapped: 31105024 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9fbb000/0x0/0x4ffc00000, data 0xde1a44/0xe9b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.881891251s of 18.886398315s, submitted: 1
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107520000 unmapped: 31055872 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107536384 unmapped: 31039488 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107544576 unmapped: 31031296 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107544576 unmapped: 31031296 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160433 data_alloc: 218103808 data_used: 192512
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107544576 unmapped: 31031296 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3b0000/0x0/0x4ffc00000, data 0xdf2a44/0xeac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107544576 unmapped: 31031296 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107544576 unmapped: 31031296 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107544576 unmapped: 31031296 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3b0000/0x0/0x4ffc00000, data 0xdf2a44/0xeac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107544576 unmapped: 31031296 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160433 data_alloc: 218103808 data_used: 192512
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107544576 unmapped: 31031296 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f16830400 session 0x564f157b32c0
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f16bf3400 session 0x564f17485c20
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3b0000/0x0/0x4ffc00000, data 0xdf2a44/0xeac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.277494431s of 10.306180954s, submitted: 8
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 31014912 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f14b9d800 session 0x564f177423c0
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 31014912 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 31014912 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 31014912 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 31014912 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 31014912 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 31014912 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107569152 unmapped: 31006720 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107569152 unmapped: 31006720 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107577344 unmapped: 30998528 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107577344 unmapped: 30998528 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107577344 unmapped: 30998528 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107577344 unmapped: 30998528 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107577344 unmapped: 30998528 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107577344 unmapped: 30998528 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107577344 unmapped: 30998528 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107577344 unmapped: 30998528 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107577344 unmapped: 30998528 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107577344 unmapped: 30998528 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107577344 unmapped: 30998528 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107577344 unmapped: 30998528 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107577344 unmapped: 30998528 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107585536 unmapped: 30990336 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107585536 unmapped: 30990336 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107585536 unmapped: 30990336 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107585536 unmapped: 30990336 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107585536 unmapped: 30990336 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107585536 unmapped: 30990336 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107585536 unmapped: 30990336 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107585536 unmapped: 30990336 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107585536 unmapped: 30990336 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107585536 unmapped: 30990336 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 30982144 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 30982144 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 30982144 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 30982144 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 30982144 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 30982144 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 30982144 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 30982144 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 30982144 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 30982144 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 30982144 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 30982144 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 30982144 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 30982144 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 30982144 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107601920 unmapped: 30973952 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107601920 unmapped: 30973952 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107601920 unmapped: 30973952 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107601920 unmapped: 30973952 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107601920 unmapped: 30973952 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107601920 unmapped: 30973952 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107601920 unmapped: 30973952 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107601920 unmapped: 30973952 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107601920 unmapped: 30973952 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107610112 unmapped: 30965760 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107610112 unmapped: 30965760 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107618304 unmapped: 30957568 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107618304 unmapped: 30957568 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107618304 unmapped: 30957568 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107618304 unmapped: 30957568 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107618304 unmapped: 30957568 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107618304 unmapped: 30957568 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107618304 unmapped: 30957568 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107618304 unmapped: 30957568 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107618304 unmapped: 30957568 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107618304 unmapped: 30957568 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107618304 unmapped: 30957568 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107618304 unmapped: 30957568 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107618304 unmapped: 30957568 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107618304 unmapped: 30957568 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 30941184 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 30941184 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 30941184 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 30941184 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 30941184 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 30941184 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: do_command 'config diff' '{prefix=config diff}'
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: do_command 'config show' '{prefix=config show}'
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: do_command 'counter dump' '{prefix=counter dump}'
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107880448 unmapped: 30695424 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: do_command 'counter schema' '{prefix=counter schema}'
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107610112 unmapped: 30965760 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107552768 unmapped: 31023104 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:14:26 np0005604790 ceph-osd[82705]: do_command 'log dump' '{prefix=log dump}'
Feb  2 05:14:26 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.26972 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 05:14:26 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Feb  2 05:14:26 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3141533306' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Feb  2 05:14:26 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.17310 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:14:26 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1051: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:14:26 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.27043 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 05:14:26 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.26993 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 05:14:26 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:14:26 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:14:26 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:14:26.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:14:26 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Feb  2 05:14:26 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4025381601' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Feb  2 05:14:26 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.17331 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:14:26 np0005604790 nova_compute[252672]: 2026-02-02 10:14:26.803 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:14:26 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.27061 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 05:14:26 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "node ls"} v 0)
Feb  2 05:14:26 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2467961628' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Feb  2 05:14:27 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.27002 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 05:14:27 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Feb  2 05:14:27 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3146719867' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Feb  2 05:14:27 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.17355 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:14:27 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:14:27.187Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:14:27 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.17370 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 05:14:27 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.27085 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 05:14:27 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.17382 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:14:27 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:14:27 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon stat"} v 0)
Feb  2 05:14:27 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3865929111' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Feb  2 05:14:27 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:14:27 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:14:27 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:14:27.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:14:27 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.17403 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 05:14:27 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Feb  2 05:14:27 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2341655505' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Feb  2 05:14:28 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.17424 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 05:14:28 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1052: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:14:28 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "node ls"} v 0)
Feb  2 05:14:28 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2398363290' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Feb  2 05:14:28 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.17436 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 05:14:28 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:14:28 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:14:28 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:14:28.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:14:29 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Feb  2 05:14:29 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4102532455' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Feb  2 05:14:29 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.17463 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 05:14:29 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Feb  2 05:14:29 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2900426886' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Feb  2 05:14:29 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Feb  2 05:14:29 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2774745979' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Feb  2 05:14:29 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:14:29 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:14:29 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:14:29.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:14:29 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Feb  2 05:14:29 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2109358869' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Feb  2 05:14:30 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Feb  2 05:14:30 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2781432572' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Feb  2 05:14:30 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Feb  2 05:14:30 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3399529638' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Feb  2 05:14:30 np0005604790 systemd[1]: Starting Hostname Service...
Feb  2 05:14:30 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.27235 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:14:30 np0005604790 systemd[1]: Started Hostname Service.
Feb  2 05:14:30 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Feb  2 05:14:30 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1230110462' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Feb  2 05:14:30 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.27209 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:14:30 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Feb  2 05:14:30 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1285535699' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Feb  2 05:14:30 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1053: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:14:30 np0005604790 nova_compute[252672]: 2026-02-02 10:14:30.581 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:14:30 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:14:30 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:14:30 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:14:30.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:14:30 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.27259 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:14:30 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.27271 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 05:14:30 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.27218 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 05:14:30 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Feb  2 05:14:30 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2715461821' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Feb  2 05:14:30 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Feb  2 05:14:30 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2712029861' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Feb  2 05:14:30 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.27230 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:14:31 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:14:30 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:14:31 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:14:30 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:14:31 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:14:30 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:14:31 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:14:31 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:14:31 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.27245 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 05:14:31 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.27295 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 05:14:31 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Feb  2 05:14:31 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1705546128' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Feb  2 05:14:31 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Feb  2 05:14:31 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2670796580' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Feb  2 05:14:31 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.27272 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 05:14:31 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.27319 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 05:14:31 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:14:31 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:14:31 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:14:31.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:14:31 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd utilization"} v 0)
Feb  2 05:14:31 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1366452837' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Feb  2 05:14:31 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Feb  2 05:14:31 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3898423268' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Feb  2 05:14:31 np0005604790 nova_compute[252672]: 2026-02-02 10:14:31.803 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:14:31 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.27290 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 05:14:32 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.27337 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 05:14:32 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.17655 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:14:32 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 05:14:32 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 05:14:32 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.27314 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 05:14:32 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.27358 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 05:14:32 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.17673 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 05:14:32 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1054: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:14:32 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.17667 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:14:32 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:14:32 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.27338 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 05:14:32 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:14:32 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:14:32 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:14:32.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:14:32 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.27370 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 05:14:32 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.17682 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 05:14:33 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.27359 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 05:14:33 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Feb  2 05:14:33 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Feb  2 05:14:33 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.27397 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 05:14:33 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Feb  2 05:14:33 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Feb  2 05:14:33 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.17706 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 05:14:33 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Feb  2 05:14:33 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Feb  2 05:14:33 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Feb  2 05:14:33 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Feb  2 05:14:33 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "quorum_status"} v 0)
Feb  2 05:14:33 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4264917689' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Feb  2 05:14:33 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.17754 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 05:14:33 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:14:33 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:14:33 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:14:33.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:14:33 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions"} v 0)
Feb  2 05:14:33 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3744122681' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Feb  2 05:14:34 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.17793 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 05:14:34 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump"} v 0)
Feb  2 05:14:34 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3693922191' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Feb  2 05:14:34 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Feb  2 05:14:34 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/410682327' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Feb  2 05:14:34 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.17817 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 05:14:34 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.27499 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:14:34 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1055: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:14:34 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.27479 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:14:34 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:14:34 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:14:34 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:14:34.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:14:34 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:14:34] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Feb  2 05:14:34 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:14:34] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Feb  2 05:14:34 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.17835 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 05:14:34 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Feb  2 05:14:34 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Feb  2 05:14:35 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Feb  2 05:14:35 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Feb  2 05:14:35 np0005604790 nova_compute[252672]: 2026-02-02 10:14:35.584 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:14:35 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:14:35 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:14:35 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:14:35.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:14:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:14:35 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:14:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:14:35 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:14:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:14:35 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:14:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:14:36 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:14:36 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump"} v 0)
Feb  2 05:14:36 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3529115186' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Feb  2 05:14:36 np0005604790 podman[277534]: 2026-02-02 10:14:36.40024349 +0000 UTC m=+0.119946435 container health_status e39122d9482da8df802204ab6c35fb7c982874580968140e6f06bdfc8eefae36 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb  2 05:14:36 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.17922 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:14:36 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1056: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:14:36 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:14:36 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:14:36 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:14:36.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:14:36 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.27586 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:14:36 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.27563 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:14:36 np0005604790 nova_compute[252672]: 2026-02-02 10:14:36.806 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:14:36 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Feb  2 05:14:36 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2290487711' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Feb  2 05:14:37 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:14:37.188Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb  2 05:14:37 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:14:37.189Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb  2 05:14:37 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:14:37.189Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb  2 05:14:37 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df"} v 0)
Feb  2 05:14:37 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2241760720' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Feb  2 05:14:37 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:14:37 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:14:37 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:14:37 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:14:37.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:14:37 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs dump"} v 0)
Feb  2 05:14:37 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3220927735' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Feb  2 05:14:37 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.27622 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:14:38 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.27625 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:14:38 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs ls"} v 0)
Feb  2 05:14:38 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1983266021' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Feb  2 05:14:38 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1057: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:14:38 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.17970 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:14:38 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.27649 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:14:38 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:14:38 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:14:38 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:14:38.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:14:38 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.27626 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:14:38 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.27661 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:14:39 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds stat"} v 0)
Feb  2 05:14:39 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1424509917' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Feb  2 05:14:39 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.27641 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:14:39 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump"} v 0)
Feb  2 05:14:39 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3278998420' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Feb  2 05:14:39 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:14:39 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:14:39 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:14:39.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:14:39 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.18009 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:14:39 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.27677 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:14:40 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.27683 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:14:40 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls"} v 0)
Feb  2 05:14:40 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3707707503' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Feb  2 05:14:40 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.27697 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:14:40 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:14:40 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 05:14:40 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:14:40 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb  2 05:14:40 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:14:40 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:14:40 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:14:40 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:14:40 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:14:40 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Feb  2 05:14:40 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:14:40 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Feb  2 05:14:40 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:14:40 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:14:40 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:14:40 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 05:14:40 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:14:40 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Feb  2 05:14:40 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:14:40 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:14:40 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:14:40 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 05:14:40 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:14:40 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb  2 05:14:40 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1058: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:14:40 np0005604790 nova_compute[252672]: 2026-02-02 10:14:40.589 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:14:40 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.27695 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:14:40 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:14:40 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 05:14:40 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:14:40 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb  2 05:14:40 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:14:40 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:14:40 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:14:40 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:14:40 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:14:40 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Feb  2 05:14:40 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:14:40 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Feb  2 05:14:40 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:14:40 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:14:40 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:14:40 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 05:14:40 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:14:40 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Feb  2 05:14:40 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:14:40 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:14:40 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:14:40 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 05:14:40 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:14:40 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
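Each pool line in the sweep above reports the pool's share of raw capacity ("using ... of space"), its bias, and a fractional PG target. The logged numbers satisfy pg_target = share x bias x 300 exactly (e.g. 7.185749983720779e-06 x 300 = 0.0021557249951162337 for '.mgr'), where 300 is plausibly mon_target_pg_per_osd (default 100) times three OSDs behind this 60 GiB root — an inference from the values, not something the log states. The "quantized to" step is consistent with rounding the target up to a power of two and flooring at each pool's pg_num_min; a minimal sketch, with the per-pool pg_num_min values (32 for most pools, 16 for the CephFS metadata pool, 1 for '.mgr') assumed rather than logged:

```python
import math

def quantized_pg_target(pg_target: float, pg_num_min: int) -> int:
    """Round the fractional target up to a power of two, floored at the
    pool's pg_num_min (simplified; the real pg_autoscaler has more inputs)."""
    n = max(1, math.ceil(pg_target))
    p = 1 << (n - 1).bit_length()        # next power of two >= n
    return max(p, pg_num_min)

print(quantized_pg_target(0.19975749047665559, 32))    # 32, pool 'images'
print(quantized_pg_target(0.0006104707950771635, 16))  # 16, 'cephfs.cephfs.meta'
print(quantized_pg_target(0.0021557249951162337, 1))   # 1,  '.mgr'
```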
Feb  2 05:14:40 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:14:40 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:14:40 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:14:40.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
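The radosgw triads above (request start, request done, beast access line) repeat every second or two, always an anonymous HEAD / answered 200 from 192.168.122.100 and .102 — the cadence of load-balancer health probes rather than real S3 traffic (an inference; the log does not identify the callers). A throwaway parser for the beast line; the field names are my own labels:

```python
import re

BEAST = re.compile(
    r'beast: (?P<req>0x[0-9a-f]+): (?P<client>\S+) - (?P<user>\S+) '
    r'\[(?P<when>[^\]]+)\] "(?P<request>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) '
    r'.*latency=(?P<latency>[\d.]+)s'
)

line = ('beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous '
        '[02/Feb/2026:10:14:40.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
        'latency=0.000000000s')
m = BEAST.match(line)
print(m.group('client'), m.group('request'), m.group('status'), m.group('latency'))
```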
Feb  2 05:14:40 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.18036 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:14:41 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:14:40 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:14:41 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:14:41 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:14:41 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:14:41 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:14:41 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:14:41 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
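This ganesha quartet recurs roughly every five seconds: the server (re)enters a 90 s grace period, reloads client reclaim state from the RADOS backend, and checks whether grace can lift — "reclaim complete(0) clid count(0)" meaning no clients are registered at all — after which rados_cluster_grace_enforcing reports ret=-45 (the log does not say what that code means, so it is left uninterpreted here). A minimal model of the lift check as the log presents it; my own simplification, not ganesha's code:

```python
def can_lift_grace(clid_count: int, reclaim_complete: int) -> bool:
    """Grace can lift once every known client has finished reclaim; with
    zero clients (the case logged above) nothing blocks it locally, but
    in the clustered backend every ganesha instance must agree first."""
    return reclaim_complete >= clid_count

print(can_lift_grace(clid_count=0, reclaim_complete=0))  # True
```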
Feb  2 05:14:41 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.18045 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:14:41 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd dump"} v 0)
Feb  2 05:14:41 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2058255725' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Feb  2 05:14:41 np0005604790 ovs-appctl[278908]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Feb  2 05:14:41 np0005604790 ovs-appctl[278917]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Feb  2 05:14:41 np0005604790 ovs-appctl[278924]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Feb  2 05:14:41 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:14:41 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:14:41 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:14:41.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:14:41 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd numa-status"} v 0)
Feb  2 05:14:41 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/752273424' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Feb  2 05:14:41 np0005604790 nova_compute[252672]: 2026-02-02 10:14:41.807 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:14:41 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.27724 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:14:42 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.18069 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:14:42 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.27740 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:14:42 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.18075 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:14:42 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1059: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:14:42 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.18081 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:14:42 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
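The monitor's periodic _set_new_cache_sizes line is easier to read in MiB; plain unit conversion of the logged values:

```python
for name, nbytes in [("cache_size", 1020054731),
                     ("inc_alloc", 348127232),
                     ("full_alloc", 348127232),
                     ("kv_alloc", 318767104)]:
    print(f"{name}: {nbytes / 2**20:.1f} MiB")
# cache_size: 972.8 MiB, inc_alloc/full_alloc: 332.0 MiB, kv_alloc: 304.0 MiB
```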
Feb  2 05:14:42 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:14:42 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 05:14:42 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:14:42 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb  2 05:14:42 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:14:42 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:14:42 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:14:42 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:14:42 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:14:42 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Feb  2 05:14:42 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:14:42 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Feb  2 05:14:42 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:14:42 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:14:42 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:14:42 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 05:14:42 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:14:42 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Feb  2 05:14:42 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:14:42 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:14:42 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:14:42 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 05:14:42 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:14:42 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb  2 05:14:42 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.27752 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:14:42 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:14:42 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:14:42 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:14:42.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:14:42 np0005604790 systemd[1]: virtsecretd.service: Deactivated successfully.
Feb  2 05:14:42 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0)
Feb  2 05:14:42 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4009326478' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Feb  2 05:14:43 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd stat"} v 0)
Feb  2 05:14:43 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3720502581' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Feb  2 05:14:43 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:14:43 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:14:43 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:14:43.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:14:43 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.18123 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:14:44 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.18132 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:14:44 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.27778 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 05:14:44 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.27797 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 05:14:44 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1060: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:14:44 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status"} v 0)
Feb  2 05:14:44 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3357825108' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
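Every handle_command / audit pair above is one JSON mon command dispatched to the leader. The same payloads can be sent from Python through the python-rados binding — a sketch assuming the default /etc/ceph/ceph.conf and an admin keyring on this node:

```python
import json
import rados  # python3-rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')  # assumed default path
cluster.connect()
# Same payload as the 'status' dispatch logged above:
ret, out, errs = cluster.mon_command(
    json.dumps({"prefix": "status", "format": "json-pretty"}), b'')
print(out.decode() if ret == 0 else errs)
cluster.shutdown()
```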
Feb  2 05:14:44 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:14:44 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:14:44 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:14:44.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:14:44 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:14:44] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Feb  2 05:14:44 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:14:44] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
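The pair above is the same scrape logged twice, once by the mgr container and once by the mgr's cherrypy access logger: Prometheus 2.51.0 pulling ~48 kB of metrics. The endpoint can also be fetched ad hoc — a sketch assuming the prometheus module's default port 9283, which the log itself does not show:

```python
import urllib.request

URL = 'http://192.168.122.100:9283/metrics'  # port 9283 is an assumption
with urllib.request.urlopen(URL, timeout=5) as resp:
    body = resp.read().decode()
print(len(body), 'bytes')       # the scrapes above returned 48456/48460 bytes
print(body.splitlines()[0])     # first exposition line, e.g. '# HELP ...'
```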
Feb  2 05:14:45 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "time-sync-status"} v 0)
Feb  2 05:14:45 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1863452279' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Feb  2 05:14:45 np0005604790 podman[280272]: 2026-02-02 10:14:45.36228434 +0000 UTC m=+0.080246021 container health_status 29bea7eb4976451e3dfdb1e9a3aba30b10e64cbce131bf8d09548b4a224f8ee8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Feb  2 05:14:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:14:45.387 165364 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 05:14:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:14:45.388 165364 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 05:14:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:14:45.388 165364 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 05:14:45 np0005604790 nova_compute[252672]: 2026-02-02 10:14:45.592 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:14:45 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:14:45 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:14:45 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:14:45.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:14:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:14:45 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:14:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:14:45 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:14:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:14:45 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:14:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:14:46 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:14:46 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump", "format": "json-pretty"} v 0)
Feb  2 05:14:46 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1925975289' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Feb  2 05:14:46 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.18186 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 05:14:46 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1061: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:14:46 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.27823 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 05:14:46 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:14:46 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:14:46 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:14:46.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:14:46 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.27854 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 05:14:46 np0005604790 nova_compute[252672]: 2026-02-02 10:14:46.809 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:14:46 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "detail": "detail", "format": "json-pretty"} v 0)
Feb  2 05:14:46 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1423557611' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Feb  2 05:14:47 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:14:47.190Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb  2 05:14:47 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:14:47.191Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
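Alertmanager gives up delivering to the ceph-dashboard webhook receivers on compute-1 and compute-2: first an i/o timeout, then "context deadline exceeded" after retries, i.e. port 8443 on both hosts is unreachable from this node. A quick stdlib reachability probe along the same lines (hostnames copied from the log; a diagnostic sketch, not Alertmanager's client):

```python
import socket

for host in ('compute-1.ctlplane.example.com', 'compute-2.ctlplane.example.com'):
    try:
        socket.create_connection((host, 8443), timeout=3).close()
        print(host, 'port 8443 reachable')
    except OSError as exc:
        print(host, 'port 8443 unreachable:', exc)
```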
Feb  2 05:14:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 05:14:47 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 05:14:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:14:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:14:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:14:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:14:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:14:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:14:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json-pretty"} v 0)
Feb  2 05:14:47 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2291812909' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Feb  2 05:14:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:14:47 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:14:47 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:14:47 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:14:47.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:14:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs dump", "format": "json-pretty"} v 0)
Feb  2 05:14:47 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2718334318' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Feb  2 05:14:47 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.27859 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 05:14:47 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.27887 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 05:14:48 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs ls", "format": "json-pretty"} v 0)
Feb  2 05:14:48 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2919304723' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Feb  2 05:14:48 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1062: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:14:48 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.18237 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 05:14:48 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.27880 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 05:14:48 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.27908 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 05:14:48 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:14:48 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:14:48 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:14:48.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:14:48 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.27892 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 05:14:49 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.27923 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 05:14:49 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds stat", "format": "json-pretty"} v 0)
Feb  2 05:14:49 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3905466608' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Feb  2 05:14:49 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json-pretty"} v 0)
Feb  2 05:14:49 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/806713623' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Feb  2 05:14:49 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:14:49 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:14:49 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:14:49.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:14:49 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd numa-status", "format": "json-pretty"} v 0)
Feb  2 05:14:49 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2696044831' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Feb  2 05:14:49 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.18285 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 05:14:50 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.27928 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 05:14:50 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.27931 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 05:14:50 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json-pretty"} v 0)
Feb  2 05:14:50 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3280711553' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Feb  2 05:14:50 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.27937 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 05:14:50 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:14:50 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 05:14:50 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:14:50 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb  2 05:14:50 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:14:50 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:14:50 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:14:50 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:14:50 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:14:50 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Feb  2 05:14:50 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:14:50 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Feb  2 05:14:50 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:14:50 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:14:50 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:14:50 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 05:14:50 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:14:50 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Feb  2 05:14:50 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:14:50 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:14:50 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:14:50 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 05:14:50 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:14:50 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb  2 05:14:50 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.27956 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 05:14:50 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:14:50 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 05:14:50 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:14:50 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb  2 05:14:50 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:14:50 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:14:50 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:14:50 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:14:50 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:14:50 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Feb  2 05:14:50 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:14:50 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Feb  2 05:14:50 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:14:50 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:14:50 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:14:50 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 05:14:50 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:14:50 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Feb  2 05:14:50 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:14:50 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:14:50 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:14:50 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 05:14:50 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:14:50 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb  2 05:14:50 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1063: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:14:50 np0005604790 nova_compute[252672]: 2026-02-02 10:14:50.595 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:14:50 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:14:50 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:14:50 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:14:50.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:14:50 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.18309 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 05:14:51 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:14:50 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:14:51 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:14:50 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:14:51 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:14:50 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:14:51 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:14:51 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:14:51 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.27965 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 05:14:51 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.27964 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 05:14:51 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.27980 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 05:14:51 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd dump", "format": "json-pretty"} v 0)
Feb  2 05:14:51 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2449344092' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Feb  2 05:14:51 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:14:51 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:14:51 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:14:51.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:14:51 np0005604790 nova_compute[252672]: 2026-02-02 10:14:51.811 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:14:51 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.27986 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 05:14:51 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd numa-status", "format": "json-pretty"} v 0)
Feb  2 05:14:51 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3134255602' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Feb  2 05:14:51 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.27973 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 05:14:52 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.18357 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 05:14:52 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Feb  2 05:14:52 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3010295768' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Feb  2 05:14:52 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1064: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:14:52 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:14:52 np0005604790 virtqemud[252362]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Feb  2 05:14:52 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:14:52 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:14:52 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:14:52.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:14:52 np0005604790 systemd[1]: Starting Time & Date Service...
Feb  2 05:14:52 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.18366 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 05:14:52 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:14:52 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 05:14:52 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:14:52 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb  2 05:14:52 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:14:52 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:14:52 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:14:52 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:14:52 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:14:52 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Feb  2 05:14:52 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:14:52 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Feb  2 05:14:52 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:14:52 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:14:52 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:14:52 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 05:14:52 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:14:52 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Feb  2 05:14:52 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:14:52 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:14:52 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:14:52 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 05:14:52 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:14:52 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb  2 05:14:52 np0005604790 systemd[1]: Started Time & Date Service.
Feb  2 05:14:53 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"} v 0)
Feb  2 05:14:53 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1316092978' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Feb  2 05:14:53 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd stat", "format": "json-pretty"} v 0)
Feb  2 05:14:53 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/159882353' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Feb  2 05:14:53 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:14:53 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:14:53 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:14:53.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:14:53 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.18393 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 05:14:54 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.18399 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 05:14:54 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1065: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:14:54 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:14:54 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:14:54 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:14:54.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:14:54 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Feb  2 05:14:54 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3895016293' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Feb  2 05:14:54 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:14:54] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Feb  2 05:14:54 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:14:54] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Feb  2 05:14:55 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 05:14:55 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3732477402' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Feb  2 05:14:55 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 05:14:55 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3732477402' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
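Unlike the client.admin polling above, these two dispatches arrive from entity client.openstack at 192.168.122.10 — the usage and quota checks an external OpenStack service such as Cinder runs against its 'volumes' pool (an inference from the entity name; the log does not name the caller). The equivalent calls through python-rados, assuming a keyring exists for that entity:

```python
import json
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                      name='client.openstack')  # entity from the audit line
cluster.connect()
for payload in ({"prefix": "df", "format": "json"},
                {"prefix": "osd pool get-quota", "pool": "volumes", "format": "json"}):
    ret, out, errs = cluster.mon_command(json.dumps(payload), b'')
    print(payload["prefix"], '->', ret)
cluster.shutdown()
```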
Feb  2 05:14:55 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "time-sync-status", "format": "json-pretty"} v 0)
Feb  2 05:14:55 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2205610481' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Feb  2 05:14:55 np0005604790 nova_compute[252672]: 2026-02-02 10:14:55.637 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:14:55 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:14:55 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:14:55 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:14:55.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:14:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:14:55 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:14:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:14:55 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:14:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:14:55 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:14:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:14:56 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:14:56 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1066: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:14:56 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:14:56 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:14:56 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:14:56.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:14:56 np0005604790 nova_compute[252672]: 2026-02-02 10:14:56.813 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:14:57 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:14:57.191Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:14:57 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:14:57 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:14:57 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:14:57 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:14:57.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:14:58 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1067: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:14:58 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:14:58 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:14:58 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:14:58.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:14:59 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:14:59 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:14:59 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:14:59.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:15:00 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1068: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:15:00 np0005604790 nova_compute[252672]: 2026-02-02 10:15:00.641 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:15:00 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:15:00 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:15:00 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:15:00.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:15:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:15:00 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:15:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:15:01 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:15:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:15:01 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:15:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:15:01 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:15:01 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:15:01 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:15:01 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:15:01.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:15:01 np0005604790 nova_compute[252672]: 2026-02-02 10:15:01.815 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:15:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 05:15:02 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 05:15:02 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1069: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:15:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:15:02 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:15:02 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:15:02 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:15:02.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:15:03 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:15:03 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:15:03 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:15:03.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:15:04 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1070: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:15:04 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:15:04 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:15:04 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:15:04.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:15:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:15:04] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Feb  2 05:15:04 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:15:04] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Feb  2 05:15:05 np0005604790 nova_compute[252672]: 2026-02-02 10:15:05.644 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:15:05 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:15:05 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:15:05 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:15:05.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:15:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:15:05 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:15:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:15:05 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:15:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:15:05 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:15:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:15:06 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:15:06 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1071: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:15:06 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:15:06 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000025s ======
Feb  2 05:15:06 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:15:06.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Feb  2 05:15:06 np0005604790 nova_compute[252672]: 2026-02-02 10:15:06.818 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:15:07 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:15:07.193Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:15:07 np0005604790 podman[281409]: 2026-02-02 10:15:07.369334871 +0000 UTC m=+0.089225123 container health_status e39122d9482da8df802204ab6c35fb7c982874580968140e6f06bdfc8eefae36 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127)
Feb  2 05:15:07 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:15:07 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:15:07 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:15:07 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:15:07.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:15:08 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1072: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:15:08 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:15:08 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:15:08 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:15:08.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:15:08 np0005604790 ceph-mgr[74785]: [dashboard INFO request] [192.168.122.100:47576] [POST] [200] [0.002s] [4.0B] [95a9bc4d-4b2e-4e93-97a4-b22056a67081] /api/prometheus_receiver
Feb  2 05:15:09 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:15:09 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:15:09 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:15:09.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:15:10 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1073: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:15:10 np0005604790 nova_compute[252672]: 2026-02-02 10:15:10.649 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:15:10 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:15:10 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:15:10 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:15:10.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:15:11 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:15:10 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:15:11 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:15:10 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:15:11 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:15:10 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:15:11 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:15:11 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:15:11 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:15:11 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:15:11 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:15:11.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:15:11 np0005604790 nova_compute[252672]: 2026-02-02 10:15:11.822 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:15:12 np0005604790 nova_compute[252672]: 2026-02-02 10:15:12.282 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:15:12 np0005604790 nova_compute[252672]: 2026-02-02 10:15:12.304 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 05:15:12 np0005604790 nova_compute[252672]: 2026-02-02 10:15:12.305 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 05:15:12 np0005604790 nova_compute[252672]: 2026-02-02 10:15:12.305 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 05:15:12 np0005604790 nova_compute[252672]: 2026-02-02 10:15:12.305 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb  2 05:15:12 np0005604790 nova_compute[252672]: 2026-02-02 10:15:12.306 252676 DEBUG oslo_concurrency.processutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb  2 05:15:12 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:15:12 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1074: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:15:12 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:15:12 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:15:12 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:15:12.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:15:12 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 05:15:12 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4029781014' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb  2 05:15:12 np0005604790 nova_compute[252672]: 2026-02-02 10:15:12.754 252676 DEBUG oslo_concurrency.processutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 05:15:12 np0005604790 nova_compute[252672]: 2026-02-02 10:15:12.871 252676 WARNING nova.virt.libvirt.driver [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb  2 05:15:12 np0005604790 nova_compute[252672]: 2026-02-02 10:15:12.873 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4339MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb  2 05:15:12 np0005604790 nova_compute[252672]: 2026-02-02 10:15:12.874 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 05:15:12 np0005604790 nova_compute[252672]: 2026-02-02 10:15:12.874 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 05:15:12 np0005604790 nova_compute[252672]: 2026-02-02 10:15:12.945 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb  2 05:15:12 np0005604790 nova_compute[252672]: 2026-02-02 10:15:12.945 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb  2 05:15:12 np0005604790 nova_compute[252672]: 2026-02-02 10:15:12.963 252676 DEBUG oslo_concurrency.processutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb  2 05:15:13 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 05:15:13 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3992078019' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb  2 05:15:13 np0005604790 nova_compute[252672]: 2026-02-02 10:15:13.418 252676 DEBUG oslo_concurrency.processutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 05:15:13 np0005604790 nova_compute[252672]: 2026-02-02 10:15:13.424 252676 DEBUG nova.compute.provider_tree [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Inventory has not changed in ProviderTree for provider: 9e3db6fc-2145-4f13-bc7c-f7ae57d4e004 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb  2 05:15:13 np0005604790 nova_compute[252672]: 2026-02-02 10:15:13.600 252676 DEBUG nova.scheduler.client.report [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Inventory has not changed for provider 9e3db6fc-2145-4f13-bc7c-f7ae57d4e004 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb  2 05:15:13 np0005604790 nova_compute[252672]: 2026-02-02 10:15:13.603 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb  2 05:15:13 np0005604790 nova_compute[252672]: 2026-02-02 10:15:13.604 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.730s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 05:15:14 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:15:14 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:15:14 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:15:14.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:15:14 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1075: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:15:14 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:15:14 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:15:14 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:15:14.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:15:14 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:15:14] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Feb  2 05:15:14 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:15:14] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Feb  2 05:15:15 np0005604790 nova_compute[252672]: 2026-02-02 10:15:15.604 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:15:15 np0005604790 nova_compute[252672]: 2026-02-02 10:15:15.605 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:15:15 np0005604790 nova_compute[252672]: 2026-02-02 10:15:15.605 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb  2 05:15:15 np0005604790 nova_compute[252672]: 2026-02-02 10:15:15.651 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:15:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:15:15 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:15:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:15:15 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:15:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:15:15 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:15:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:15:16 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:15:16 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:15:16 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:15:16 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:15:16.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:15:16 np0005604790 nova_compute[252672]: 2026-02-02 10:15:16.283 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:15:16 np0005604790 nova_compute[252672]: 2026-02-02 10:15:16.284 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb  2 05:15:16 np0005604790 nova_compute[252672]: 2026-02-02 10:15:16.284 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb  2 05:15:16 np0005604790 nova_compute[252672]: 2026-02-02 10:15:16.301 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb  2 05:15:16 np0005604790 nova_compute[252672]: 2026-02-02 10:15:16.301 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:15:16 np0005604790 podman[281511]: 2026-02-02 10:15:16.330343924 +0000 UTC m=+0.080098797 container health_status 29bea7eb4976451e3dfdb1e9a3aba30b10e64cbce131bf8d09548b4a224f8ee8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Feb  2 05:15:16 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1076: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:15:16 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:15:16 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:15:16 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:15:16.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:15:16 np0005604790 nova_compute[252672]: 2026-02-02 10:15:16.825 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:15:16 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 05:15:16 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 05:15:16 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 05:15:16 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb  2 05:15:16 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1077: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 0 op/s
Feb  2 05:15:16 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 05:15:16 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:15:16 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Feb  2 05:15:16 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:15:16 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 05:15:16 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb  2 05:15:16 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 05:15:16 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb  2 05:15:16 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 05:15:16 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 05:15:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] Optimize plan auto_2026-02-02_10:15:17
Feb  2 05:15:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 05:15:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] do_upmap
Feb  2 05:15:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.data', 'backups', 'default.rgw.control', '.mgr', '.nfs', 'images', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.meta']
Feb  2 05:15:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 05:15:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:15:17.194Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:15:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 05:15:17 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 05:15:17 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb  2 05:15:17 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:15:17 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:15:17 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb  2 05:15:17 np0005604790 nova_compute[252672]: 2026-02-02 10:15:17.282 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:15:17 np0005604790 nova_compute[252672]: 2026-02-02 10:15:17.282 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:15:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:15:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:15:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:15:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:15:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:15:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:15:17 np0005604790 podman[281671]: 2026-02-02 10:15:17.415758792 +0000 UTC m=+0.040979758 container create 010bf4a7dc1ff38b0b72cbdcc2cff839fec2af993501a24574155c3e147a12ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_rosalind, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True)
Feb  2 05:15:17 np0005604790 systemd[1]: Started libpod-conmon-010bf4a7dc1ff38b0b72cbdcc2cff839fec2af993501a24574155c3e147a12ec.scope.
Feb  2 05:15:17 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:15:17 np0005604790 podman[281671]: 2026-02-02 10:15:17.396922226 +0000 UTC m=+0.022143222 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:15:17 np0005604790 podman[281671]: 2026-02-02 10:15:17.500746835 +0000 UTC m=+0.125967891 container init 010bf4a7dc1ff38b0b72cbdcc2cff839fec2af993501a24574155c3e147a12ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_rosalind, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 05:15:17 np0005604790 podman[281671]: 2026-02-02 10:15:17.509950622 +0000 UTC m=+0.135171588 container start 010bf4a7dc1ff38b0b72cbdcc2cff839fec2af993501a24574155c3e147a12ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_rosalind, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 05:15:17 np0005604790 podman[281671]: 2026-02-02 10:15:17.513831492 +0000 UTC m=+0.139052488 container attach 010bf4a7dc1ff38b0b72cbdcc2cff839fec2af993501a24574155c3e147a12ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Feb  2 05:15:17 np0005604790 suspicious_rosalind[281688]: 167 167
Feb  2 05:15:17 np0005604790 systemd[1]: libpod-010bf4a7dc1ff38b0b72cbdcc2cff839fec2af993501a24574155c3e147a12ec.scope: Deactivated successfully.
Feb  2 05:15:17 np0005604790 podman[281671]: 2026-02-02 10:15:17.517106857 +0000 UTC m=+0.142327853 container died 010bf4a7dc1ff38b0b72cbdcc2cff839fec2af993501a24574155c3e147a12ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_rosalind, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 05:15:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 05:15:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 05:15:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 05:15:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 05:15:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 05:15:17 np0005604790 systemd[1]: var-lib-containers-storage-overlay-1a588db44896ce7d1dc2931cde263b6456940fff56e187f363a6d2eb68c270fb-merged.mount: Deactivated successfully.
Feb  2 05:15:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:15:17 np0005604790 podman[281671]: 2026-02-02 10:15:17.567436655 +0000 UTC m=+0.192657661 container remove 010bf4a7dc1ff38b0b72cbdcc2cff839fec2af993501a24574155c3e147a12ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_rosalind, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid)
Feb  2 05:15:17 np0005604790 systemd[1]: libpod-conmon-010bf4a7dc1ff38b0b72cbdcc2cff839fec2af993501a24574155c3e147a12ec.scope: Deactivated successfully.
Feb  2 05:15:17 np0005604790 podman[281713]: 2026-02-02 10:15:17.750292162 +0000 UTC m=+0.049303403 container create 864f4c893285f377d2f3e62608ed90fed14f641f23ba38a577ed8470251bf489 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_poitras, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb  2 05:15:17 np0005604790 systemd[1]: Started libpod-conmon-864f4c893285f377d2f3e62608ed90fed14f641f23ba38a577ed8470251bf489.scope.
Feb  2 05:15:17 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:15:17 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e513c0bf200ef035f966c51d995e854d3f635777eb2578f073964fd414b310d2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 05:15:17 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e513c0bf200ef035f966c51d995e854d3f635777eb2578f073964fd414b310d2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 05:15:17 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e513c0bf200ef035f966c51d995e854d3f635777eb2578f073964fd414b310d2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 05:15:17 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e513c0bf200ef035f966c51d995e854d3f635777eb2578f073964fd414b310d2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 05:15:17 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e513c0bf200ef035f966c51d995e854d3f635777eb2578f073964fd414b310d2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 05:15:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 05:15:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:15:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 05:15:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:15:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb  2 05:15:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:15:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:15:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:15:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:15:17 np0005604790 podman[281713]: 2026-02-02 10:15:17.818293696 +0000 UTC m=+0.117304967 container init 864f4c893285f377d2f3e62608ed90fed14f641f23ba38a577ed8470251bf489 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_poitras, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 05:15:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:15:17 np0005604790 podman[281713]: 2026-02-02 10:15:17.727056192 +0000 UTC m=+0.026067473 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:15:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Feb  2 05:15:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:15:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Feb  2 05:15:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:15:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:15:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:15:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 05:15:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:15:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Feb  2 05:15:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:15:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:15:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:15:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 05:15:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:15:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb  2 05:15:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 05:15:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 05:15:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 05:15:17 np0005604790 podman[281713]: 2026-02-02 10:15:17.824202758 +0000 UTC m=+0.123213999 container start 864f4c893285f377d2f3e62608ed90fed14f641f23ba38a577ed8470251bf489 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_poitras, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 05:15:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 05:15:17 np0005604790 podman[281713]: 2026-02-02 10:15:17.827823852 +0000 UTC m=+0.126835113 container attach 864f4c893285f377d2f3e62608ed90fed14f641f23ba38a577ed8470251bf489 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_poitras, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb  2 05:15:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 05:15:18 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:15:18 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:15:18 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:15:18.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
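[editor's note] The radosgw "beast:" access lines recur every couple of seconds as anonymous "HEAD /" probes from 192.168.122.100 and .102, which look like load-balancer health checks (an assumption based on their cadence). Their layout is fixed: request pointer, client IP, user, timestamp, request line, status, bytes, then latency. A hedged regex matching the format seen in this deployment (other rgw versions may add fields), handy for grepping latency outliers:

```python
import re

# Field layout as observed in the 'beast:' lines above.
BEAST = re.compile(
    r'beast: (?P<req>0x[0-9a-f]+): (?P<ip>\S+) - (?P<user>\S+) '
    r'\[(?P<ts>[^\]]+)\] "(?P<request>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) '
    r'.*latency=(?P<latency>[\d.]+)s'
)

line = ('beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous '
        '[02/Feb/2026:10:15:18.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
        'latency=0.000000000s')
m = BEAST.search(line)
print(m.group('ip'), m.group('status'), float(m.group('latency')))
# -> 192.168.122.102 200 0.0
```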
Feb  2 05:15:18 np0005604790 zealous_poitras[281730]: --> passed data devices: 0 physical, 1 LVM
Feb  2 05:15:18 np0005604790 zealous_poitras[281730]: --> All data devices are unavailable
Feb  2 05:15:18 np0005604790 systemd[1]: libpod-864f4c893285f377d2f3e62608ed90fed14f641f23ba38a577ed8470251bf489.scope: Deactivated successfully.
Feb  2 05:15:18 np0005604790 podman[281746]: 2026-02-02 10:15:18.248194716 +0000 UTC m=+0.036197745 container died 864f4c893285f377d2f3e62608ed90fed14f641f23ba38a577ed8470251bf489 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_poitras, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid)
Feb  2 05:15:18 np0005604790 systemd[1]: var-lib-containers-storage-overlay-e513c0bf200ef035f966c51d995e854d3f635777eb2578f073964fd414b310d2-merged.mount: Deactivated successfully.
Feb  2 05:15:18 np0005604790 podman[281746]: 2026-02-02 10:15:18.295804504 +0000 UTC m=+0.083807533 container remove 864f4c893285f377d2f3e62608ed90fed14f641f23ba38a577ed8470251bf489 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_poitras, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 05:15:18 np0005604790 systemd[1]: libpod-conmon-864f4c893285f377d2f3e62608ed90fed14f641f23ba38a577ed8470251bf489.scope: Deactivated successfully.
Feb  2 05:15:18 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:15:18 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:15:18 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:15:18.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:15:18 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:15:18.845Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb  2 05:15:18 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:15:18.848Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
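[editor's note] Both dashboard webhook receivers fail here: compute-1 with an i/o timeout on the TCP dial and compute-2 with a context deadline, so Alertmanager drops the notification after two attempts. Before digging into the ceph-dashboard module itself, a quick probe of the receiver endpoint distinguishes a dead service from a filtered port. A minimal sketch, with the URL copied verbatim from the error above and an empty alert list as a hypothetical placeholder payload:

```python
import json
import urllib.request

url = "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver"
payload = json.dumps({"alerts": []}).encode()
req = urllib.request.Request(url, data=payload,
                             headers={"Content-Type": "application/json"})
try:
    with urllib.request.urlopen(req, timeout=5) as resp:
        print("receiver answered:", resp.status)
except OSError as exc:   # URLError/timeouts are OSError subclasses
    print("receiver unreachable:", exc)
```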
Feb  2 05:15:18 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1078: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 0 op/s
Feb  2 05:15:18 np0005604790 podman[281854]: 2026-02-02 10:15:18.861729582 +0000 UTC m=+0.059918257 container create f9be1246397c8f95825b79bd8fd8d1ad145f9398210bac1714fd5b2fd3e98dde (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_nash, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 05:15:18 np0005604790 systemd[1]: Started libpod-conmon-f9be1246397c8f95825b79bd8fd8d1ad145f9398210bac1714fd5b2fd3e98dde.scope.
Feb  2 05:15:18 np0005604790 podman[281854]: 2026-02-02 10:15:18.832701263 +0000 UTC m=+0.030890028 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:15:18 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:15:18 np0005604790 podman[281854]: 2026-02-02 10:15:18.947828753 +0000 UTC m=+0.146017508 container init f9be1246397c8f95825b79bd8fd8d1ad145f9398210bac1714fd5b2fd3e98dde (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_nash, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Feb  2 05:15:18 np0005604790 podman[281854]: 2026-02-02 10:15:18.95857426 +0000 UTC m=+0.156762935 container start f9be1246397c8f95825b79bd8fd8d1ad145f9398210bac1714fd5b2fd3e98dde (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_nash, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Feb  2 05:15:18 np0005604790 podman[281854]: 2026-02-02 10:15:18.962850841 +0000 UTC m=+0.161039546 container attach f9be1246397c8f95825b79bd8fd8d1ad145f9398210bac1714fd5b2fd3e98dde (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_nash, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Feb  2 05:15:18 np0005604790 lucid_nash[281871]: 167 167
Feb  2 05:15:18 np0005604790 systemd[1]: libpod-f9be1246397c8f95825b79bd8fd8d1ad145f9398210bac1714fd5b2fd3e98dde.scope: Deactivated successfully.
Feb  2 05:15:18 np0005604790 podman[281854]: 2026-02-02 10:15:18.967728406 +0000 UTC m=+0.165917121 container died f9be1246397c8f95825b79bd8fd8d1ad145f9398210bac1714fd5b2fd3e98dde (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_nash, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Feb  2 05:15:18 np0005604790 systemd[1]: var-lib-containers-storage-overlay-38ee234d731a678f24794511d6348c9e36beaf8e913b81a4e192ae50f1c76bbc-merged.mount: Deactivated successfully.
Feb  2 05:15:19 np0005604790 podman[281854]: 2026-02-02 10:15:19.020331883 +0000 UTC m=+0.218520568 container remove f9be1246397c8f95825b79bd8fd8d1ad145f9398210bac1714fd5b2fd3e98dde (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_nash, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb  2 05:15:19 np0005604790 systemd[1]: libpod-conmon-f9be1246397c8f95825b79bd8fd8d1ad145f9398210bac1714fd5b2fd3e98dde.scope: Deactivated successfully.
Feb  2 05:15:19 np0005604790 podman[281898]: 2026-02-02 10:15:19.234064067 +0000 UTC m=+0.083342361 container create af19e334e85911c13139c76126f16d2d123da274b808240796bdada737ddda7b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_cray, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 05:15:19 np0005604790 systemd[1]: Started libpod-conmon-af19e334e85911c13139c76126f16d2d123da274b808240796bdada737ddda7b.scope.
Feb  2 05:15:19 np0005604790 podman[281898]: 2026-02-02 10:15:19.194775483 +0000 UTC m=+0.044053847 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:15:19 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:15:19 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b91efd3d446c65550ad9db88402f5291b44743c0b69ca200d84b2bf444581a5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 05:15:19 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b91efd3d446c65550ad9db88402f5291b44743c0b69ca200d84b2bf444581a5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 05:15:19 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b91efd3d446c65550ad9db88402f5291b44743c0b69ca200d84b2bf444581a5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 05:15:19 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b91efd3d446c65550ad9db88402f5291b44743c0b69ca200d84b2bf444581a5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 05:15:19 np0005604790 podman[281898]: 2026-02-02 10:15:19.339838865 +0000 UTC m=+0.189117159 container init af19e334e85911c13139c76126f16d2d123da274b808240796bdada737ddda7b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_cray, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 05:15:19 np0005604790 podman[281898]: 2026-02-02 10:15:19.350842039 +0000 UTC m=+0.200120353 container start af19e334e85911c13139c76126f16d2d123da274b808240796bdada737ddda7b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_cray, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 05:15:19 np0005604790 podman[281898]: 2026-02-02 10:15:19.368545596 +0000 UTC m=+0.217823890 container attach af19e334e85911c13139c76126f16d2d123da274b808240796bdada737ddda7b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Feb  2 05:15:19 np0005604790 charming_cray[281916]: {
Feb  2 05:15:19 np0005604790 charming_cray[281916]:    "1": [
Feb  2 05:15:19 np0005604790 charming_cray[281916]:        {
Feb  2 05:15:19 np0005604790 charming_cray[281916]:            "devices": [
Feb  2 05:15:19 np0005604790 charming_cray[281916]:                "/dev/loop3"
Feb  2 05:15:19 np0005604790 charming_cray[281916]:            ],
Feb  2 05:15:19 np0005604790 charming_cray[281916]:            "lv_name": "ceph_lv0",
Feb  2 05:15:19 np0005604790 charming_cray[281916]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 05:15:19 np0005604790 charming_cray[281916]:            "lv_size": "21470642176",
Feb  2 05:15:19 np0005604790 charming_cray[281916]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d241d473-9fcb-5f74-b163-f1ca4454e7f1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fabfc705-a3af-416c-81a4-3fd4d777fb5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 05:15:19 np0005604790 charming_cray[281916]:            "lv_uuid": "lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a",
Feb  2 05:15:19 np0005604790 charming_cray[281916]:            "name": "ceph_lv0",
Feb  2 05:15:19 np0005604790 charming_cray[281916]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 05:15:19 np0005604790 charming_cray[281916]:            "tags": {
Feb  2 05:15:19 np0005604790 charming_cray[281916]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 05:15:19 np0005604790 charming_cray[281916]:                "ceph.block_uuid": "lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a",
Feb  2 05:15:19 np0005604790 charming_cray[281916]:                "ceph.cephx_lockbox_secret": "",
Feb  2 05:15:19 np0005604790 charming_cray[281916]:                "ceph.cluster_fsid": "d241d473-9fcb-5f74-b163-f1ca4454e7f1",
Feb  2 05:15:19 np0005604790 charming_cray[281916]:                "ceph.cluster_name": "ceph",
Feb  2 05:15:19 np0005604790 charming_cray[281916]:                "ceph.crush_device_class": "",
Feb  2 05:15:19 np0005604790 charming_cray[281916]:                "ceph.encrypted": "0",
Feb  2 05:15:19 np0005604790 charming_cray[281916]:                "ceph.osd_fsid": "fabfc705-a3af-416c-81a4-3fd4d777fb5f",
Feb  2 05:15:19 np0005604790 charming_cray[281916]:                "ceph.osd_id": "1",
Feb  2 05:15:19 np0005604790 charming_cray[281916]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 05:15:19 np0005604790 charming_cray[281916]:                "ceph.type": "block",
Feb  2 05:15:19 np0005604790 charming_cray[281916]:                "ceph.vdo": "0",
Feb  2 05:15:19 np0005604790 charming_cray[281916]:                "ceph.with_tpm": "0"
Feb  2 05:15:19 np0005604790 charming_cray[281916]:            },
Feb  2 05:15:19 np0005604790 charming_cray[281916]:            "type": "block",
Feb  2 05:15:19 np0005604790 charming_cray[281916]:            "vg_name": "ceph_vg0"
Feb  2 05:15:19 np0005604790 charming_cray[281916]:        }
Feb  2 05:15:19 np0005604790 charming_cray[281916]:    ]
Feb  2 05:15:19 np0005604790 charming_cray[281916]: }
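[editor's note] The 'charming_cray' container is cephadm running a ceph-volume listing inside the ceph image: it reports OSD 1's backing LV (ceph_vg0/ceph_lv0 on /dev/loop3) with its ceph.* LVM tags twice, once as the raw comma-separated "lv_tags" string and once as the parsed "tags" map. A minimal sketch of that parsing (naive split; assumes no commas or '=' inside tag values, which holds for the tags shown above):

```python
def parse_lv_tags(lv_tags: str) -> dict:
    """Split the comma-separated ceph.* LVM tags ('lv_tags' above)
    into the same mapping ceph-volume reports under 'tags'."""
    return dict(kv.split("=", 1) for kv in lv_tags.split(",") if kv)

tags = parse_lv_tags(
    "ceph.block_device=/dev/ceph_vg0/ceph_lv0,"
    "ceph.osd_id=1,ceph.osd_fsid=fabfc705-a3af-416c-81a4-3fd4d777fb5f"
)
print(tags["ceph.osd_id"])   # -> 1
```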
Feb  2 05:15:19 np0005604790 systemd[1]: libpod-af19e334e85911c13139c76126f16d2d123da274b808240796bdada737ddda7b.scope: Deactivated successfully.
Feb  2 05:15:19 np0005604790 podman[281925]: 2026-02-02 10:15:19.660592589 +0000 UTC m=+0.023641171 container died af19e334e85911c13139c76126f16d2d123da274b808240796bdada737ddda7b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_cray, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 05:15:19 np0005604790 systemd[1]: var-lib-containers-storage-overlay-0b91efd3d446c65550ad9db88402f5291b44743c0b69ca200d84b2bf444581a5-merged.mount: Deactivated successfully.
Feb  2 05:15:19 np0005604790 podman[281925]: 2026-02-02 10:15:19.697470321 +0000 UTC m=+0.060518863 container remove af19e334e85911c13139c76126f16d2d123da274b808240796bdada737ddda7b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Feb  2 05:15:19 np0005604790 systemd[1]: libpod-conmon-af19e334e85911c13139c76126f16d2d123da274b808240796bdada737ddda7b.scope: Deactivated successfully.
Feb  2 05:15:20 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:15:20 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:15:20 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:15:20.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:15:20 np0005604790 nova_compute[252672]: 2026-02-02 10:15:20.278 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 05:15:20 np0005604790 podman[282028]: 2026-02-02 10:15:20.311391776 +0000 UTC m=+0.060327317 container create 12648fbe6d8fb9103c098e475053f84b3f65f2b9cfd8dada8835bbb4c3c8f0a4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_zhukovsky, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb  2 05:15:20 np0005604790 systemd[1]: Started libpod-conmon-12648fbe6d8fb9103c098e475053f84b3f65f2b9cfd8dada8835bbb4c3c8f0a4.scope.
Feb  2 05:15:20 np0005604790 podman[282028]: 2026-02-02 10:15:20.280430817 +0000 UTC m=+0.029366408 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:15:20 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:15:20 np0005604790 podman[282028]: 2026-02-02 10:15:20.425867399 +0000 UTC m=+0.174802950 container init 12648fbe6d8fb9103c098e475053f84b3f65f2b9cfd8dada8835bbb4c3c8f0a4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 05:15:20 np0005604790 podman[282028]: 2026-02-02 10:15:20.434911442 +0000 UTC m=+0.183846943 container start 12648fbe6d8fb9103c098e475053f84b3f65f2b9cfd8dada8835bbb4c3c8f0a4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_zhukovsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb  2 05:15:20 np0005604790 podman[282028]: 2026-02-02 10:15:20.438462184 +0000 UTC m=+0.187397735 container attach 12648fbe6d8fb9103c098e475053f84b3f65f2b9cfd8dada8835bbb4c3c8f0a4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_zhukovsky, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 05:15:20 np0005604790 laughing_zhukovsky[282044]: 167 167
Feb  2 05:15:20 np0005604790 systemd[1]: libpod-12648fbe6d8fb9103c098e475053f84b3f65f2b9cfd8dada8835bbb4c3c8f0a4.scope: Deactivated successfully.
Feb  2 05:15:20 np0005604790 podman[282028]: 2026-02-02 10:15:20.442473157 +0000 UTC m=+0.191408698 container died 12648fbe6d8fb9103c098e475053f84b3f65f2b9cfd8dada8835bbb4c3c8f0a4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_zhukovsky, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb  2 05:15:20 np0005604790 systemd[1]: var-lib-containers-storage-overlay-e92bd9e281145cb1f0835534ad38c1f2a7e1987c8c149404010d3eec30c1f7fa-merged.mount: Deactivated successfully.
Feb  2 05:15:20 np0005604790 podman[282028]: 2026-02-02 10:15:20.492013335 +0000 UTC m=+0.240948836 container remove 12648fbe6d8fb9103c098e475053f84b3f65f2b9cfd8dada8835bbb4c3c8f0a4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_zhukovsky, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 05:15:20 np0005604790 systemd[1]: libpod-conmon-12648fbe6d8fb9103c098e475053f84b3f65f2b9cfd8dada8835bbb4c3c8f0a4.scope: Deactivated successfully.
Feb  2 05:15:20 np0005604790 nova_compute[252672]: 2026-02-02 10:15:20.655 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:15:20 np0005604790 podman[282070]: 2026-02-02 10:15:20.660967234 +0000 UTC m=+0.055095833 container create c57c6f5deacee1fa4275dc9f5224ca888ab1104f6f49c4260d64b87fdc1cf05c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_bouman, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 05:15:20 np0005604790 systemd[1]: Started libpod-conmon-c57c6f5deacee1fa4275dc9f5224ca888ab1104f6f49c4260d64b87fdc1cf05c.scope.
Feb  2 05:15:20 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:15:20 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:15:20 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:15:20.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:15:20 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:15:20 np0005604790 podman[282070]: 2026-02-02 10:15:20.633570807 +0000 UTC m=+0.027699446 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:15:20 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53abaae354ac35aea859643a4191773e477bbec44934da077acd7bf69d7b079a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 05:15:20 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53abaae354ac35aea859643a4191773e477bbec44934da077acd7bf69d7b079a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 05:15:20 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53abaae354ac35aea859643a4191773e477bbec44934da077acd7bf69d7b079a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 05:15:20 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53abaae354ac35aea859643a4191773e477bbec44934da077acd7bf69d7b079a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 05:15:20 np0005604790 podman[282070]: 2026-02-02 10:15:20.763405226 +0000 UTC m=+0.157533825 container init c57c6f5deacee1fa4275dc9f5224ca888ab1104f6f49c4260d64b87fdc1cf05c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Feb  2 05:15:20 np0005604790 podman[282070]: 2026-02-02 10:15:20.773562728 +0000 UTC m=+0.167691327 container start c57c6f5deacee1fa4275dc9f5224ca888ab1104f6f49c4260d64b87fdc1cf05c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_bouman, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 05:15:20 np0005604790 podman[282070]: 2026-02-02 10:15:20.78255677 +0000 UTC m=+0.176685419 container attach c57c6f5deacee1fa4275dc9f5224ca888ab1104f6f49c4260d64b87fdc1cf05c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_bouman, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Feb  2 05:15:20 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1079: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 0 op/s
Feb  2 05:15:21 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:15:20 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:15:21 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:15:21 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:15:21 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:15:21 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:15:21 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:15:21 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:15:21 np0005604790 lvm[282161]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 05:15:21 np0005604790 lvm[282161]: VG ceph_vg0 finished
Feb  2 05:15:21 np0005604790 charming_bouman[282087]: {}
Feb  2 05:15:21 np0005604790 systemd[1]: libpod-c57c6f5deacee1fa4275dc9f5224ca888ab1104f6f49c4260d64b87fdc1cf05c.scope: Deactivated successfully.
Feb  2 05:15:21 np0005604790 podman[282070]: 2026-02-02 10:15:21.493769276 +0000 UTC m=+0.887897865 container died c57c6f5deacee1fa4275dc9f5224ca888ab1104f6f49c4260d64b87fdc1cf05c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb  2 05:15:21 np0005604790 systemd[1]: var-lib-containers-storage-overlay-53abaae354ac35aea859643a4191773e477bbec44934da077acd7bf69d7b079a-merged.mount: Deactivated successfully.
Feb  2 05:15:21 np0005604790 podman[282070]: 2026-02-02 10:15:21.61756909 +0000 UTC m=+1.011697659 container remove c57c6f5deacee1fa4275dc9f5224ca888ab1104f6f49c4260d64b87fdc1cf05c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_bouman, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 05:15:21 np0005604790 systemd[1]: libpod-conmon-c57c6f5deacee1fa4275dc9f5224ca888ab1104f6f49c4260d64b87fdc1cf05c.scope: Deactivated successfully.
Feb  2 05:15:21 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 05:15:21 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:15:21 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 05:15:21 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:15:21 np0005604790 nova_compute[252672]: 2026-02-02 10:15:21.831 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:15:22 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:15:22 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000025s ======
Feb  2 05:15:22 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:15:22.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Feb  2 05:15:22 np0005604790 nova_compute[252672]: 2026-02-02 10:15:22.282 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 05:15:22 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:15:22 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:15:22 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:15:22 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:15:22 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:15:22 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:15:22.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:15:22 np0005604790 systemd[1]: systemd-timedated.service: Deactivated successfully.
Feb  2 05:15:22 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1080: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 0 op/s
Feb  2 05:15:22 np0005604790 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Feb  2 05:15:24 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:15:24 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:15:24 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:15:24.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:15:24 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:15:24 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000025s ======
Feb  2 05:15:24 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:15:24.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Feb  2 05:15:24 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1081: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 0 op/s
Feb  2 05:15:24 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:15:24] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Feb  2 05:15:24 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:15:24] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Feb  2 05:15:25 np0005604790 nova_compute[252672]: 2026-02-02 10:15:25.660 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:15:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:15:25 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:15:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:15:25 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:15:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:15:25 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:15:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:15:26 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:15:26 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:15:26 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:15:26 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:15:26.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:15:26 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:15:26 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:15:26 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:15:26.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:15:26 np0005604790 nova_compute[252672]: 2026-02-02 10:15:26.833 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:15:26 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1082: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 0 op/s
Feb  2 05:15:27 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:15:27.195Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:15:27 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:15:28 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:15:28 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:15:28 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:15:28.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:15:28 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:15:28 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:15:28 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:15:28.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:15:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:15:28.848Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:15:28 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1083: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:15:30 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:15:30 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:15:30 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:15:30.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:15:30 np0005604790 nova_compute[252672]: 2026-02-02 10:15:30.663 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:15:30 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:15:30 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:15:30 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:15:30.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:15:30 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1084: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:15:31 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:15:30 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:15:31 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:15:30 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:15:31 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:15:30 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:15:31 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:15:31 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:15:31 np0005604790 nova_compute[252672]: 2026-02-02 10:15:31.839 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:15:32 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:15:32 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:15:32 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:15:32.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:15:32 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 05:15:32 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 05:15:32 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:15:32 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:15:32 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:15:32 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:15:32.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:15:32 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1085: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:15:34 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:15:34 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:15:34 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:15:34.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:15:34 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:15:34 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:15:34 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:15:34.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:15:34 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1086: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:15:34 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:15:34] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Feb  2 05:15:34 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:15:34] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Feb  2 05:15:35 np0005604790 nova_compute[252672]: 2026-02-02 10:15:35.711 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:15:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:15:35 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:15:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:15:35 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:15:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:15:35 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:15:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:15:36 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:15:36 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:15:36 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:15:36 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:15:36.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:15:36 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:15:36 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:15:36 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:15:36.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:15:36 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1087: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:15:36 np0005604790 nova_compute[252672]: 2026-02-02 10:15:36.880 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:15:37 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:15:37.198Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:15:37 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:15:38 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:15:38 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:15:38 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:15:38.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:15:38 np0005604790 podman[282248]: 2026-02-02 10:15:38.396231542 +0000 UTC m=+0.102488075 container health_status e39122d9482da8df802204ab6c35fb7c982874580968140e6f06bdfc8eefae36 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb  2 05:15:38 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:15:38 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:15:38 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:15:38.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:15:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:15:38.849Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb  2 05:15:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:15:38.850Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb  2 05:15:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:15:38.850Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
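[editor note] Alertmanager's ceph-dashboard webhook cannot reach its receivers: posts to compute-1 and compute-2 on port 8443 fail with dial i/o timeouts and context deadlines, and the dispatcher gives up after its retry budget. A quick reachability probe against the same URLs (taken verbatim from the log lines above); the empty JSON body is a placeholder, not what Alertmanager actually posts:

```python
import json
import urllib.request

RECEIVERS = [
    'http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver',
    'http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver',
]

def probe(url, timeout=5):
    """POST an empty JSON document and report reachability."""
    req = urllib.request.Request(
        url,
        data=json.dumps({}).encode(),
        headers={'Content-Type': 'application/json'},
        method='POST',
    )
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return f'{url}: HTTP {resp.status}'
    except OSError as exc:  # URLError / timeouts are OSError subclasses
        return f'{url}: unreachable ({exc})'

for url in RECEIVERS:
    print(probe(url))
```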
Feb  2 05:15:38 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1088: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:15:40 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:15:40 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:15:40 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:15:40.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:15:40 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:15:40 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000025s ======
Feb  2 05:15:40 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:15:40.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Feb  2 05:15:40 np0005604790 nova_compute[252672]: 2026-02-02 10:15:40.754 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:15:40 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1089: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:15:41 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:15:40 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:15:41 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:15:40 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:15:41 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:15:40 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:15:41 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:15:41 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:15:41 np0005604790 nova_compute[252672]: 2026-02-02 10:15:41.938 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:15:42 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:15:42 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:15:42 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:15:42.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:15:42 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:15:42 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:15:42 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:15:42 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:15:42.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:15:42 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1090: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:15:44 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:15:44 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:15:44 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:15:44.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:15:44 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:15:44 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:15:44 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:15:44.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:15:44 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1091: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:15:44 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:15:44] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Feb  2 05:15:44 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:15:44] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
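[editor note] Prometheus 2.51.0 scrapes the ceph-mgr prometheus module every ten seconds and gets roughly 48 kB of metrics back; each scrape is logged twice, once on the mgr container's stdout and once by the mgr's cherrypy access logger. A fetch sketch against the same endpoint; both the host and the port (9283, the module's conventional default) are assumptions, since neither appears in these lines:

```python
import urllib.request

# Host and port are assumptions: 9283 is the ceph-mgr prometheus module's
# usual default, and the actual listen address is not visible in this log.
URL = 'http://192.168.122.100:9283/metrics'

with urllib.request.urlopen(URL, timeout=5) as resp:
    body = resp.read().decode()

samples = [l for l in body.splitlines() if l and not l.startswith('#')]
print(f'{len(samples)} metric samples, {len(body)} bytes')
```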
Feb  2 05:15:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:15:45.388 165364 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 05:15:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:15:45.388 165364 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 05:15:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:15:45.389 165364 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 05:15:45 np0005604790 nova_compute[252672]: 2026-02-02 10:15:45.805 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:15:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:15:45 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:15:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:15:45 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:15:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:15:45 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:15:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:15:46 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:15:46 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:15:46 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:15:46 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:15:46.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:15:46 np0005604790 systemd[1]: session-56.scope: Deactivated successfully.
Feb  2 05:15:46 np0005604790 systemd[1]: session-56.scope: Consumed 2min 48.171s CPU time, 867.5M memory peak, read 334.4M from disk, written 276.1M to disk.
Feb  2 05:15:46 np0005604790 systemd-logind[793]: Session 56 logged out. Waiting for processes to exit.
Feb  2 05:15:46 np0005604790 systemd-logind[793]: Removed session 56.
Feb  2 05:15:46 np0005604790 podman[282308]: 2026-02-02 10:15:46.677549252 +0000 UTC m=+0.064589647 container health_status 29bea7eb4976451e3dfdb1e9a3aba30b10e64cbce131bf8d09548b4a224f8ee8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
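[editor note] The podman health_status events (here for ovn_metadata_agent, earlier for ovn_controller) report healthy with a failing streak of 0; per the embedded config_data, the health command is the mounted /openstack/healthcheck script. The same check can be driven on demand with podman's healthcheck subcommand, as in this sketch (exit status 0 means healthy):

```python
import subprocess

def container_healthy(name):
    """Run the container's own healthcheck via podman; True on exit 0."""
    result = subprocess.run(
        ['podman', 'healthcheck', 'run', name],
        capture_output=True, text=True,
    )
    return result.returncode == 0

for name in ('ovn_controller', 'ovn_metadata_agent'):
    print(name, 'healthy' if container_healthy(name) else 'unhealthy')
```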
Feb  2 05:15:46 np0005604790 systemd-logind[793]: New session 57 of user zuul.
Feb  2 05:15:46 np0005604790 systemd[1]: Started Session 57 of User zuul.
Feb  2 05:15:46 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:15:46 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:15:46 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:15:46.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:15:46 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1092: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:15:46 np0005604790 nova_compute[252672]: 2026-02-02 10:15:46.951 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:15:46 np0005604790 systemd[1]: session-57.scope: Deactivated successfully.
Feb  2 05:15:46 np0005604790 systemd-logind[793]: Session 57 logged out. Waiting for processes to exit.
Feb  2 05:15:46 np0005604790 systemd-logind[793]: Removed session 57.
Feb  2 05:15:47 np0005604790 systemd-logind[793]: New session 58 of user zuul.
Feb  2 05:15:47 np0005604790 systemd[1]: Started Session 58 of User zuul.
Feb  2 05:15:47 np0005604790 systemd[1]: session-58.scope: Deactivated successfully.
Feb  2 05:15:47 np0005604790 systemd-logind[793]: Session 58 logged out. Waiting for processes to exit.
Feb  2 05:15:47 np0005604790 systemd-logind[793]: Removed session 58.
Feb  2 05:15:47 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:15:47.198Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:15:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 05:15:47 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 05:15:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:15:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:15:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:15:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:15:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:15:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:15:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:15:48 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:15:48 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:15:48 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:15:48.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:15:48 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:15:48 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:15:48 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:15:48.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:15:48 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:15:48.851Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:15:48 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1093: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:15:50 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:15:50 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:15:50 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:15:50.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:15:50 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:15:50 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:15:50 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:15:50.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:15:50 np0005604790 nova_compute[252672]: 2026-02-02 10:15:50.808 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:15:50 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1094: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:15:51 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:15:50 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:15:51 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:15:50 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:15:51 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:15:50 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:15:51 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:15:51 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:15:51 np0005604790 nova_compute[252672]: 2026-02-02 10:15:51.955 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:15:52 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:15:52 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:15:52 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:15:52.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:15:52 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:15:52 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:15:52 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:15:52 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:15:52.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:15:52 np0005604790 ceph-mon[74489]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #66. Immutable memtables: 0.
Feb  2 05:15:52 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:15:52.793482) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb  2 05:15:52 np0005604790 ceph-mon[74489]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 66
Feb  2 05:15:52 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770027352793589, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 2493, "num_deletes": 251, "total_data_size": 4404967, "memory_usage": 4476752, "flush_reason": "Manual Compaction"}
Feb  2 05:15:52 np0005604790 ceph-mon[74489]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #67: started
Feb  2 05:15:52 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770027352847415, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 67, "file_size": 4262073, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 29502, "largest_seqno": 31993, "table_properties": {"data_size": 4250121, "index_size": 7486, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3397, "raw_key_size": 29962, "raw_average_key_size": 22, "raw_value_size": 4224550, "raw_average_value_size": 3138, "num_data_blocks": 319, "num_entries": 1346, "num_filter_entries": 1346, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770027157, "oldest_key_time": 1770027157, "file_creation_time": 1770027352, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "07840aea-639a-4cd3-a598-1774a042b57b", "db_session_id": "W2PO4QU95YGVZQBG6TZ2", "orig_file_number": 67, "seqno_to_time_mapping": "N/A"}}
Feb  2 05:15:52 np0005604790 ceph-mon[74489]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 53990 microseconds, and 10208 cpu microseconds.
Feb  2 05:15:52 np0005604790 ceph-mon[74489]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 05:15:52 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:15:52.847480) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #67: 4262073 bytes OK
Feb  2 05:15:52 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:15:52.847548) [db/memtable_list.cc:519] [default] Level-0 commit table #67 started
Feb  2 05:15:52 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:15:52.850971) [db/memtable_list.cc:722] [default] Level-0 commit table #67: memtable #1 done
Feb  2 05:15:52 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:15:52.850988) EVENT_LOG_v1 {"time_micros": 1770027352850983, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb  2 05:15:52 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:15:52.851008) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb  2 05:15:52 np0005604790 ceph-mon[74489]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 4393757, prev total WAL file size 4393757, number of live WAL files 2.
Feb  2 05:15:52 np0005604790 ceph-mon[74489]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000063.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 05:15:52 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:15:52.851652) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Feb  2 05:15:52 np0005604790 ceph-mon[74489]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb  2 05:15:52 np0005604790 ceph-mon[74489]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [67(4162KB)], [65(11MB)]
Feb  2 05:15:52 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770027352851719, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [67], "files_L6": [65], "score": -1, "input_data_size": 15907148, "oldest_snapshot_seqno": -1}
Feb  2 05:15:52 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1095: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:15:52 np0005604790 ceph-mon[74489]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #68: 6608 keys, 13905645 bytes, temperature: kUnknown
Feb  2 05:15:52 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770027352987588, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 68, "file_size": 13905645, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13862514, "index_size": 25483, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16581, "raw_key_size": 169761, "raw_average_key_size": 25, "raw_value_size": 13744874, "raw_average_value_size": 2080, "num_data_blocks": 1023, "num_entries": 6608, "num_filter_entries": 6608, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770025063, "oldest_key_time": 0, "file_creation_time": 1770027352, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "07840aea-639a-4cd3-a598-1774a042b57b", "db_session_id": "W2PO4QU95YGVZQBG6TZ2", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Feb  2 05:15:52 np0005604790 ceph-mon[74489]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 05:15:52 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:15:52.988003) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 13905645 bytes
Feb  2 05:15:52 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:15:52.990587) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 117.0 rd, 102.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(4.1, 11.1 +0.0 blob) out(13.3 +0.0 blob), read-write-amplify(7.0) write-amplify(3.3) OK, records in: 7129, records dropped: 521 output_compression: NoCompression
Feb  2 05:15:52 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:15:52.990622) EVENT_LOG_v1 {"time_micros": 1770027352990605, "job": 36, "event": "compaction_finished", "compaction_time_micros": 135994, "compaction_time_cpu_micros": 21421, "output_level": 6, "num_output_files": 1, "total_output_size": 13905645, "num_input_records": 7129, "num_output_records": 6608, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb  2 05:15:52 np0005604790 ceph-mon[74489]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000067.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 05:15:52 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770027352991717, "job": 36, "event": "table_file_deletion", "file_number": 67}
Feb  2 05:15:52 np0005604790 ceph-mon[74489]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 05:15:52 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770027352994253, "job": 36, "event": "table_file_deletion", "file_number": 65}
Feb  2 05:15:52 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:15:52.851565) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 05:15:52 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:15:52.994436) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 05:15:52 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:15:52.994449) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 05:15:52 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:15:52.994452) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 05:15:52 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:15:52.994456) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 05:15:52 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:15:52.994460) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
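[editor note] The rocksdb burst above is the mon compacting its store: job 35 flushes a 4,262,073-byte memtable to L0 table #67, and job 36 immediately merges that table with the existing L6 file #65 (input_data_size 15,907,148 bytes in total) into the single 13,905,645-byte L6 table #68. The logged amplification factors follow directly from those numbers: write-amplify 3.3 ≈ 13905645 / 4262073, and read-write-amplify 7.0 ≈ (15907148 + 13905645) / 4262073. A sketch that recomputes this per compaction from the EVENT_LOG_v1 JSON lines (treating the L0 inputs as the "new" bytes, an approximation of rocksdb's own accounting):

```python
import json
import re

EVENT_RE = re.compile(r'EVENT_LOG_v1 (\{.*\})')

def compaction_amplification(path):
    """Recompute per-compaction write amplification from rocksdb
    EVENT_LOG_v1 lines (format assumed to match the mon log above)."""
    file_sizes = {}   # sst file_number -> file_size
    started = {}      # job id -> compaction_started event
    results = []
    with open(path) as fh:
        for line in fh:
            m = EVENT_RE.search(line)
            if not m:
                continue
            ev = json.loads(m.group(1))
            kind = ev.get('event')
            if kind == 'table_file_creation':
                file_sizes[ev['file_number']] = ev['file_size']
            elif kind == 'compaction_started':
                started[ev['job']] = ev
            elif kind == 'compaction_finished' and ev['job'] in started:
                begin = started[ev['job']]
                new_bytes = sum(file_sizes.get(f, 0) for f in begin['files_L0'])
                if new_bytes:
                    wa = ev['total_output_size'] / new_bytes
                    rwa = (begin['input_data_size']
                           + ev['total_output_size']) / new_bytes
                    results.append((ev['job'], round(wa, 1), round(rwa, 1)))
    return results

# Job 36 above: 13905645 / 4262073 = 3.3 and
# (15907148 + 13905645) / 4262073 = 7.0 -- matching the logged factors.
```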
Feb  2 05:15:54 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:15:54 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:15:54 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:15:54.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:15:54 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:15:54 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:15:54 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:15:54.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:15:54 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1096: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:15:54 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:15:54] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Feb  2 05:15:54 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:15:54] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Feb  2 05:15:55 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb  2 05:15:55 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2532913621' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Feb  2 05:15:55 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb  2 05:15:55 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2532913621' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
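[editor note] The two audited commands above come from client.openstack at 192.168.122.10, most likely an OpenStack storage service polling capacity (the entity name and the df / get-quota pair fit Cinder's RBD driver). The same queries via the ceph CLI, sketched below; the JSON field names follow recent Ceph releases and should be treated as assumptions:

```python
import json
import subprocess

def ceph_json(*args):
    """Run a ceph CLI command with --format json and parse the output."""
    out = subprocess.run(
        ['ceph', *args, '--format', 'json'],
        capture_output=True, text=True, check=True,
    ).stdout
    return json.loads(out)

df = ceph_json('df')
quota = ceph_json('osd', 'pool', 'get-quota', 'volumes')
# 'total_avail_bytes' matches recent Ceph releases; verify on your version.
print(df['stats']['total_avail_bytes'], 'bytes available cluster-wide')
print(quota)
```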
Feb  2 05:15:55 np0005604790 nova_compute[252672]: 2026-02-02 10:15:55.812 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:15:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:15:55 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:15:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:15:55 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:15:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:15:55 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:15:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:15:56 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:15:56 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:15:56 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:15:56 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:15:56.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:15:56 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:15:56 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:15:56 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:15:56.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:15:56 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1097: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:15:56 np0005604790 nova_compute[252672]: 2026-02-02 10:15:56.956 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:15:57 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:15:57.199Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:15:57 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:15:58 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:15:58 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000025s ======
Feb  2 05:15:58 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:15:58.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Feb  2 05:15:58 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:15:58 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:15:58 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:15:58.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:15:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:15:58.852Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:15:58 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1098: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:16:00 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:16:00 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000025s ======
Feb  2 05:16:00 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:16:00.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Feb  2 05:16:00 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:16:00 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:16:00 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:16:00.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:16:00 np0005604790 nova_compute[252672]: 2026-02-02 10:16:00.815 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:16:00 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1099: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:16:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:16:00 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:16:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:16:00 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:16:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:16:00 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:16:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:16:01 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:16:01 np0005604790 nova_compute[252672]: 2026-02-02 10:16:01.960 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:16:02 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:16:02 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:16:02 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:16:02.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:16:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 05:16:02 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 05:16:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:16:02 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:16:02 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:16:02 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:16:02.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:16:02 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1100: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:16:04 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:16:04 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:16:04 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:16:04.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:16:04 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:16:04 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:16:04 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:16:04.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:16:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:16:04] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Feb  2 05:16:04 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:16:04] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Feb  2 05:16:04 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1101: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:16:05 np0005604790 nova_compute[252672]: 2026-02-02 10:16:05.818 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:16:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:16:05 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:16:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:16:05 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:16:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:16:05 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:16:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:16:06 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:16:06 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:16:06 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:16:06 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:16:06.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:16:06 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:16:06 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:16:06 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:16:06.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:16:06 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1102: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:16:06 np0005604790 nova_compute[252672]: 2026-02-02 10:16:06.962 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:16:07 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:16:07.200Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb  2 05:16:07 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:16:07.201Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb  2 05:16:07 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:16:08 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:16:08 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:16:08 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:16:08.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:16:08 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:16:08 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:16:08 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:16:08.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:16:08 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:16:08.853Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:16:08 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1103: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:16:09 np0005604790 podman[282431]: 2026-02-02 10:16:09.411386763 +0000 UTC m=+0.129519712 container health_status e39122d9482da8df802204ab6c35fb7c982874580968140e6f06bdfc8eefae36 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb  2 05:16:10 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:16:10 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:16:10 np0005604790 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb  2 05:16:10 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:16:10.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:16:10 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:16:10 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000025s ======
Feb  2 05:16:10 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:16:10.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Feb  2 05:16:10 np0005604790 nova_compute[252672]: 2026-02-02 10:16:10.822 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:16:10 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1104: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:16:11 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:16:10 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:16:11 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:16:10 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:16:11 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:16:10 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:16:11 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:16:11 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:16:11 np0005604790 nova_compute[252672]: 2026-02-02 10:16:11.963 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:16:12 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:16:12 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:16:12 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:16:12.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:16:12 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:16:12 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:16:12 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:16:12 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:16:12.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:16:12 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1105: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:16:13 np0005604790 nova_compute[252672]: 2026-02-02 10:16:13.281 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:16:13 np0005604790 nova_compute[252672]: 2026-02-02 10:16:13.325 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 05:16:13 np0005604790 nova_compute[252672]: 2026-02-02 10:16:13.326 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 05:16:13 np0005604790 nova_compute[252672]: 2026-02-02 10:16:13.326 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 05:16:13 np0005604790 nova_compute[252672]: 2026-02-02 10:16:13.326 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb  2 05:16:13 np0005604790 nova_compute[252672]: 2026-02-02 10:16:13.326 252676 DEBUG oslo_concurrency.processutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb  2 05:16:13 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 05:16:13 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2865253705' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb  2 05:16:13 np0005604790 nova_compute[252672]: 2026-02-02 10:16:13.769 252676 DEBUG oslo_concurrency.processutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 05:16:13 np0005604790 nova_compute[252672]: 2026-02-02 10:16:13.933 252676 WARNING nova.virt.libvirt.driver [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb  2 05:16:13 np0005604790 nova_compute[252672]: 2026-02-02 10:16:13.935 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4495MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb  2 05:16:13 np0005604790 nova_compute[252672]: 2026-02-02 10:16:13.935 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 05:16:13 np0005604790 nova_compute[252672]: 2026-02-02 10:16:13.936 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 05:16:14 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:16:14 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:16:14 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:16:14.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:16:14 np0005604790 nova_compute[252672]: 2026-02-02 10:16:14.249 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb  2 05:16:14 np0005604790 nova_compute[252672]: 2026-02-02 10:16:14.250 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb  2 05:16:14 np0005604790 nova_compute[252672]: 2026-02-02 10:16:14.271 252676 DEBUG oslo_concurrency.processutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb  2 05:16:14 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 05:16:14 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3044454252' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb  2 05:16:14 np0005604790 nova_compute[252672]: 2026-02-02 10:16:14.750 252676 DEBUG oslo_concurrency.processutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 05:16:14 np0005604790 nova_compute[252672]: 2026-02-02 10:16:14.755 252676 DEBUG nova.compute.provider_tree [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Inventory has not changed in ProviderTree for provider: 9e3db6fc-2145-4f13-bc7c-f7ae57d4e004 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb  2 05:16:14 np0005604790 nova_compute[252672]: 2026-02-02 10:16:14.771 252676 DEBUG nova.scheduler.client.report [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Inventory has not changed for provider 9e3db6fc-2145-4f13-bc7c-f7ae57d4e004 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb  2 05:16:14 np0005604790 nova_compute[252672]: 2026-02-02 10:16:14.773 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb  2 05:16:14 np0005604790 nova_compute[252672]: 2026-02-02 10:16:14.774 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.838s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 05:16:14 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:16:14 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:16:14 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:16:14.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:16:14 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:16:14] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Feb  2 05:16:14 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:16:14] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Feb  2 05:16:14 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1106: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:16:15 np0005604790 nova_compute[252672]: 2026-02-02 10:16:15.774 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:16:15 np0005604790 nova_compute[252672]: 2026-02-02 10:16:15.825 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:16:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:16:16 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:16:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:16:16 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:16:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:16:16 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:16:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:16:16 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:16:16 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:16:16 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:16:16 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:16:16.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:16:16 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:16:16 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:16:16 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:16:16.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:16:16 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1107: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:16:16 np0005604790 nova_compute[252672]: 2026-02-02 10:16:16.992 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:16:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] Optimize plan auto_2026-02-02_10:16:17
Feb  2 05:16:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 05:16:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] do_upmap
Feb  2 05:16:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] pools ['volumes', '.nfs', 'vms', '.mgr', 'default.rgw.meta', '.rgw.root', 'default.rgw.control', 'default.rgw.log', 'images', 'cephfs.cephfs.meta', 'backups', 'cephfs.cephfs.data']
Feb  2 05:16:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 05:16:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:16:17.202Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:16:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 05:16:17 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 05:16:17 np0005604790 nova_compute[252672]: 2026-02-02 10:16:17.281 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:16:17 np0005604790 nova_compute[252672]: 2026-02-02 10:16:17.282 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:16:17 np0005604790 nova_compute[252672]: 2026-02-02 10:16:17.282 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb  2 05:16:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:16:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:16:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:16:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:16:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:16:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:16:17 np0005604790 podman[282510]: 2026-02-02 10:16:17.335366496 +0000 UTC m=+0.052177067 container health_status 29bea7eb4976451e3dfdb1e9a3aba30b10e64cbce131bf8d09548b4a224f8ee8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Feb  2 05:16:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 05:16:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 05:16:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 05:16:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 05:16:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 05:16:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:16:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 05:16:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:16:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 05:16:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:16:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb  2 05:16:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:16:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:16:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:16:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:16:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:16:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Feb  2 05:16:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:16:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Feb  2 05:16:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:16:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:16:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:16:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 05:16:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:16:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Feb  2 05:16:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:16:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:16:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:16:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 05:16:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:16:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb  2 05:16:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 05:16:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 05:16:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 05:16:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 05:16:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 05:16:18 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:16:18 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:16:18 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:16:18.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:16:18 np0005604790 nova_compute[252672]: 2026-02-02 10:16:18.282 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:16:18 np0005604790 nova_compute[252672]: 2026-02-02 10:16:18.283 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb  2 05:16:18 np0005604790 nova_compute[252672]: 2026-02-02 10:16:18.283 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb  2 05:16:18 np0005604790 nova_compute[252672]: 2026-02-02 10:16:18.316 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb  2 05:16:18 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:16:18 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:16:18 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:16:18.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:16:18 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:16:18.855Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb  2 05:16:18 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:16:18.855Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb  2 05:16:18 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1108: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:16:19 np0005604790 nova_compute[252672]: 2026-02-02 10:16:19.282 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:16:19 np0005604790 nova_compute[252672]: 2026-02-02 10:16:19.284 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:16:20 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:16:20 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:16:20 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:16:20.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:16:20 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:16:20 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:16:20 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:16:20.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:16:20 np0005604790 nova_compute[252672]: 2026-02-02 10:16:20.828 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:16:20 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1109: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:16:21 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:16:20 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:16:21 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:16:20 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:16:21 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:16:20 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:16:21 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:16:21 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:16:22 np0005604790 nova_compute[252672]: 2026-02-02 10:16:22.033 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:16:22 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:16:22 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:16:22 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:16:22.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:16:22 np0005604790 nova_compute[252672]: 2026-02-02 10:16:22.279 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:16:22 np0005604790 podman[282683]: 2026-02-02 10:16:22.57450128 +0000 UTC m=+0.062714528 container exec 79ef7165b184aa21ab9e464efe33891b6304e1ba848414549f299d1b301d6783 (image=quay.io/ceph/ceph:v19, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mon-compute-0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 05:16:22 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:16:22 np0005604790 podman[282683]: 2026-02-02 10:16:22.68266125 +0000 UTC m=+0.170874488 container exec_died 79ef7165b184aa21ab9e464efe33891b6304e1ba848414549f299d1b301d6783 (image=quay.io/ceph/ceph:v19, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 05:16:22 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:16:22 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:16:22 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:16:22.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:16:22 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1110: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:16:23 np0005604790 podman[282818]: 2026-02-02 10:16:23.187033491 +0000 UTC m=+0.059952168 container exec 19feecaa7fcd517b2bfb973fc8fcf623ad4b0956f16c53292d24fe12f4780190 (image=quay.io/ceph/haproxy:2.3, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-rgw-default-compute-0-avekxu)
Feb  2 05:16:23 np0005604790 podman[282818]: 2026-02-02 10:16:23.19979543 +0000 UTC m=+0.072714027 container exec_died 19feecaa7fcd517b2bfb973fc8fcf623ad4b0956f16c53292d24fe12f4780190 (image=quay.io/ceph/haproxy:2.3, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-haproxy-rgw-default-compute-0-avekxu)
Feb  2 05:16:23 np0005604790 podman[282884]: 2026-02-02 10:16:23.414621092 +0000 UTC m=+0.066203359 container exec 690ade5beb0aa03a99cf3b4b1da5291c57988034f4a9506f786da5a6fb824998 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 05:16:23 np0005604790 podman[282884]: 2026-02-02 10:16:23.425858552 +0000 UTC m=+0.077440739 container exec_died 690ade5beb0aa03a99cf3b4b1da5291c57988034f4a9506f786da5a6fb824998 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 05:16:23 np0005604790 podman[282958]: 2026-02-02 10:16:23.741256867 +0000 UTC m=+0.074015430 container exec 47ffd521b52a4e817e05a876d9da3b01cdfdfeae11aa3098649e39241d4ffff9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 05:16:23 np0005604790 podman[282958]: 2026-02-02 10:16:23.756053419 +0000 UTC m=+0.088811992 container exec_died 47ffd521b52a4e817e05a876d9da3b01cdfdfeae11aa3098649e39241d4ffff9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Feb  2 05:16:24 np0005604790 podman[283044]: 2026-02-02 10:16:24.080893249 +0000 UTC m=+0.057135605 container exec 5f2bbf7994e4a92479e9d1b19dfd01c3876abee624a9d2019b778a31c38bb173 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-keepalived-nfs-cephfs-compute-0-pqolko, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=keepalived for Ceph, release=1793, name=keepalived, build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.expose-services=, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, vcs-type=git, version=2.2.4, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public)
Feb  2 05:16:24 np0005604790 podman[283044]: 2026-02-02 10:16:24.093931885 +0000 UTC m=+0.070174261 container exec_died 5f2bbf7994e4a92479e9d1b19dfd01c3876abee624a9d2019b778a31c38bb173 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-keepalived-nfs-cephfs-compute-0-pqolko, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, version=2.2.4, name=keepalived, io.openshift.tags=Ceph keepalived, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., architecture=x86_64, release=1793, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, description=keepalived for Ceph, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9)
Feb  2 05:16:24 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:16:24 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:16:24 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:16:24.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:16:24 np0005604790 nova_compute[252672]: 2026-02-02 10:16:24.282 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:16:24 np0005604790 podman[283105]: 2026-02-02 10:16:24.307225747 +0000 UTC m=+0.067584244 container exec 63a3970896ab11bd3cbece1b971a05452b0bd8a5b643f0eac52d5b3639ab19c4 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 05:16:24 np0005604790 podman[283105]: 2026-02-02 10:16:24.371176097 +0000 UTC m=+0.131534644 container exec_died 63a3970896ab11bd3cbece1b971a05452b0bd8a5b643f0eac52d5b3639ab19c4 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 05:16:24 np0005604790 podman[283181]: 2026-02-02 10:16:24.571282568 +0000 UTC m=+0.043303267 container exec 207575e5b32cec4e058e0ca4f48bad94c6e447fc9aa765a0aa8117f4604125b2 (image=quay.io/ceph/grafana:10.4.0, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Feb  2 05:16:24 np0005604790 podman[283181]: 2026-02-02 10:16:24.768574297 +0000 UTC m=+0.240595066 container exec_died 207575e5b32cec4e058e0ca4f48bad94c6e447fc9aa765a0aa8117f4604125b2 (image=quay.io/ceph/grafana:10.4.0, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Feb  2 05:16:24 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:16:24 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:16:24 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:16:24.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:16:24 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:16:24] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Feb  2 05:16:24 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:16:24] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Feb  2 05:16:24 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1111: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:16:25 np0005604790 podman[283275]: 2026-02-02 10:16:25.084969729 +0000 UTC m=+0.069822703 container exec 214561532da1ee185d9ab0bf03f5b2d46320266c11cddad43265e8842ed9d667 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 05:16:25 np0005604790 podman[283275]: 2026-02-02 10:16:25.135528293 +0000 UTC m=+0.120381247 container exec_died 214561532da1ee185d9ab0bf03f5b2d46320266c11cddad43265e8842ed9d667 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb  2 05:16:25 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 05:16:25 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:16:25 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 05:16:25 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:16:25 np0005604790 nova_compute[252672]: 2026-02-02 10:16:25.832 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:16:25 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 05:16:25 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 05:16:25 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 05:16:25 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb  2 05:16:25 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1112: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 556 B/s rd, 0 op/s
Feb  2 05:16:25 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1113: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 679 B/s rd, 0 op/s
Feb  2 05:16:25 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 05:16:25 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:16:25 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Feb  2 05:16:25 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:16:25 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 05:16:25 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb  2 05:16:25 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 05:16:25 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb  2 05:16:25 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 05:16:25 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 05:16:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:16:25 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:16:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:16:25 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:16:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:16:25 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:16:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:16:26 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:16:26 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:16:26 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:16:26 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:16:26.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:16:26 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:16:26 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:16:26 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb  2 05:16:26 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:16:26 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:16:26 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb  2 05:16:26 np0005604790 podman[283494]: 2026-02-02 10:16:26.47287756 +0000 UTC m=+0.050163995 container create 313eee9d47a90421a170149a94a9dfb8a6f0214001be879e3b7af18e3a9e3400 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_ishizaka, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 05:16:26 np0005604790 systemd[1]: Started libpod-conmon-313eee9d47a90421a170149a94a9dfb8a6f0214001be879e3b7af18e3a9e3400.scope.
Feb  2 05:16:26 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:16:26 np0005604790 podman[283494]: 2026-02-02 10:16:26.449810585 +0000 UTC m=+0.027097110 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:16:26 np0005604790 podman[283494]: 2026-02-02 10:16:26.568047396 +0000 UTC m=+0.145333861 container init 313eee9d47a90421a170149a94a9dfb8a6f0214001be879e3b7af18e3a9e3400 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_ishizaka, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Feb  2 05:16:26 np0005604790 podman[283494]: 2026-02-02 10:16:26.572549382 +0000 UTC m=+0.149835817 container start 313eee9d47a90421a170149a94a9dfb8a6f0214001be879e3b7af18e3a9e3400 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_ishizaka, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Feb  2 05:16:26 np0005604790 podman[283494]: 2026-02-02 10:16:26.5794548 +0000 UTC m=+0.156741285 container attach 313eee9d47a90421a170149a94a9dfb8a6f0214001be879e3b7af18e3a9e3400 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_ishizaka, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 05:16:26 np0005604790 jolly_ishizaka[283510]: 167 167
Feb  2 05:16:26 np0005604790 systemd[1]: libpod-313eee9d47a90421a170149a94a9dfb8a6f0214001be879e3b7af18e3a9e3400.scope: Deactivated successfully.
Feb  2 05:16:26 np0005604790 conmon[283510]: conmon 313eee9d47a90421a170 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-313eee9d47a90421a170149a94a9dfb8a6f0214001be879e3b7af18e3a9e3400.scope/container/memory.events
Feb  2 05:16:26 np0005604790 podman[283494]: 2026-02-02 10:16:26.582597691 +0000 UTC m=+0.159884156 container died 313eee9d47a90421a170149a94a9dfb8a6f0214001be879e3b7af18e3a9e3400 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_ishizaka, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 05:16:26 np0005604790 systemd[1]: var-lib-containers-storage-overlay-fc322e3ab8c81706fcf54c84074a42fc0e3a4090c1e803cbb263d07000015cd9-merged.mount: Deactivated successfully.
Feb  2 05:16:26 np0005604790 podman[283494]: 2026-02-02 10:16:26.761627269 +0000 UTC m=+0.338913744 container remove 313eee9d47a90421a170149a94a9dfb8a6f0214001be879e3b7af18e3a9e3400 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb  2 05:16:26 np0005604790 systemd[1]: libpod-conmon-313eee9d47a90421a170149a94a9dfb8a6f0214001be879e3b7af18e3a9e3400.scope: Deactivated successfully.
Feb  2 05:16:26 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:16:26 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:16:26 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:16:26.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
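The radosgw "beast:" access lines above (and repeated throughout this capture) pack the request handle, client IP, user, timestamp, request line, HTTP status, byte count, and latency into a single record. A minimal, illustrative parser for that layout — the field order is assumed from the samples in this log, not taken from radosgw documentation, and the syslog prefix is assumed to be stripped already:

    import re
    from typing import Optional

    # Field layout inferred from the beast access lines in this log.
    BEAST_RE = re.compile(
        r'beast: (?P<req>0x[0-9a-f]+): (?P<client>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<request>[^"]+)" (?P<status>\d+) (?P<size>\d+)'
        r'.* latency=(?P<latency>[0-9.]+)s'
    )

    def parse_beast(line: str) -> Optional[dict]:
        """Return the access-log fields as a dict, or None if the line does not match."""
        m = BEAST_RE.search(line)
        return m.groupdict() if m else None

    sample = ('beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous '
              '[02/Feb/2026:10:16:26.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
              'latency=0.000000000s')
    print(parse_beast(sample))
    # {'req': '0x7f123bf7e5d0', 'client': '192.168.122.100', 'user': 'anonymous', ...}

The repeated anonymous HEAD / probes from 192.168.122.100 and .102 every two seconds are consistent with a load-balancer health check rather than user traffic.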
Feb  2 05:16:26 np0005604790 podman[283534]: 2026-02-02 10:16:26.921443272 +0000 UTC m=+0.054263851 container create ba5d626f428174d021b1edb8542bf92df4032882963bc733b0b2898a63d3913a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_feistel, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Feb  2 05:16:26 np0005604790 systemd[1]: Started libpod-conmon-ba5d626f428174d021b1edb8542bf92df4032882963bc733b0b2898a63d3913a.scope.
Feb  2 05:16:26 np0005604790 podman[283534]: 2026-02-02 10:16:26.897171985 +0000 UTC m=+0.029992674 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:16:27 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:16:27 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b17cf464a888807569a3cc0a6b63bf7224a827d9ee4961dddbadd7e3dc3d56d9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 05:16:27 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b17cf464a888807569a3cc0a6b63bf7224a827d9ee4961dddbadd7e3dc3d56d9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 05:16:27 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b17cf464a888807569a3cc0a6b63bf7224a827d9ee4961dddbadd7e3dc3d56d9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 05:16:27 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b17cf464a888807569a3cc0a6b63bf7224a827d9ee4961dddbadd7e3dc3d56d9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 05:16:27 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b17cf464a888807569a3cc0a6b63bf7224a827d9ee4961dddbadd7e3dc3d56d9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 05:16:27 np0005604790 podman[283534]: 2026-02-02 10:16:27.034223471 +0000 UTC m=+0.167044090 container init ba5d626f428174d021b1edb8542bf92df4032882963bc733b0b2898a63d3913a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_feistel, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 05:16:27 np0005604790 nova_compute[252672]: 2026-02-02 10:16:27.077 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:16:27 np0005604790 podman[283534]: 2026-02-02 10:16:27.080933125 +0000 UTC m=+0.213753704 container start ba5d626f428174d021b1edb8542bf92df4032882963bc733b0b2898a63d3913a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 05:16:27 np0005604790 podman[283534]: 2026-02-02 10:16:27.08423718 +0000 UTC m=+0.217057809 container attach ba5d626f428174d021b1edb8542bf92df4032882963bc733b0b2898a63d3913a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_feistel, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 05:16:27 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:16:27.204Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:16:27 np0005604790 upbeat_feistel[283551]: --> passed data devices: 0 physical, 1 LVM
Feb  2 05:16:27 np0005604790 upbeat_feistel[283551]: --> All data devices are unavailable
Feb  2 05:16:27 np0005604790 systemd[1]: libpod-ba5d626f428174d021b1edb8542bf92df4032882963bc733b0b2898a63d3913a.scope: Deactivated successfully.
Feb  2 05:16:27 np0005604790 podman[283534]: 2026-02-02 10:16:27.439961557 +0000 UTC m=+0.572782196 container died ba5d626f428174d021b1edb8542bf92df4032882963bc733b0b2898a63d3913a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_feistel, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 05:16:27 np0005604790 systemd[1]: var-lib-containers-storage-overlay-b17cf464a888807569a3cc0a6b63bf7224a827d9ee4961dddbadd7e3dc3d56d9-merged.mount: Deactivated successfully.
Feb  2 05:16:27 np0005604790 podman[283534]: 2026-02-02 10:16:27.483619643 +0000 UTC m=+0.616440222 container remove ba5d626f428174d021b1edb8542bf92df4032882963bc733b0b2898a63d3913a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_feistel, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 05:16:27 np0005604790 systemd[1]: libpod-conmon-ba5d626f428174d021b1edb8542bf92df4032882963bc733b0b2898a63d3913a.scope: Deactivated successfully.
Feb  2 05:16:27 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:16:27 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1114: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 339 B/s rd, 0 op/s
Feb  2 05:16:28 np0005604790 podman[283675]: 2026-02-02 10:16:28.104187581 +0000 UTC m=+0.058989173 container create 86b4910a67b0f1c4244902ee69f2caa3acb3a4f05a6d3456f575947c9ab5f620 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_feynman, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 05:16:28 np0005604790 systemd[1]: Started libpod-conmon-86b4910a67b0f1c4244902ee69f2caa3acb3a4f05a6d3456f575947c9ab5f620.scope.
Feb  2 05:16:28 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:16:28 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:16:28 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:16:28.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:16:28 np0005604790 podman[283675]: 2026-02-02 10:16:28.078885278 +0000 UTC m=+0.033686930 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:16:28 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:16:28 np0005604790 podman[283675]: 2026-02-02 10:16:28.200541136 +0000 UTC m=+0.155342768 container init 86b4910a67b0f1c4244902ee69f2caa3acb3a4f05a6d3456f575947c9ab5f620 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Feb  2 05:16:28 np0005604790 podman[283675]: 2026-02-02 10:16:28.208024349 +0000 UTC m=+0.162825951 container start 86b4910a67b0f1c4244902ee69f2caa3acb3a4f05a6d3456f575947c9ab5f620 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_feynman, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 05:16:28 np0005604790 podman[283675]: 2026-02-02 10:16:28.212831663 +0000 UTC m=+0.167633265 container attach 86b4910a67b0f1c4244902ee69f2caa3acb3a4f05a6d3456f575947c9ab5f620 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 05:16:28 np0005604790 upbeat_feynman[283691]: 167 167
Feb  2 05:16:28 np0005604790 systemd[1]: libpod-86b4910a67b0f1c4244902ee69f2caa3acb3a4f05a6d3456f575947c9ab5f620.scope: Deactivated successfully.
Feb  2 05:16:28 np0005604790 podman[283675]: 2026-02-02 10:16:28.216145508 +0000 UTC m=+0.170947100 container died 86b4910a67b0f1c4244902ee69f2caa3acb3a4f05a6d3456f575947c9ab5f620 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb  2 05:16:28 np0005604790 systemd[1]: var-lib-containers-storage-overlay-57c859a48f1dc3660c05476c7f16a7741fc4cd9318ced2fb72a338a8d3b7eb6e-merged.mount: Deactivated successfully.
Feb  2 05:16:28 np0005604790 podman[283675]: 2026-02-02 10:16:28.264650099 +0000 UTC m=+0.219451711 container remove 86b4910a67b0f1c4244902ee69f2caa3acb3a4f05a6d3456f575947c9ab5f620 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb  2 05:16:28 np0005604790 systemd[1]: libpod-conmon-86b4910a67b0f1c4244902ee69f2caa3acb3a4f05a6d3456f575947c9ab5f620.scope: Deactivated successfully.
Feb  2 05:16:28 np0005604790 nova_compute[252672]: 2026-02-02 10:16:28.278 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:16:28 np0005604790 podman[283714]: 2026-02-02 10:16:28.444225362 +0000 UTC m=+0.053590104 container create e3a506131d20647e57e8eb8e4ee77131f3b9eba62412dbed14ebd2809f683ce2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_poincare, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 05:16:28 np0005604790 systemd[1]: Started libpod-conmon-e3a506131d20647e57e8eb8e4ee77131f3b9eba62412dbed14ebd2809f683ce2.scope.
Feb  2 05:16:28 np0005604790 podman[283714]: 2026-02-02 10:16:28.41934535 +0000 UTC m=+0.028710132 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:16:28 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:16:28 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6eaddcaae65f8bdec4406fe0954aa55566f7941faa97beae7899fa20e6672b97/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 05:16:28 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6eaddcaae65f8bdec4406fe0954aa55566f7941faa97beae7899fa20e6672b97/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 05:16:28 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6eaddcaae65f8bdec4406fe0954aa55566f7941faa97beae7899fa20e6672b97/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 05:16:28 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6eaddcaae65f8bdec4406fe0954aa55566f7941faa97beae7899fa20e6672b97/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 05:16:28 np0005604790 podman[283714]: 2026-02-02 10:16:28.559804933 +0000 UTC m=+0.169169655 container init e3a506131d20647e57e8eb8e4ee77131f3b9eba62412dbed14ebd2809f683ce2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_poincare, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 05:16:28 np0005604790 podman[283714]: 2026-02-02 10:16:28.571793202 +0000 UTC m=+0.181157944 container start e3a506131d20647e57e8eb8e4ee77131f3b9eba62412dbed14ebd2809f683ce2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_poincare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 05:16:28 np0005604790 podman[283714]: 2026-02-02 10:16:28.575880468 +0000 UTC m=+0.185245190 container attach e3a506131d20647e57e8eb8e4ee77131f3b9eba62412dbed14ebd2809f683ce2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_poincare, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Feb  2 05:16:28 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:16:28 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:16:28 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:16:28.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:16:28 np0005604790 interesting_poincare[283730]: {
Feb  2 05:16:28 np0005604790 interesting_poincare[283730]:    "1": [
Feb  2 05:16:28 np0005604790 interesting_poincare[283730]:        {
Feb  2 05:16:28 np0005604790 interesting_poincare[283730]:            "devices": [
Feb  2 05:16:28 np0005604790 interesting_poincare[283730]:                "/dev/loop3"
Feb  2 05:16:28 np0005604790 interesting_poincare[283730]:            ],
Feb  2 05:16:28 np0005604790 interesting_poincare[283730]:            "lv_name": "ceph_lv0",
Feb  2 05:16:28 np0005604790 interesting_poincare[283730]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 05:16:28 np0005604790 interesting_poincare[283730]:            "lv_size": "21470642176",
Feb  2 05:16:28 np0005604790 interesting_poincare[283730]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d241d473-9fcb-5f74-b163-f1ca4454e7f1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fabfc705-a3af-416c-81a4-3fd4d777fb5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 05:16:28 np0005604790 interesting_poincare[283730]:            "lv_uuid": "lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a",
Feb  2 05:16:28 np0005604790 interesting_poincare[283730]:            "name": "ceph_lv0",
Feb  2 05:16:28 np0005604790 interesting_poincare[283730]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 05:16:28 np0005604790 interesting_poincare[283730]:            "tags": {
Feb  2 05:16:28 np0005604790 interesting_poincare[283730]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 05:16:28 np0005604790 interesting_poincare[283730]:                "ceph.block_uuid": "lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a",
Feb  2 05:16:28 np0005604790 interesting_poincare[283730]:                "ceph.cephx_lockbox_secret": "",
Feb  2 05:16:28 np0005604790 interesting_poincare[283730]:                "ceph.cluster_fsid": "d241d473-9fcb-5f74-b163-f1ca4454e7f1",
Feb  2 05:16:28 np0005604790 interesting_poincare[283730]:                "ceph.cluster_name": "ceph",
Feb  2 05:16:28 np0005604790 interesting_poincare[283730]:                "ceph.crush_device_class": "",
Feb  2 05:16:28 np0005604790 interesting_poincare[283730]:                "ceph.encrypted": "0",
Feb  2 05:16:28 np0005604790 interesting_poincare[283730]:                "ceph.osd_fsid": "fabfc705-a3af-416c-81a4-3fd4d777fb5f",
Feb  2 05:16:28 np0005604790 interesting_poincare[283730]:                "ceph.osd_id": "1",
Feb  2 05:16:28 np0005604790 interesting_poincare[283730]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 05:16:28 np0005604790 interesting_poincare[283730]:                "ceph.type": "block",
Feb  2 05:16:28 np0005604790 interesting_poincare[283730]:                "ceph.vdo": "0",
Feb  2 05:16:28 np0005604790 interesting_poincare[283730]:                "ceph.with_tpm": "0"
Feb  2 05:16:28 np0005604790 interesting_poincare[283730]:            },
Feb  2 05:16:28 np0005604790 interesting_poincare[283730]:            "type": "block",
Feb  2 05:16:28 np0005604790 interesting_poincare[283730]:            "vg_name": "ceph_vg0"
Feb  2 05:16:28 np0005604790 interesting_poincare[283730]:        }
Feb  2 05:16:28 np0005604790 interesting_poincare[283730]:    ]
Feb  2 05:16:28 np0005604790 interesting_poincare[283730]: }
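The JSON block emitted by the interesting_poincare container above is an LVM inventory keyed by OSD id, with the OSD identity duplicated into the lv_tags; the shape matches what `ceph-volume lvm list --format json` prints (an assumption here, since the log does not show the command line). A minimal sketch of pulling the useful fields back out of such a blob — the function name and trimmed sample are illustrative:

    import json

    def osds_from_lvm_list(payload: str):
        """Yield (osd_id, osd_fsid, lv_path, devices) for each LV in the report."""
        report = json.loads(payload)
        for osd_id, lvs in report.items():
            for lv in lvs:
                tags = lv.get("tags", {})
                yield (
                    tags.get("ceph.osd_id", osd_id),
                    tags.get("ceph.osd_fsid"),
                    lv.get("lv_path"),
                    lv.get("devices", []),
                )

    # Trimmed from the output logged above.
    sample = """{
      "1": [
        {
          "devices": ["/dev/loop3"],
          "lv_path": "/dev/ceph_vg0/ceph_lv0",
          "tags": {
            "ceph.osd_id": "1",
            "ceph.osd_fsid": "fabfc705-a3af-416c-81a4-3fd4d777fb5f"
          }
        }
      ]
    }"""
    for row in osds_from_lvm_list(sample):
        print(row)  # ('1', 'fabfc705-a3af-416c-81a4-3fd4d777fb5f', '/dev/ceph_vg0/ceph_lv0', ['/dev/loop3'])

This also explains the short-lived podman containers in this stretch of the log: cephadm launches one throwaway ceph container per probe (create, init, start, attach, died, remove within a second), which is why each named container appears exactly once.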
Feb  2 05:16:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:16:28.857Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb  2 05:16:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:16:28.857Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb  2 05:16:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:16:28.858Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb  2 05:16:28 np0005604790 systemd[1]: libpod-e3a506131d20647e57e8eb8e4ee77131f3b9eba62412dbed14ebd2809f683ce2.scope: Deactivated successfully.
Feb  2 05:16:28 np0005604790 podman[283714]: 2026-02-02 10:16:28.904136575 +0000 UTC m=+0.513501317 container died e3a506131d20647e57e8eb8e4ee77131f3b9eba62412dbed14ebd2809f683ce2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_poincare, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 05:16:28 np0005604790 systemd[1]: var-lib-containers-storage-overlay-6eaddcaae65f8bdec4406fe0954aa55566f7941faa97beae7899fa20e6672b97-merged.mount: Deactivated successfully.
Feb  2 05:16:28 np0005604790 podman[283714]: 2026-02-02 10:16:28.959915944 +0000 UTC m=+0.569280646 container remove e3a506131d20647e57e8eb8e4ee77131f3b9eba62412dbed14ebd2809f683ce2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_poincare, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 05:16:28 np0005604790 systemd[1]: libpod-conmon-e3a506131d20647e57e8eb8e4ee77131f3b9eba62412dbed14ebd2809f683ce2.scope: Deactivated successfully.
Feb  2 05:16:29 np0005604790 podman[283842]: 2026-02-02 10:16:29.650853487 +0000 UTC m=+0.046236123 container create 0367da9f6e42f70aeabf6470e2f6862809a9439b9e44b56075809464eda4f949 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_hypatia, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb  2 05:16:29 np0005604790 systemd[1]: Started libpod-conmon-0367da9f6e42f70aeabf6470e2f6862809a9439b9e44b56075809464eda4f949.scope.
Feb  2 05:16:29 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:16:29 np0005604790 podman[283842]: 2026-02-02 10:16:29.634430014 +0000 UTC m=+0.029812670 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:16:29 np0005604790 podman[283842]: 2026-02-02 10:16:29.737969775 +0000 UTC m=+0.133352491 container init 0367da9f6e42f70aeabf6470e2f6862809a9439b9e44b56075809464eda4f949 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_hypatia, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb  2 05:16:29 np0005604790 podman[283842]: 2026-02-02 10:16:29.744666147 +0000 UTC m=+0.140048823 container start 0367da9f6e42f70aeabf6470e2f6862809a9439b9e44b56075809464eda4f949 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_hypatia, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb  2 05:16:29 np0005604790 podman[283842]: 2026-02-02 10:16:29.749338718 +0000 UTC m=+0.144721444 container attach 0367da9f6e42f70aeabf6470e2f6862809a9439b9e44b56075809464eda4f949 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_hypatia, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 05:16:29 np0005604790 flamboyant_hypatia[283860]: 167 167
Feb  2 05:16:29 np0005604790 systemd[1]: libpod-0367da9f6e42f70aeabf6470e2f6862809a9439b9e44b56075809464eda4f949.scope: Deactivated successfully.
Feb  2 05:16:29 np0005604790 podman[283842]: 2026-02-02 10:16:29.75214499 +0000 UTC m=+0.147527666 container died 0367da9f6e42f70aeabf6470e2f6862809a9439b9e44b56075809464eda4f949 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_hypatia, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb  2 05:16:29 np0005604790 systemd[1]: var-lib-containers-storage-overlay-1ab0cb9d543df47b5aa78e36af4e257f0c26a46cc11cb0260898f362f454cbe4-merged.mount: Deactivated successfully.
Feb  2 05:16:29 np0005604790 podman[283842]: 2026-02-02 10:16:29.804637514 +0000 UTC m=+0.200020190 container remove 0367da9f6e42f70aeabf6470e2f6862809a9439b9e44b56075809464eda4f949 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb  2 05:16:29 np0005604790 systemd[1]: libpod-conmon-0367da9f6e42f70aeabf6470e2f6862809a9439b9e44b56075809464eda4f949.scope: Deactivated successfully.
Feb  2 05:16:29 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1115: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 679 B/s rd, 0 op/s
Feb  2 05:16:29 np0005604790 podman[283886]: 2026-02-02 10:16:29.981845836 +0000 UTC m=+0.063861159 container create b3869085225460d8979de1a880e0069c508cac374a792c007b018243447417c0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_lewin, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 05:16:30 np0005604790 systemd[1]: Started libpod-conmon-b3869085225460d8979de1a880e0069c508cac374a792c007b018243447417c0.scope.
Feb  2 05:16:30 np0005604790 podman[283886]: 2026-02-02 10:16:29.950948049 +0000 UTC m=+0.032963422 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:16:30 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:16:30 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62c84e1bc771b73fe2d3b82a99d32437176fc6e85ee63e137d2881c8052f9e9e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 05:16:30 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62c84e1bc771b73fe2d3b82a99d32437176fc6e85ee63e137d2881c8052f9e9e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 05:16:30 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62c84e1bc771b73fe2d3b82a99d32437176fc6e85ee63e137d2881c8052f9e9e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 05:16:30 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62c84e1bc771b73fe2d3b82a99d32437176fc6e85ee63e137d2881c8052f9e9e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 05:16:30 np0005604790 podman[283886]: 2026-02-02 10:16:30.081290821 +0000 UTC m=+0.163306114 container init b3869085225460d8979de1a880e0069c508cac374a792c007b018243447417c0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_lewin, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb  2 05:16:30 np0005604790 podman[283886]: 2026-02-02 10:16:30.096580715 +0000 UTC m=+0.178595988 container start b3869085225460d8979de1a880e0069c508cac374a792c007b018243447417c0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_lewin, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 05:16:30 np0005604790 podman[283886]: 2026-02-02 10:16:30.1013997 +0000 UTC m=+0.183415043 container attach b3869085225460d8979de1a880e0069c508cac374a792c007b018243447417c0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Feb  2 05:16:30 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:16:30 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:16:30 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:16:30.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:16:30 np0005604790 lvm[283976]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 05:16:30 np0005604790 lvm[283976]: VG ceph_vg0 finished
Feb  2 05:16:30 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:16:30 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:16:30 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:16:30.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:16:30 np0005604790 nova_compute[252672]: 2026-02-02 10:16:30.835 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:16:30 np0005604790 stupefied_lewin[283902]: {}
Feb  2 05:16:30 np0005604790 systemd[1]: libpod-b3869085225460d8979de1a880e0069c508cac374a792c007b018243447417c0.scope: Deactivated successfully.
Feb  2 05:16:30 np0005604790 systemd[1]: libpod-b3869085225460d8979de1a880e0069c508cac374a792c007b018243447417c0.scope: Consumed 1.292s CPU time.
Feb  2 05:16:30 np0005604790 podman[283886]: 2026-02-02 10:16:30.867260416 +0000 UTC m=+0.949275709 container died b3869085225460d8979de1a880e0069c508cac374a792c007b018243447417c0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_lewin, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 05:16:30 np0005604790 systemd[1]: var-lib-containers-storage-overlay-62c84e1bc771b73fe2d3b82a99d32437176fc6e85ee63e137d2881c8052f9e9e-merged.mount: Deactivated successfully.
Feb  2 05:16:30 np0005604790 podman[283886]: 2026-02-02 10:16:30.908386806 +0000 UTC m=+0.990402099 container remove b3869085225460d8979de1a880e0069c508cac374a792c007b018243447417c0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Feb  2 05:16:30 np0005604790 systemd[1]: libpod-conmon-b3869085225460d8979de1a880e0069c508cac374a792c007b018243447417c0.scope: Deactivated successfully.
Feb  2 05:16:30 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 05:16:30 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:16:30 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 05:16:30 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:16:31 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:16:30 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:16:31 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:16:31 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:16:31 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:16:31 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:16:31 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:16:31 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:16:31 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:16:31 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:16:31 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1116: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 679 B/s rd, 0 op/s
Feb  2 05:16:32 np0005604790 nova_compute[252672]: 2026-02-02 10:16:32.080 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:16:32 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:16:32 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:16:32 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:16:32.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:16:32 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 05:16:32 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 05:16:32 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:16:32 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:16:32 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000025s ======
Feb  2 05:16:32 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:16:32.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Feb  2 05:16:33 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1117: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 679 B/s rd, 0 op/s
Feb  2 05:16:34 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:16:34 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:16:34 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:16:34.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:16:34 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:16:34 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:16:34 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:16:34.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:16:34 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:16:34] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Feb  2 05:16:34 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:16:34] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Feb  2 05:16:35 np0005604790 nova_compute[252672]: 2026-02-02 10:16:35.863 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:16:35 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1118: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 0 op/s
Feb  2 05:16:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:16:36 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:16:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:16:36 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:16:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:16:36 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:16:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:16:36 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:16:36 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:16:36 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:16:36 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:16:36.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:16:36 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:16:36 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:16:36 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:16:36.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:16:37 np0005604790 nova_compute[252672]: 2026-02-02 10:16:37.122 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:16:37 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:16:37.204Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
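[annotation] The Alertmanager error above shows both ceph-dashboard webhook receivers failing: POSTs to compute-1 and compute-2 on port 8443 are retried twice and abandoned on "context deadline exceeded" (later occurrences in this section also show "dial tcp ... i/o timeout"), so the receivers are not answering within the notifier's deadline at all. A hand probe of one receiver, sketched below; the URL is copied verbatim from the log and the 5-second timeout is an arbitrary test value. Note the scheme is plain http:// toward 8443, a port Ceph dashboard deployments usually serve with TLS, so the scheme/port pairing is itself worth checking:

    import urllib.request, urllib.error

    # URL copied from the Alertmanager error line above.
    URL = "http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver"

    try:
        req = urllib.request.Request(
            URL, data=b"{}", headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req, timeout=5) as resp:
            print(resp.status, resp.reason)
    except urllib.error.URLError as exc:
        print("unreachable:", exc.reason)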
Feb  2 05:16:37 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:16:37 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1119: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:16:38 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:16:38 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:16:38 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:16:38.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:16:38 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:16:38 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:16:38 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:16:38.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:16:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:16:38.858Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:16:39 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1120: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:16:40 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:16:40 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000025s ======
Feb  2 05:16:40 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:16:40.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Feb  2 05:16:40 np0005604790 podman[284027]: 2026-02-02 10:16:40.398666752 +0000 UTC m=+0.118076187 container health_status e39122d9482da8df802204ab6c35fb7c982874580968140e6f06bdfc8eefae36 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
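[annotation] The podman line above is a health_status event for the ovn_controller container: the healthcheck mounted at /openstack/healthcheck ran, reported healthy, and the failing streak is 0; everything else in the parentheses is the container's labels and config_data echoed back. A small sketch for pulling the few fields that matter out of such event lines, assuming the key=value layout shown:

    import re

    def health_fields(line,
                      keys=("name", "health_status", "health_failing_streak")):
        """Extract selected key=value pairs from a podman event line."""
        out = {}
        for key in keys:
            m = re.search(rf"\b{re.escape(key)}=([^,)]+)", line)
            if m:
                out[key] = m.group(1)
        return out

    # On the ovn_controller line above this yields:
    #   {'name': 'ovn_controller', 'health_status': 'healthy',
    #    'health_failing_streak': '0'}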
Feb  2 05:16:40 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:16:40 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:16:40 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:16:40.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:16:40 np0005604790 nova_compute[252672]: 2026-02-02 10:16:40.865 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:16:41 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:16:40 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:16:41 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:16:40 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:16:41 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:16:40 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:16:41 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:16:41 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:16:41 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1121: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:16:42 np0005604790 nova_compute[252672]: 2026-02-02 10:16:42.174 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:16:42 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:16:42 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:16:42 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:16:42.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:16:42 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:16:42 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:16:42 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000025s ======
Feb  2 05:16:42 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:16:42.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Feb  2 05:16:43 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1122: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:16:44 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:16:44 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:16:44 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:16:44.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:16:44 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:16:44 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:16:44 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:16:44.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:16:44 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:16:44] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Feb  2 05:16:44 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:16:44] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
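[annotation] The paired lines above are one Prometheus scrape of the ceph-mgr prometheus module, logged twice: once by the mgr container and once by cherrypy inside ceph-mgr. Each scrape returns roughly 48 kB of metrics and repeats every ten seconds. To fetch the same payload by hand, a sketch below; the log records only the scraping client, so the listening port is an assumption (9283 is the module's usual default):

    import urllib.request

    # Port 9283 is assumed (ceph-mgr prometheus module default); the log
    # shows only the client side of the scrape.
    URL = "http://192.168.122.100:9283/metrics"

    with urllib.request.urlopen(URL, timeout=5) as resp:
        body = resp.read()
    print(len(body), "bytes")             # the log shows ~48457 per scrape
    print(body.decode().splitlines()[0])  # first metric line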
Feb  2 05:16:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:16:45.390 165364 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 05:16:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:16:45.390 165364 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 05:16:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:16:45.390 165364 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 05:16:45 np0005604790 nova_compute[252672]: 2026-02-02 10:16:45.869 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:16:45 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1123: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 510 B/s rd, 0 op/s
Feb  2 05:16:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:16:45 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:16:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:16:45 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:16:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:16:45 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:16:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:16:46 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:16:46 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:16:46 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:16:46 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:16:46.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:16:46 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:16:46 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:16:46 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:16:46.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:16:47 np0005604790 nova_compute[252672]: 2026-02-02 10:16:47.177 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:16:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 05:16:47 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:16:47.205Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:16:47 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
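[annotation] The handle_command/audit pair above is the mgr (mgr.compute-0.djvyfo) polling the mon for the OSD blocklist; the same query recurs on a 15-second cycle in this section (10:16:47, 10:17:02, 10:17:17). The identical query can be issued by hand through the CLI; a sketch, assuming a client keyring with mon read capability is available to the caller:

    import json, subprocess

    # Same command the mgr dispatches in the audit line above.
    out = subprocess.run(
        ["ceph", "osd", "blocklist", "ls", "--format", "json"],
        capture_output=True, text=True, check=True).stdout
    entries = json.loads(out)
    print(len(entries), "blocklist entries")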
Feb  2 05:16:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:16:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:16:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:16:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:16:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:16:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:16:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:16:47 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1124: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 510 B/s rd, 0 op/s
Feb  2 05:16:48 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:16:48 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:16:48 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:16:48.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:16:48 np0005604790 podman[284087]: 2026-02-02 10:16:48.385663999 +0000 UTC m=+0.093853612 container health_status 29bea7eb4976451e3dfdb1e9a3aba30b10e64cbce131bf8d09548b4a224f8ee8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Feb  2 05:16:48 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:16:48 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000025s ======
Feb  2 05:16:48 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:16:48.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Feb  2 05:16:48 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:16:48.859Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:16:49 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1125: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 765 B/s rd, 0 op/s
Feb  2 05:16:50 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:16:50 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:16:50 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:16:50.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:16:50 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:16:50 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:16:50 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:16:50.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:16:50 np0005604790 nova_compute[252672]: 2026-02-02 10:16:50.872 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:16:51 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:16:50 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:16:51 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:16:50 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:16:51 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:16:50 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:16:51 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:16:51 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:16:51 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1126: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 510 B/s rd, 0 op/s
Feb  2 05:16:52 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:16:52 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:16:52 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:16:52.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:16:52 np0005604790 nova_compute[252672]: 2026-02-02 10:16:52.212 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:16:52 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:16:52 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:16:52 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:16:52 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:16:52.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:16:53 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1127: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 765 B/s rd, 0 op/s
Feb  2 05:16:54 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:16:54 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:16:54 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:16:54.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:16:54 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:16:54 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:16:54 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:16:54.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:16:54 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:16:54] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Feb  2 05:16:54 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:16:54] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Feb  2 05:16:55 np0005604790 nova_compute[252672]: 2026-02-02 10:16:55.920 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:16:55 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1128: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 510 B/s rd, 0 op/s
Feb  2 05:16:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:16:55 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:16:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:16:55 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:16:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:16:55 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:16:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:16:56 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:16:56 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:16:56 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:16:56 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:16:56.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:16:56 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:16:56 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000025s ======
Feb  2 05:16:56 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:16:56.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Feb  2 05:16:57 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:16:57.205Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:16:57 np0005604790 nova_compute[252672]: 2026-02-02 10:16:57.263 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:16:57 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
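[annotation] The _set_new_cache_sizes line recurs every five seconds with identical numbers: the mon retunes its cache against a 1020054731-byte target, split into inc, full, and kv allocations. The three allocations are round mebibyte counts that sum to just under the target, which is easy to sanity-check:

    # Figures copied from the _set_new_cache_sizes line above.
    cache_size = 1020054731
    inc_alloc  = 348127232   # 332 MiB
    full_alloc = 348127232   # 332 MiB
    kv_alloc   = 318767104   # 304 MiB

    total = inc_alloc + full_alloc + kv_alloc
    print(total, total / cache_size)   # 1015021568, ~0.995 of the target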
Feb  2 05:16:57 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1129: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:16:58 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:16:58 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:16:58 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:16:58.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:16:58 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:16:58 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:16:58 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:16:58.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:16:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:16:58.860Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:16:59 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1130: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:17:00 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:17:00 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:17:00 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:17:00.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:17:00 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:17:00 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:17:00 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:17:00.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:17:00 np0005604790 nova_compute[252672]: 2026-02-02 10:17:00.973 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:17:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:17:00 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:17:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:17:00 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:17:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:17:00 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:17:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:17:01 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:17:01 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1131: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:17:02 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:17:02 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:17:02 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:17:02.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:17:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 05:17:02 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 05:17:02 np0005604790 nova_compute[252672]: 2026-02-02 10:17:02.302 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:17:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:17:02 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:17:02 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:17:02 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:17:02.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:17:03 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1132: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:17:04 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:17:04 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:17:04 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:17:04.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:17:04 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:17:04 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:17:04 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:17:04.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:17:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:17:04] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Feb  2 05:17:04 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:17:04] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Feb  2 05:17:05 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1133: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:17:05 np0005604790 nova_compute[252672]: 2026-02-02 10:17:05.976 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:17:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:17:05 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:17:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:17:05 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:17:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:17:05 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:17:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:17:06 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:17:06 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:17:06 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:17:06 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:17:06.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:17:06 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:17:06 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:17:06 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:17:06.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:17:07 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:17:07.206Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:17:07 np0005604790 nova_compute[252672]: 2026-02-02 10:17:07.332 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:17:07 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:17:07 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1134: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:17:08 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:17:08 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:17:08 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:17:08.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:17:08 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:17:08 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:17:08 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:17:08.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:17:08 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:17:08.861Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:17:09 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1135: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:17:10 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:17:10 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:17:10 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:17:10.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:17:10 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:17:10 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:17:10 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:17:10.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:17:11 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:17:10 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:17:11 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:17:10 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:17:11 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:17:10 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:17:11 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:17:11 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:17:11 np0005604790 nova_compute[252672]: 2026-02-02 10:17:11.020 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:17:11 np0005604790 podman[284153]: 2026-02-02 10:17:11.366576322 +0000 UTC m=+0.087487989 container health_status e39122d9482da8df802204ab6c35fb7c982874580968140e6f06bdfc8eefae36 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260127, tcib_managed=true)
Feb  2 05:17:11 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1136: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:17:12 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:17:12 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:17:12 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:17:12.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:17:12 np0005604790 nova_compute[252672]: 2026-02-02 10:17:12.372 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:17:12 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:17:12 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:17:12 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:17:12 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:17:12.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:17:13 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1137: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:17:14 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:17:14 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:17:14 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:17:14.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:17:14 np0005604790 nova_compute[252672]: 2026-02-02 10:17:14.281 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:17:14 np0005604790 nova_compute[252672]: 2026-02-02 10:17:14.686 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 05:17:14 np0005604790 nova_compute[252672]: 2026-02-02 10:17:14.688 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 05:17:14 np0005604790 nova_compute[252672]: 2026-02-02 10:17:14.688 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 05:17:14 np0005604790 nova_compute[252672]: 2026-02-02 10:17:14.689 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb  2 05:17:14 np0005604790 nova_compute[252672]: 2026-02-02 10:17:14.689 252676 DEBUG oslo_concurrency.processutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb  2 05:17:14 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:17:14 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:17:14 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:17:14.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:17:14 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:17:14] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Feb  2 05:17:14 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:17:14] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Feb  2 05:17:15 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 05:17:15 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3368655149' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb  2 05:17:15 np0005604790 nova_compute[252672]: 2026-02-02 10:17:15.628 252676 DEBUG oslo_concurrency.processutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.939s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 05:17:15 np0005604790 nova_compute[252672]: 2026-02-02 10:17:15.771 252676 WARNING nova.virt.libvirt.driver [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb  2 05:17:15 np0005604790 nova_compute[252672]: 2026-02-02 10:17:15.772 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4543MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb  2 05:17:15 np0005604790 nova_compute[252672]: 2026-02-02 10:17:15.773 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 05:17:15 np0005604790 nova_compute[252672]: 2026-02-02 10:17:15.773 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 05:17:15 np0005604790 nova_compute[252672]: 2026-02-02 10:17:15.846 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb  2 05:17:15 np0005604790 nova_compute[252672]: 2026-02-02 10:17:15.847 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb  2 05:17:15 np0005604790 nova_compute[252672]: 2026-02-02 10:17:15.864 252676 DEBUG oslo_concurrency.processutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb  2 05:17:15 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1138: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:17:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:17:15 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:17:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:17:15 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:17:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:17:15 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:17:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:17:16 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:17:16 np0005604790 nova_compute[252672]: 2026-02-02 10:17:16.063 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:17:16 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:17:16 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:17:16 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:17:16.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:17:16 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 05:17:16 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1487047632' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb  2 05:17:16 np0005604790 nova_compute[252672]: 2026-02-02 10:17:16.295 252676 DEBUG oslo_concurrency.processutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 05:17:16 np0005604790 nova_compute[252672]: 2026-02-02 10:17:16.300 252676 DEBUG nova.compute.provider_tree [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Inventory has not changed in ProviderTree for provider: 9e3db6fc-2145-4f13-bc7c-f7ae57d4e004 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb  2 05:17:16 np0005604790 nova_compute[252672]: 2026-02-02 10:17:16.345 252676 DEBUG nova.scheduler.client.report [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Inventory has not changed for provider 9e3db6fc-2145-4f13-bc7c-f7ae57d4e004 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb  2 05:17:16 np0005604790 nova_compute[252672]: 2026-02-02 10:17:16.351 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb  2 05:17:16 np0005604790 nova_compute[252672]: 2026-02-02 10:17:16.352 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.579s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
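The cycle from 10:17:15.846 to 10:17:16.352 is one pass of nova's resource tracker: it sizes DISK_GB by shelling out to ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf, compares the result against placement's inventory, and releases the compute_resources lock. A minimal sketch of that disk probe, assuming the standard ceph df JSON layout with a top-level "stats" object carrying "total_bytes" and "total_avail_bytes" (field names from current Ceph releases, not from this log):

    import json
    import subprocess

    def ceph_disk_gb(conf="/etc/ceph/ceph.conf", user="openstack"):
        # Same command oslo_concurrency.processutils logs above.
        out = subprocess.check_output(
            ["ceph", "df", "--format=json", "--id", user, "--conf", conf],
            timeout=30)
        stats = json.loads(out)["stats"]
        return stats["total_bytes"] // 1024**3, stats["total_avail_bytes"] // 1024**3

    print(ceph_disk_gb())  # e.g. (60, 59) for the 60 GiB cluster in the pgmap lines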
Feb  2 05:17:16 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:17:16 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:17:16 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:17:16.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:17:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] Optimize plan auto_2026-02-02_10:17:17
Feb  2 05:17:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 05:17:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] do_upmap
Feb  2 05:17:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] pools ['cephfs.cephfs.data', 'images', '.rgw.root', 'vms', 'backups', '.mgr', 'default.rgw.log', '.nfs', 'volumes', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.meta']
Feb  2 05:17:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] prepared 0/10 upmap changes
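The balancer pass above ran in upmap mode with a 5% misplaced ceiling and found nothing worth moving ("prepared 0/10 upmap changes"), which is expected on a cluster whose 353 PGs are all active+clean. The same state can be queried with the stock ceph balancer status command; a hedged sketch (the JSON field names are not taken from this log, so parse defensively):

    import json
    import subprocess

    # `ceph balancer status` is a standard mgr command; --format=json asks
    # for machine-readable output.
    out = subprocess.check_output(["ceph", "balancer", "status", "--format=json"])
    status = json.loads(out)
    print(status.get("active"), status.get("mode"))  # expect: True upmap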
Feb  2 05:17:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 05:17:17 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 05:17:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:17:17.207Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:17:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:17:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:17:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:17:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:17:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:17:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:17:17 np0005604790 nova_compute[252672]: 2026-02-02 10:17:17.429 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:17:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 05:17:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 05:17:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 05:17:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 05:17:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 05:17:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:17:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 05:17:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:17:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 05:17:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:17:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb  2 05:17:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:17:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:17:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:17:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:17:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:17:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Feb  2 05:17:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:17:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Feb  2 05:17:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:17:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:17:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:17:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 05:17:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:17:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Feb  2 05:17:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:17:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:17:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:17:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 05:17:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:17:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
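The pg_autoscaler numbers above are reproducible: each pool's pg target is its usage fraction times its bias times a cluster-wide PG budget, and 300 is the multiplier that makes every logged value come out exactly, consistent with mon_target_pg_per_osd=100 on three OSDs (an assumption about this cluster, not something the log states). A worked check:

    # Reproduce the pg_autoscaler arithmetic logged above.
    TARGET_PGS = 100 * 3  # assumed: mon_target_pg_per_osd x 3 OSDs

    def pg_target(usage_fraction, bias=1.0):
        return usage_fraction * bias * TARGET_PGS

    print(pg_target(7.185749983720779e-06))       # 0.0021557249951162337  ('.mgr')
    print(pg_target(5.087256625643029e-07, 4.0))  # 0.0006104707950771635  ('cephfs.cephfs.meta')
    print(pg_target(0.000665858301588852))        # 0.19975749047665559    ('images')

    # The "quantized to" step rounds to a power of two, but the real
    # autoscaler also honours pg_num_min/pg_num_max and only acts when the
    # target differs from the current pg_num by roughly 3x, which is why
    # these tiny targets leave the pools at 32 (or 16, or 1) PGs.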
Feb  2 05:17:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 05:17:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 05:17:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 05:17:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 05:17:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 05:17:17 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1139: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:17:18 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:17:18 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:17:18 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:17:18.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:17:18 np0005604790 nova_compute[252672]: 2026-02-02 10:17:18.353 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:17:18 np0005604790 nova_compute[252672]: 2026-02-02 10:17:18.354 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb  2 05:17:18 np0005604790 nova_compute[252672]: 2026-02-02 10:17:18.354 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb  2 05:17:18 np0005604790 nova_compute[252672]: 2026-02-02 10:17:18.392 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb  2 05:17:18 np0005604790 nova_compute[252672]: 2026-02-02 10:17:18.392 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:17:18 np0005604790 nova_compute[252672]: 2026-02-02 10:17:18.392 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:17:18 np0005604790 nova_compute[252672]: 2026-02-02 10:17:18.392 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb  2 05:17:18 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:17:18.862Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:17:18 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:17:18 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:17:18 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:17:18.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:17:19 np0005604790 nova_compute[252672]: 2026-02-02 10:17:19.281 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:17:19 np0005604790 podman[284232]: 2026-02-02 10:17:19.380988256 +0000 UTC m=+0.097108536 container health_status 29bea7eb4976451e3dfdb1e9a3aba30b10e64cbce131bf8d09548b4a224f8ee8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
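The podman line above is a scheduled health probe of the ovn_metadata_agent container; health_status=healthy is the result of running the configured test command (/openstack/healthcheck) inside it. The same probe can be fired by hand with the stock podman healthcheck run subcommand, wrapped here in Python for consistency with the other sketches (exit code 0 means healthy):

    import subprocess

    def container_healthy(name="ovn_metadata_agent"):
        # Runs the container's configured healthcheck once; the exit code
        # carries the verdict.
        return subprocess.run(["podman", "healthcheck", "run", name]).returncode == 0

    print(container_healthy())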
Feb  2 05:17:19 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1140: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:17:20 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:17:20 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:17:20 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:17:20.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:17:20 np0005604790 nova_compute[252672]: 2026-02-02 10:17:20.281 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:17:20 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:17:20 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:17:20 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:17:20.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:17:21 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:17:20 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:17:21 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:17:20 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:17:21 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:17:20 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:17:21 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:17:21 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:17:21 np0005604790 nova_compute[252672]: 2026-02-02 10:17:21.107 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:17:21 np0005604790 nova_compute[252672]: 2026-02-02 10:17:21.281 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:17:21 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1141: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:17:22 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:17:22 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:17:22 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:17:22.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:17:22 np0005604790 nova_compute[252672]: 2026-02-02 10:17:22.475 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:17:22 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:17:22 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:17:22 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:17:22 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:17:22.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:17:23 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1142: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:17:24 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:17:24 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:17:24 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:17:24.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:17:24 np0005604790 nova_compute[252672]: 2026-02-02 10:17:24.278 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:17:24 np0005604790 nova_compute[252672]: 2026-02-02 10:17:24.281 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:17:24 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:17:24] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Feb  2 05:17:24 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:17:24] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Feb  2 05:17:24 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:17:24 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:17:24 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:17:24.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:17:25 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1143: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:17:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:17:25 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:17:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:17:25 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:17:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:17:25 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:17:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:17:26 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:17:26 np0005604790 nova_compute[252672]: 2026-02-02 10:17:26.111 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:17:26 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:17:26 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:17:26 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:17:26.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:17:26 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:17:26 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:17:26 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:17:26.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:17:27 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:17:27.208Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:17:27 np0005604790 nova_compute[252672]: 2026-02-02 10:17:27.513 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:17:27 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:17:27 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1144: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:17:28 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:17:28 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:17:28 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:17:28.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:17:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:17:28.863Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb  2 05:17:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:17:28.864Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
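Alertmanager has been failing to deliver the ceph-dashboard webhook to compute-1 and compute-2 on port 8443 throughout this window, first as "context deadline exceeded" and here as an explicit dial tcp ... i/o timeout, so the receivers are down or the path to them is filtered. A stdlib probe to reproduce the symptom from this node; the URL is copied from the log, the 5-second timeout is arbitrary:

    import urllib.error
    import urllib.request

    url = "http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver"
    req = urllib.request.Request(url, data=b"{}", method="POST")  # Alertmanager POSTs JSON
    try:
        urllib.request.urlopen(req, timeout=5)
        print("reachable")
    except urllib.error.URLError as exc:
        print("unreachable:", exc.reason)  # expect a timeout, matching the log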
Feb  2 05:17:28 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:17:28 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:17:28 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:17:28.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:17:29 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1145: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:17:30 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:17:30 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:17:30 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:17:30.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:17:30 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:17:30 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:17:30 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:17:30.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:17:31 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:17:30 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:17:31 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:17:30 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:17:31 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:17:30 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:17:31 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:17:31 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:17:31 np0005604790 nova_compute[252672]: 2026-02-02 10:17:31.168 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:17:31 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 05:17:31 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 05:17:31 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 05:17:31 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb  2 05:17:31 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1146: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 512 B/s rd, 0 op/s
Feb  2 05:17:31 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 05:17:31 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:17:31 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Feb  2 05:17:31 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:17:31 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 05:17:31 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb  2 05:17:31 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 05:17:31 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb  2 05:17:31 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 05:17:31 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 05:17:32 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 05:17:32 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
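The audit pairs above are the monitor's view of callers driving it over the JSON mon_command interface: cephadm's mgr fetching keys and minimal confs, and earlier client.openstack running df for nova. The same interface is exposed by the Python rados binding shipped with ceph-common; a sketch issuing that df call under the identity from this log, assuming a readable client.openstack keyring:

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack")
    cluster.connect()
    try:
        cmd = json.dumps({"prefix": "df", "format": "json"})
        ret, outbuf, outs = cluster.mon_command(cmd, b"")  # (status, stdout, stderr)
        if ret == 0:
            print(json.loads(outbuf)["stats"]["total_bytes"])
    finally:
        cluster.shutdown()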
Feb  2 05:17:32 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:17:32 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:17:32 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:17:32.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:17:32 np0005604790 podman[284462]: 2026-02-02 10:17:32.497735459 +0000 UTC m=+0.077947642 container create c4969b3a609459fa09d9f7a4b578b809c04eb7b624a36a3d56281317712ea9e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_austin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 05:17:32 np0005604790 podman[284462]: 2026-02-02 10:17:32.445426849 +0000 UTC m=+0.025639092 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:17:32 np0005604790 nova_compute[252672]: 2026-02-02 10:17:32.559 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:17:32 np0005604790 systemd[1]: Started libpod-conmon-c4969b3a609459fa09d9f7a4b578b809c04eb7b624a36a3d56281317712ea9e2.scope.
Feb  2 05:17:32 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:17:32 np0005604790 podman[284462]: 2026-02-02 10:17:32.61956531 +0000 UTC m=+0.199777453 container init c4969b3a609459fa09d9f7a4b578b809c04eb7b624a36a3d56281317712ea9e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_austin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 05:17:32 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:17:32 np0005604790 podman[284462]: 2026-02-02 10:17:32.624676242 +0000 UTC m=+0.204888385 container start c4969b3a609459fa09d9f7a4b578b809c04eb7b624a36a3d56281317712ea9e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_austin, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 05:17:32 np0005604790 podman[284462]: 2026-02-02 10:17:32.627737881 +0000 UTC m=+0.207950024 container attach c4969b3a609459fa09d9f7a4b578b809c04eb7b624a36a3d56281317712ea9e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_austin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 05:17:32 np0005604790 festive_austin[284478]: 167 167
Feb  2 05:17:32 np0005604790 systemd[1]: libpod-c4969b3a609459fa09d9f7a4b578b809c04eb7b624a36a3d56281317712ea9e2.scope: Deactivated successfully.
Feb  2 05:17:32 np0005604790 podman[284462]: 2026-02-02 10:17:32.629101976 +0000 UTC m=+0.209314119 container died c4969b3a609459fa09d9f7a4b578b809c04eb7b624a36a3d56281317712ea9e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_austin, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 05:17:32 np0005604790 systemd[1]: var-lib-containers-storage-overlay-a0c0779d6c1a874a863cb5c779b0069b91619495057fae344d95240625fded20-merged.mount: Deactivated successfully.
Feb  2 05:17:32 np0005604790 podman[284462]: 2026-02-02 10:17:32.658749851 +0000 UTC m=+0.238961994 container remove c4969b3a609459fa09d9f7a4b578b809c04eb7b624a36a3d56281317712ea9e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_austin, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Feb  2 05:17:32 np0005604790 systemd[1]: libpod-conmon-c4969b3a609459fa09d9f7a4b578b809c04eb7b624a36a3d56281317712ea9e2.scope: Deactivated successfully.
Feb  2 05:17:32 np0005604790 podman[284502]: 2026-02-02 10:17:32.815843604 +0000 UTC m=+0.058841089 container create 7a6d34586eaa3d03305f00c248b2da7d395561ab1f186d4f31175a2952b270a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_curran, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Feb  2 05:17:32 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb  2 05:17:32 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:17:32 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:17:32 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb  2 05:17:32 np0005604790 systemd[1]: Started libpod-conmon-7a6d34586eaa3d03305f00c248b2da7d395561ab1f186d4f31175a2952b270a5.scope.
Feb  2 05:17:32 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:17:32 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/877e236e32132ad6f6f5eeb6449013bcbaf9dc3aeb6ada1a647414ac9db8c2f2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 05:17:32 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/877e236e32132ad6f6f5eeb6449013bcbaf9dc3aeb6ada1a647414ac9db8c2f2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 05:17:32 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/877e236e32132ad6f6f5eeb6449013bcbaf9dc3aeb6ada1a647414ac9db8c2f2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 05:17:32 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/877e236e32132ad6f6f5eeb6449013bcbaf9dc3aeb6ada1a647414ac9db8c2f2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 05:17:32 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/877e236e32132ad6f6f5eeb6449013bcbaf9dc3aeb6ada1a647414ac9db8c2f2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 05:17:32 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:17:32 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:17:32 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:17:32.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:17:32 np0005604790 podman[284502]: 2026-02-02 10:17:32.791030193 +0000 UTC m=+0.034027739 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:17:32 np0005604790 podman[284502]: 2026-02-02 10:17:32.906766999 +0000 UTC m=+0.149764494 container init 7a6d34586eaa3d03305f00c248b2da7d395561ab1f186d4f31175a2952b270a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_curran, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb  2 05:17:32 np0005604790 podman[284502]: 2026-02-02 10:17:32.925545463 +0000 UTC m=+0.168542938 container start 7a6d34586eaa3d03305f00c248b2da7d395561ab1f186d4f31175a2952b270a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_curran, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 05:17:32 np0005604790 podman[284502]: 2026-02-02 10:17:32.929558237 +0000 UTC m=+0.172555742 container attach 7a6d34586eaa3d03305f00c248b2da7d395561ab1f186d4f31175a2952b270a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_curran, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 05:17:33 np0005604790 lucid_curran[284519]: --> passed data devices: 0 physical, 1 LVM
Feb  2 05:17:33 np0005604790 lucid_curran[284519]: --> All data devices are unavailable
Feb  2 05:17:33 np0005604790 systemd[1]: libpod-7a6d34586eaa3d03305f00c248b2da7d395561ab1f186d4f31175a2952b270a5.scope: Deactivated successfully.
Feb  2 05:17:33 np0005604790 podman[284502]: 2026-02-02 10:17:33.22583219 +0000 UTC m=+0.468829655 container died 7a6d34586eaa3d03305f00c248b2da7d395561ab1f186d4f31175a2952b270a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_curran, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default)
Feb  2 05:17:33 np0005604790 systemd[1]: var-lib-containers-storage-overlay-877e236e32132ad6f6f5eeb6449013bcbaf9dc3aeb6ada1a647414ac9db8c2f2-merged.mount: Deactivated successfully.
Feb  2 05:17:33 np0005604790 podman[284502]: 2026-02-02 10:17:33.276571488 +0000 UTC m=+0.519568973 container remove 7a6d34586eaa3d03305f00c248b2da7d395561ab1f186d4f31175a2952b270a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_curran, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb  2 05:17:33 np0005604790 systemd[1]: libpod-conmon-7a6d34586eaa3d03305f00c248b2da7d395561ab1f186d4f31175a2952b270a5.scope: Deactivated successfully.
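The throwaway ceph containers above (festive_austin printing "167 167", lucid_curran reporting "passed data devices: 0 physical, 1 LVM" and "All data devices are unavailable") are cephadm's periodic device scan: short-lived runs of the ceph image that check file ownership and ask ceph-volume whether any disk could back a new OSD; here nothing is claimable, so no OSD is created. The same inventory can be pulled manually with the real ceph-volume inventory command inside the image this log references; the field names in the loop are the usual ceph-volume JSON keys, stated here as an assumption:

    import json
    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")

    # --privileged and /dev are needed so ceph-volume can see the host's disks.
    out = subprocess.check_output(
        ["podman", "run", "--rm", "--privileged", "-v", "/dev:/dev",
         IMAGE, "ceph-volume", "inventory", "--format", "json"])
    for dev in json.loads(out):
        print(dev["path"], "available" if dev["available"] else "rejected")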
Feb  2 05:17:33 np0005604790 podman[284639]: 2026-02-02 10:17:33.82866743 +0000 UTC m=+0.052403133 container create ffa833a3d73200cf926c4f073c7962784482fa2a99d54afbc928c659e92abdef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_lamarr, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb  2 05:17:33 np0005604790 systemd[1]: Started libpod-conmon-ffa833a3d73200cf926c4f073c7962784482fa2a99d54afbc928c659e92abdef.scope.
Feb  2 05:17:33 np0005604790 podman[284639]: 2026-02-02 10:17:33.808326015 +0000 UTC m=+0.032061758 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:17:33 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:17:33 np0005604790 podman[284639]: 2026-02-02 10:17:33.936327837 +0000 UTC m=+0.160063530 container init ffa833a3d73200cf926c4f073c7962784482fa2a99d54afbc928c659e92abdef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb  2 05:17:33 np0005604790 podman[284639]: 2026-02-02 10:17:33.942009154 +0000 UTC m=+0.165744827 container start ffa833a3d73200cf926c4f073c7962784482fa2a99d54afbc928c659e92abdef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_lamarr, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Feb  2 05:17:33 np0005604790 musing_lamarr[284656]: 167 167
Feb  2 05:17:33 np0005604790 systemd[1]: libpod-ffa833a3d73200cf926c4f073c7962784482fa2a99d54afbc928c659e92abdef.scope: Deactivated successfully.
Feb  2 05:17:33 np0005604790 podman[284639]: 2026-02-02 10:17:33.946032928 +0000 UTC m=+0.169768591 container attach ffa833a3d73200cf926c4f073c7962784482fa2a99d54afbc928c659e92abdef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_lamarr, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb  2 05:17:33 np0005604790 podman[284639]: 2026-02-02 10:17:33.947345851 +0000 UTC m=+0.171081584 container died ffa833a3d73200cf926c4f073c7962784482fa2a99d54afbc928c659e92abdef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_lamarr, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True)
Feb  2 05:17:33 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1147: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 769 B/s rd, 0 op/s
Feb  2 05:17:33 np0005604790 systemd[1]: var-lib-containers-storage-overlay-245f561cf5d08e98c9d570d0a4a2dc89b7357c8b54cd1df466fd870f141e1b4f-merged.mount: Deactivated successfully.
Feb  2 05:17:33 np0005604790 podman[284639]: 2026-02-02 10:17:33.992600109 +0000 UTC m=+0.216335772 container remove ffa833a3d73200cf926c4f073c7962784482fa2a99d54afbc928c659e92abdef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_lamarr, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb  2 05:17:33 np0005604790 systemd[1]: libpod-conmon-ffa833a3d73200cf926c4f073c7962784482fa2a99d54afbc928c659e92abdef.scope: Deactivated successfully.
Feb  2 05:17:34 np0005604790 podman[284682]: 2026-02-02 10:17:34.168364933 +0000 UTC m=+0.053524122 container create ab7d28a83a57dfbbaf42ea1f4f37463e60d176805964e4cdcac6a42c149d5e01 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_lumiere, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb  2 05:17:34 np0005604790 systemd[1]: Started libpod-conmon-ab7d28a83a57dfbbaf42ea1f4f37463e60d176805964e4cdcac6a42c149d5e01.scope.
Feb  2 05:17:34 np0005604790 podman[284682]: 2026-02-02 10:17:34.139297843 +0000 UTC m=+0.024457082 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:17:34 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:17:34 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7b3052c331d60707348ca33fb0e22dccbedfe39c55a171a8fa7df6a0a2705c5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 05:17:34 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7b3052c331d60707348ca33fb0e22dccbedfe39c55a171a8fa7df6a0a2705c5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 05:17:34 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7b3052c331d60707348ca33fb0e22dccbedfe39c55a171a8fa7df6a0a2705c5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 05:17:34 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7b3052c331d60707348ca33fb0e22dccbedfe39c55a171a8fa7df6a0a2705c5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 05:17:34 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:17:34 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:17:34 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:17:34.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:17:34 np0005604790 podman[284682]: 2026-02-02 10:17:34.266478094 +0000 UTC m=+0.151637293 container init ab7d28a83a57dfbbaf42ea1f4f37463e60d176805964e4cdcac6a42c149d5e01 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_lumiere, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 05:17:34 np0005604790 podman[284682]: 2026-02-02 10:17:34.278974286 +0000 UTC m=+0.164133445 container start ab7d28a83a57dfbbaf42ea1f4f37463e60d176805964e4cdcac6a42c149d5e01 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_lumiere, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb  2 05:17:34 np0005604790 podman[284682]: 2026-02-02 10:17:34.287966028 +0000 UTC m=+0.173125197 container attach ab7d28a83a57dfbbaf42ea1f4f37463e60d176805964e4cdcac6a42c149d5e01 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_lumiere, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb  2 05:17:34 np0005604790 wizardly_lumiere[284698]: {
Feb  2 05:17:34 np0005604790 wizardly_lumiere[284698]:    "1": [
Feb  2 05:17:34 np0005604790 wizardly_lumiere[284698]:        {
Feb  2 05:17:34 np0005604790 wizardly_lumiere[284698]:            "devices": [
Feb  2 05:17:34 np0005604790 wizardly_lumiere[284698]:                "/dev/loop3"
Feb  2 05:17:34 np0005604790 wizardly_lumiere[284698]:            ],
Feb  2 05:17:34 np0005604790 wizardly_lumiere[284698]:            "lv_name": "ceph_lv0",
Feb  2 05:17:34 np0005604790 wizardly_lumiere[284698]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 05:17:34 np0005604790 wizardly_lumiere[284698]:            "lv_size": "21470642176",
Feb  2 05:17:34 np0005604790 wizardly_lumiere[284698]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d241d473-9fcb-5f74-b163-f1ca4454e7f1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fabfc705-a3af-416c-81a4-3fd4d777fb5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 05:17:34 np0005604790 wizardly_lumiere[284698]:            "lv_uuid": "lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a",
Feb  2 05:17:34 np0005604790 wizardly_lumiere[284698]:            "name": "ceph_lv0",
Feb  2 05:17:34 np0005604790 wizardly_lumiere[284698]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 05:17:34 np0005604790 wizardly_lumiere[284698]:            "tags": {
Feb  2 05:17:34 np0005604790 wizardly_lumiere[284698]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 05:17:34 np0005604790 wizardly_lumiere[284698]:                "ceph.block_uuid": "lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a",
Feb  2 05:17:34 np0005604790 wizardly_lumiere[284698]:                "ceph.cephx_lockbox_secret": "",
Feb  2 05:17:34 np0005604790 wizardly_lumiere[284698]:                "ceph.cluster_fsid": "d241d473-9fcb-5f74-b163-f1ca4454e7f1",
Feb  2 05:17:34 np0005604790 wizardly_lumiere[284698]:                "ceph.cluster_name": "ceph",
Feb  2 05:17:34 np0005604790 wizardly_lumiere[284698]:                "ceph.crush_device_class": "",
Feb  2 05:17:34 np0005604790 wizardly_lumiere[284698]:                "ceph.encrypted": "0",
Feb  2 05:17:34 np0005604790 wizardly_lumiere[284698]:                "ceph.osd_fsid": "fabfc705-a3af-416c-81a4-3fd4d777fb5f",
Feb  2 05:17:34 np0005604790 wizardly_lumiere[284698]:                "ceph.osd_id": "1",
Feb  2 05:17:34 np0005604790 wizardly_lumiere[284698]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 05:17:34 np0005604790 wizardly_lumiere[284698]:                "ceph.type": "block",
Feb  2 05:17:34 np0005604790 wizardly_lumiere[284698]:                "ceph.vdo": "0",
Feb  2 05:17:34 np0005604790 wizardly_lumiere[284698]:                "ceph.with_tpm": "0"
Feb  2 05:17:34 np0005604790 wizardly_lumiere[284698]:            },
Feb  2 05:17:34 np0005604790 wizardly_lumiere[284698]:            "type": "block",
Feb  2 05:17:34 np0005604790 wizardly_lumiere[284698]:            "vg_name": "ceph_vg0"
Feb  2 05:17:34 np0005604790 wizardly_lumiere[284698]:        }
Feb  2 05:17:34 np0005604790 wizardly_lumiere[284698]:    ]
Feb  2 05:17:34 np0005604790 wizardly_lumiere[284698]: }
Feb  2 05:17:34 np0005604790 systemd[1]: libpod-ab7d28a83a57dfbbaf42ea1f4f37463e60d176805964e4cdcac6a42c149d5e01.scope: Deactivated successfully.
Feb  2 05:17:34 np0005604790 podman[284682]: 2026-02-02 10:17:34.549048853 +0000 UTC m=+0.434208002 container died ab7d28a83a57dfbbaf42ea1f4f37463e60d176805964e4cdcac6a42c149d5e01 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_lumiere, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 05:17:34 np0005604790 systemd[1]: var-lib-containers-storage-overlay-e7b3052c331d60707348ca33fb0e22dccbedfe39c55a171a8fa7df6a0a2705c5-merged.mount: Deactivated successfully.
Feb  2 05:17:34 np0005604790 podman[284682]: 2026-02-02 10:17:34.622169209 +0000 UTC m=+0.507328378 container remove ab7d28a83a57dfbbaf42ea1f4f37463e60d176805964e4cdcac6a42c149d5e01 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325)
Feb  2 05:17:34 np0005604790 systemd[1]: libpod-conmon-ab7d28a83a57dfbbaf42ea1f4f37463e60d176805964e4cdcac6a42c149d5e01.scope: Deactivated successfully.
Feb  2 05:17:34 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:17:34] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Feb  2 05:17:34 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:17:34] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Feb  2 05:17:34 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:17:34 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000025s ======
Feb  2 05:17:34 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:17:34.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Feb  2 05:17:35 np0005604790 podman[284814]: 2026-02-02 10:17:35.168579624 +0000 UTC m=+0.045911515 container create 56a7d1ed5b6ac31424f48b540c8bc6ba5a1bd9c6803de091544fc3d17882b438 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_boyd, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Feb  2 05:17:35 np0005604790 systemd[1]: Started libpod-conmon-56a7d1ed5b6ac31424f48b540c8bc6ba5a1bd9c6803de091544fc3d17882b438.scope.
Feb  2 05:17:35 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:17:35 np0005604790 podman[284814]: 2026-02-02 10:17:35.150239341 +0000 UTC m=+0.027571282 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:17:35 np0005604790 podman[284814]: 2026-02-02 10:17:35.260054174 +0000 UTC m=+0.137386095 container init 56a7d1ed5b6ac31424f48b540c8bc6ba5a1bd9c6803de091544fc3d17882b438 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_boyd, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2)
Feb  2 05:17:35 np0005604790 podman[284814]: 2026-02-02 10:17:35.266182522 +0000 UTC m=+0.143514413 container start 56a7d1ed5b6ac31424f48b540c8bc6ba5a1bd9c6803de091544fc3d17882b438 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_boyd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 05:17:35 np0005604790 trusting_boyd[284830]: 167 167
Feb  2 05:17:35 np0005604790 systemd[1]: libpod-56a7d1ed5b6ac31424f48b540c8bc6ba5a1bd9c6803de091544fc3d17882b438.scope: Deactivated successfully.
Feb  2 05:17:35 np0005604790 podman[284814]: 2026-02-02 10:17:35.271781906 +0000 UTC m=+0.149113817 container attach 56a7d1ed5b6ac31424f48b540c8bc6ba5a1bd9c6803de091544fc3d17882b438 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_boyd, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 05:17:35 np0005604790 podman[284814]: 2026-02-02 10:17:35.272137965 +0000 UTC m=+0.149469876 container died 56a7d1ed5b6ac31424f48b540c8bc6ba5a1bd9c6803de091544fc3d17882b438 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_boyd, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb  2 05:17:35 np0005604790 systemd[1]: var-lib-containers-storage-overlay-4a35c274bc00bcd0c992a751c2e5305546b7b89fb3a586ce37dc653782f86631-merged.mount: Deactivated successfully.
Feb  2 05:17:35 np0005604790 podman[284814]: 2026-02-02 10:17:35.317480305 +0000 UTC m=+0.194812236 container remove 56a7d1ed5b6ac31424f48b540c8bc6ba5a1bd9c6803de091544fc3d17882b438 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_boyd, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Feb  2 05:17:35 np0005604790 systemd[1]: libpod-conmon-56a7d1ed5b6ac31424f48b540c8bc6ba5a1bd9c6803de091544fc3d17882b438.scope: Deactivated successfully.
Feb  2 05:17:35 np0005604790 podman[284854]: 2026-02-02 10:17:35.498424213 +0000 UTC m=+0.063025567 container create d41f1fc6d5aa06631286529d0e2aa5eaca89c9832c308d82b1307e1dfce6458e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_moser, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Feb  2 05:17:35 np0005604790 systemd[1]: Started libpod-conmon-d41f1fc6d5aa06631286529d0e2aa5eaca89c9832c308d82b1307e1dfce6458e.scope.
Feb  2 05:17:35 np0005604790 podman[284854]: 2026-02-02 10:17:35.472067073 +0000 UTC m=+0.036668487 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:17:35 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:17:35 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd794ac15eab99dd7c991545371966e6058d7ecee973375e242c64bda69acdb5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 05:17:35 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd794ac15eab99dd7c991545371966e6058d7ecee973375e242c64bda69acdb5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 05:17:35 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd794ac15eab99dd7c991545371966e6058d7ecee973375e242c64bda69acdb5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 05:17:35 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd794ac15eab99dd7c991545371966e6058d7ecee973375e242c64bda69acdb5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 05:17:35 np0005604790 podman[284854]: 2026-02-02 10:17:35.589992495 +0000 UTC m=+0.154593849 container init d41f1fc6d5aa06631286529d0e2aa5eaca89c9832c308d82b1307e1dfce6458e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_moser, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 05:17:35 np0005604790 podman[284854]: 2026-02-02 10:17:35.595103086 +0000 UTC m=+0.159704410 container start d41f1fc6d5aa06631286529d0e2aa5eaca89c9832c308d82b1307e1dfce6458e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_moser, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb  2 05:17:35 np0005604790 podman[284854]: 2026-02-02 10:17:35.599330886 +0000 UTC m=+0.163932240 container attach d41f1fc6d5aa06631286529d0e2aa5eaca89c9832c308d82b1307e1dfce6458e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_moser, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True)
Feb  2 05:17:35 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1148: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 512 B/s rd, 0 op/s
Feb  2 05:17:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:17:35 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:17:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:17:36 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:17:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:17:36 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:17:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:17:36 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:17:36 np0005604790 lvm[284946]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 05:17:36 np0005604790 lvm[284946]: VG ceph_vg0 finished
Feb  2 05:17:36 np0005604790 nova_compute[252672]: 2026-02-02 10:17:36.213 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:17:36 np0005604790 loving_moser[284870]: {}
Feb  2 05:17:36 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:17:36 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:17:36 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:17:36.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:17:36 np0005604790 systemd[1]: libpod-d41f1fc6d5aa06631286529d0e2aa5eaca89c9832c308d82b1307e1dfce6458e.scope: Deactivated successfully.
Feb  2 05:17:36 np0005604790 podman[284854]: 2026-02-02 10:17:36.28726368 +0000 UTC m=+0.851865014 container died d41f1fc6d5aa06631286529d0e2aa5eaca89c9832c308d82b1307e1dfce6458e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_moser, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb  2 05:17:36 np0005604790 systemd[1]: var-lib-containers-storage-overlay-dd794ac15eab99dd7c991545371966e6058d7ecee973375e242c64bda69acdb5-merged.mount: Deactivated successfully.
Feb  2 05:17:36 np0005604790 podman[284854]: 2026-02-02 10:17:36.328060873 +0000 UTC m=+0.892662207 container remove d41f1fc6d5aa06631286529d0e2aa5eaca89c9832c308d82b1307e1dfce6458e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Feb  2 05:17:36 np0005604790 systemd[1]: libpod-conmon-d41f1fc6d5aa06631286529d0e2aa5eaca89c9832c308d82b1307e1dfce6458e.scope: Deactivated successfully.
Feb  2 05:17:36 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 05:17:36 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:17:36 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 05:17:36 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:17:36 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:17:36 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:17:36 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:17:36.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:17:37 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:17:37.208Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:17:37 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:17:37 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:17:37 np0005604790 nova_compute[252672]: 2026-02-02 10:17:37.615 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:17:37 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:17:37 np0005604790 ceph-mon[74489]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #69. Immutable memtables: 0.
Feb  2 05:17:37 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:17:37.644102) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb  2 05:17:37 np0005604790 ceph-mon[74489]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 69
Feb  2 05:17:37 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770027457644165, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 1448, "num_deletes": 505, "total_data_size": 2144071, "memory_usage": 2209296, "flush_reason": "Manual Compaction"}
Feb  2 05:17:37 np0005604790 ceph-mon[74489]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #70: started
Feb  2 05:17:37 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770027457663165, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 70, "file_size": 2072083, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 31995, "largest_seqno": 33441, "table_properties": {"data_size": 2065840, "index_size": 2998, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 15397, "raw_average_key_size": 17, "raw_value_size": 2051204, "raw_average_value_size": 2357, "num_data_blocks": 132, "num_entries": 870, "num_filter_entries": 870, "num_deletions": 505, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770027353, "oldest_key_time": 1770027353, "file_creation_time": 1770027457, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "07840aea-639a-4cd3-a598-1774a042b57b", "db_session_id": "W2PO4QU95YGVZQBG6TZ2", "orig_file_number": 70, "seqno_to_time_mapping": "N/A"}}
Feb  2 05:17:37 np0005604790 ceph-mon[74489]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 19110 microseconds, and 4338 cpu microseconds.
Feb  2 05:17:37 np0005604790 ceph-mon[74489]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 05:17:37 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:17:37.663219) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #70: 2072083 bytes OK
Feb  2 05:17:37 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:17:37.663238) [db/memtable_list.cc:519] [default] Level-0 commit table #70 started
Feb  2 05:17:37 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:17:37.665240) [db/memtable_list.cc:722] [default] Level-0 commit table #70: memtable #1 done
Feb  2 05:17:37 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:17:37.665254) EVENT_LOG_v1 {"time_micros": 1770027457665250, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb  2 05:17:37 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:17:37.665274) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb  2 05:17:37 np0005604790 ceph-mon[74489]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 2136730, prev total WAL file size 2136730, number of live WAL files 2.
Feb  2 05:17:37 np0005604790 ceph-mon[74489]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000066.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 05:17:37 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:17:37.665793) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B7600323534' seq:72057594037927935, type:22 .. '6B7600353035' seq:0, type:0; will stop at (end)
Feb  2 05:17:37 np0005604790 ceph-mon[74489]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb  2 05:17:37 np0005604790 ceph-mon[74489]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [70(2023KB)], [68(13MB)]
Feb  2 05:17:37 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770027457665845, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [70], "files_L6": [68], "score": -1, "input_data_size": 15977728, "oldest_snapshot_seqno": -1}
Feb  2 05:17:37 np0005604790 ceph-mon[74489]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #71: 6453 keys, 14496840 bytes, temperature: kUnknown
Feb  2 05:17:37 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770027457820420, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 71, "file_size": 14496840, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14453361, "index_size": 26217, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16197, "raw_key_size": 169443, "raw_average_key_size": 26, "raw_value_size": 14336859, "raw_average_value_size": 2221, "num_data_blocks": 1037, "num_entries": 6453, "num_filter_entries": 6453, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770025063, "oldest_key_time": 0, "file_creation_time": 1770027457, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "07840aea-639a-4cd3-a598-1774a042b57b", "db_session_id": "W2PO4QU95YGVZQBG6TZ2", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Feb  2 05:17:37 np0005604790 ceph-mon[74489]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 05:17:37 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:17:37.820739) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 14496840 bytes
Feb  2 05:17:37 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:17:37.822531) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 103.3 rd, 93.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 13.3 +0.0 blob) out(13.8 +0.0 blob), read-write-amplify(14.7) write-amplify(7.0) OK, records in: 7478, records dropped: 1025 output_compression: NoCompression
Feb  2 05:17:37 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:17:37.822548) EVENT_LOG_v1 {"time_micros": 1770027457822540, "job": 38, "event": "compaction_finished", "compaction_time_micros": 154714, "compaction_time_cpu_micros": 38503, "output_level": 6, "num_output_files": 1, "total_output_size": 14496840, "num_input_records": 7478, "num_output_records": 6453, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb  2 05:17:37 np0005604790 ceph-mon[74489]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000070.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 05:17:37 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770027457822755, "job": 38, "event": "table_file_deletion", "file_number": 70}
Feb  2 05:17:37 np0005604790 ceph-mon[74489]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 05:17:37 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770027457823742, "job": 38, "event": "table_file_deletion", "file_number": 68}
Feb  2 05:17:37 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:17:37.665704) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 05:17:37 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:17:37.823856) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 05:17:37 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:17:37.823864) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 05:17:37 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:17:37.823867) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 05:17:37 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:17:37.823870) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 05:17:37 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:17:37.823872) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 05:17:37 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1149: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 512 B/s rd, 0 op/s
Feb  2 05:17:38 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:17:38 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:17:38 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:17:38.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:17:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:17:38.864Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:17:38 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:17:38 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:17:38 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:17:38.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:17:39 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1150: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 769 B/s rd, 0 op/s
Feb  2 05:17:40 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:17:40 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:17:40 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:17:40.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:17:40 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:17:40 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:17:40 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:17:40.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:17:41 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:17:41 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:17:41 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:17:41 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:17:41 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:17:41 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:17:41 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:17:41 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:17:41 np0005604790 nova_compute[252672]: 2026-02-02 10:17:41.255 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:17:41 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1151: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 512 B/s rd, 0 op/s
Feb  2 05:17:42 np0005604790 podman[285018]: 2026-02-02 10:17:42.196951533 +0000 UTC m=+0.110366658 container health_status e39122d9482da8df802204ab6c35fb7c982874580968140e6f06bdfc8eefae36 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2)
Feb  2 05:17:42 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:17:42 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:17:42 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:17:42.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:17:42 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:17:42 np0005604790 nova_compute[252672]: 2026-02-02 10:17:42.649 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:17:42 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:17:42 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:17:42 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:17:42.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:17:43 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1152: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:17:44 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:17:44 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:17:44 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:17:44.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:17:44 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:17:44] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Feb  2 05:17:44 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:17:44] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Feb  2 05:17:44 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:17:44 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:17:44 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:17:44.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:17:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:17:45.391 165364 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 05:17:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:17:45.392 165364 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 05:17:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:17:45.392 165364 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 05:17:45 np0005604790 ceph-mon[74489]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb  2 05:17:45 np0005604790 ceph-mon[74489]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.0 total, 600.0 interval#012Cumulative writes: 7396 writes, 33K keys, 7396 commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.02 MB/s#012Cumulative WAL: 7396 writes, 7396 syncs, 1.00 writes per sync, written: 0.06 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1638 writes, 7760 keys, 1638 commit groups, 1.0 writes per commit group, ingest: 11.97 MB, 0.02 MB/s#012Interval WAL: 1638 writes, 1638 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     68.2      0.78              0.14        19    0.041       0      0       0.0       0.0#012  L6      1/0   13.83 MB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   4.3    114.3     97.6      2.35              0.61        18    0.130    101K    10K       0.0       0.0#012 Sum      1/0   13.83 MB   0.0      0.3     0.1      0.2       0.3      0.1       0.0   5.3     85.9     90.3      3.12              0.75        37    0.084    101K    10K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   6.2     92.8     95.3      0.84              0.20        10    0.084     34K   3576       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   0.0    114.3     97.6      2.35              0.61        18    0.130    101K    10K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     68.6      0.77              0.14        18    0.043       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      9.7      0.01              0.00         1    0.006       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 2400.0 total, 600.0 interval#012Flush(GB): cumulative 0.052, interval 0.013#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.28 GB write, 0.12 MB/s write, 0.26 GB read, 0.11 MB/s read, 3.1 seconds#012Interval compaction: 0.08 GB write, 0.13 MB/s write, 0.08 GB read, 0.13 MB/s read, 0.8 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5630b94e5350#2 capacity: 304.00 MB usage: 24.25 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000212 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(1486,23.47 MB,7.72007%) FilterBlock(38,299.23 KB,0.0961254%) IndexBlock(38,498.61 KB,0.160172%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Feb  2 05:17:45 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1153: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:17:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:17:45 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:17:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:17:46 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:17:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:17:46 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:17:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:17:46 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:17:46 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:17:46 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:17:46 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:17:46.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:17:46 np0005604790 nova_compute[252672]: 2026-02-02 10:17:46.285 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:17:46 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:17:46 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:17:46 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:17:46.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:17:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 05:17:47 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 05:17:47 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:17:47.210Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:17:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:17:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:17:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:17:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:17:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:17:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:17:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:17:47 np0005604790 nova_compute[252672]: 2026-02-02 10:17:47.651 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:17:47 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1154: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:17:48 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:17:48 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:17:48 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:17:48.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:17:48 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:17:48.865Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb  2 05:17:48 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:17:48.865Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:17:48 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:17:48 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000025s ======
Feb  2 05:17:48 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:17:48.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Feb  2 05:17:49 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1155: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:17:50 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:17:50 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:17:50 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:17:50.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:17:50 np0005604790 podman[285053]: 2026-02-02 10:17:50.351569948 +0000 UTC m=+0.066182319 container health_status 29bea7eb4976451e3dfdb1e9a3aba30b10e64cbce131bf8d09548b4a224f8ee8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb  2 05:17:50 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:17:50 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:17:50 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:17:50.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:17:51 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:17:50 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:17:51 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:17:50 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:17:51 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:17:50 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:17:51 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:17:51 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:17:51 np0005604790 nova_compute[252672]: 2026-02-02 10:17:51.343 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:17:51 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1156: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:17:52 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:17:52 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:17:52 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:17:52.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:17:52 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:17:52 np0005604790 nova_compute[252672]: 2026-02-02 10:17:52.693 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:17:52 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:17:52 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:17:52 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:17:52.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:17:53 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1157: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:17:54 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:17:54 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:17:54 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:17:54.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:17:54 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:17:54] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Feb  2 05:17:54 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:17:54] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Feb  2 05:17:54 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:17:54 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:17:54 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:17:54.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:17:55 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1158: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:17:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:17:55 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:17:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:17:55 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:17:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:17:55 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:17:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:17:56 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:17:56 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:17:56 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:17:56 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:17:56.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:17:56 np0005604790 nova_compute[252672]: 2026-02-02 10:17:56.386 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:17:56 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:17:56 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:17:56 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:17:56.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:17:57 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:17:57.210Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:17:57 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:17:57 np0005604790 nova_compute[252672]: 2026-02-02 10:17:57.694 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:17:57 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1159: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:17:58 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:17:58 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:17:58 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:17:58.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:17:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:17:58.866Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:17:58 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:17:58 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:17:58 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:17:58.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:17:59 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1160: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:18:00 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:18:00 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:18:00 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:18:00.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:18:00 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:18:00 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:18:00 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:18:00.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:18:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:18:00 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:18:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:18:00 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:18:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:18:00 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:18:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:18:01 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:18:01 np0005604790 nova_compute[252672]: 2026-02-02 10:18:01.423 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:18:01 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1161: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:18:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 05:18:02 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 05:18:02 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:18:02 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:18:02 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:18:02.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:18:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:18:02 np0005604790 nova_compute[252672]: 2026-02-02 10:18:02.730 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:18:02 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:18:02 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:18:02 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:18:02.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:18:03 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1162: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:18:04 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:18:04 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000025s ======
Feb  2 05:18:04 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:18:04.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Feb  2 05:18:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:18:04] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Feb  2 05:18:04 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:18:04] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Feb  2 05:18:04 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:18:04 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:18:04 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:18:04.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:18:05 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1163: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:18:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:18:05 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:18:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:18:05 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:18:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:18:05 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:18:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:18:06 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:18:06 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:18:06 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:18:06 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:18:06.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:18:06 np0005604790 nova_compute[252672]: 2026-02-02 10:18:06.426 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:18:06 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:18:06 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:18:06 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:18:06.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:18:07 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:18:07.211Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb  2 05:18:07 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:18:07.211Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb  2 05:18:07 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:18:07 np0005604790 nova_compute[252672]: 2026-02-02 10:18:07.732 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:18:07 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1164: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:18:08 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:18:08 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:18:08 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:18:08.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:18:08 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:18:08.869Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:18:08 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:18:08 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:18:08 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:18:08.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:18:09 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1165: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:18:10 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:18:10 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:18:10 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:18:10.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:18:10 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:18:10 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:18:10 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:18:10.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:18:11 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:18:10 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:18:11 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:18:10 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:18:11 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:18:10 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:18:11 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:18:11 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:18:11 np0005604790 nova_compute[252672]: 2026-02-02 10:18:11.463 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:18:11 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1166: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:18:12 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:18:12 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:18:12 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:18:12.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:18:12 np0005604790 podman[285119]: 2026-02-02 10:18:12.407356534 +0000 UTC m=+0.122570033 container health_status e39122d9482da8df802204ab6c35fb7c982874580968140e6f06bdfc8eefae36 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb  2 05:18:12 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:18:12 np0005604790 nova_compute[252672]: 2026-02-02 10:18:12.762 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:18:12 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:18:12 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:18:12 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:18:12.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:18:15 np0005604790 nova_compute[252672]: 2026-02-02 10:18:14.282 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:18:15 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1167: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:18:15 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:18:15 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:18:15 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:18:14.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:18:15 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:18:15 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:18:15 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:18:15.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:18:15 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:18:15] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Feb  2 05:18:15 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:18:15] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Feb  2 05:18:15 np0005604790 nova_compute[252672]: 2026-02-02 10:18:15.058 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 05:18:15 np0005604790 nova_compute[252672]: 2026-02-02 10:18:15.059 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 05:18:15 np0005604790 nova_compute[252672]: 2026-02-02 10:18:15.059 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 05:18:15 np0005604790 nova_compute[252672]: 2026-02-02 10:18:15.059 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb  2 05:18:15 np0005604790 nova_compute[252672]: 2026-02-02 10:18:15.059 252676 DEBUG oslo_concurrency.processutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb  2 05:18:15 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 05:18:15 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1079793048' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb  2 05:18:15 np0005604790 nova_compute[252672]: 2026-02-02 10:18:15.537 252676 DEBUG oslo_concurrency.processutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 05:18:15 np0005604790 nova_compute[252672]: 2026-02-02 10:18:15.704 252676 WARNING nova.virt.libvirt.driver [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb  2 05:18:15 np0005604790 nova_compute[252672]: 2026-02-02 10:18:15.705 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4521MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb  2 05:18:15 np0005604790 nova_compute[252672]: 2026-02-02 10:18:15.705 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 05:18:15 np0005604790 nova_compute[252672]: 2026-02-02 10:18:15.706 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 05:18:15 np0005604790 nova_compute[252672]: 2026-02-02 10:18:15.867 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb  2 05:18:15 np0005604790 nova_compute[252672]: 2026-02-02 10:18:15.868 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb  2 05:18:15 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1168: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:18:15 np0005604790 nova_compute[252672]: 2026-02-02 10:18:15.981 252676 DEBUG nova.scheduler.client.report [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Refreshing inventories for resource provider 9e3db6fc-2145-4f13-bc7c-f7ae57d4e004 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb  2 05:18:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:18:15 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:18:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:18:15 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:18:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:18:15 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:18:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:18:16 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:18:16 np0005604790 nova_compute[252672]: 2026-02-02 10:18:16.007 252676 DEBUG nova.scheduler.client.report [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Updating ProviderTree inventory for provider 9e3db6fc-2145-4f13-bc7c-f7ae57d4e004 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb  2 05:18:16 np0005604790 nova_compute[252672]: 2026-02-02 10:18:16.007 252676 DEBUG nova.compute.provider_tree [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Updating inventory in ProviderTree for provider 9e3db6fc-2145-4f13-bc7c-f7ae57d4e004 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb  2 05:18:16 np0005604790 nova_compute[252672]: 2026-02-02 10:18:16.029 252676 DEBUG nova.scheduler.client.report [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Refreshing aggregate associations for resource provider 9e3db6fc-2145-4f13-bc7c-f7ae57d4e004, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb  2 05:18:16 np0005604790 nova_compute[252672]: 2026-02-02 10:18:16.063 252676 DEBUG nova.scheduler.client.report [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Refreshing trait associations for resource provider 9e3db6fc-2145-4f13-bc7c-f7ae57d4e004, traits: COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_AMD_SVM,COMPUTE_RESCUE_BFV,HW_CPU_X86_SSSE3,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NODE,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_ABM,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE,COMPUTE_VOLUME_EXTEND,COMPUTE_ACCELERATORS,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SHA,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_F16C,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SVM,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_IDE,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_CLMUL _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb  2 05:18:16 np0005604790 nova_compute[252672]: 2026-02-02 10:18:16.081 252676 DEBUG oslo_concurrency.processutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb  2 05:18:16 np0005604790 nova_compute[252672]: 2026-02-02 10:18:16.467 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:18:16 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 05:18:16 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3816661589' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb  2 05:18:16 np0005604790 nova_compute[252672]: 2026-02-02 10:18:16.571 252676 DEBUG oslo_concurrency.processutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 05:18:16 np0005604790 nova_compute[252672]: 2026-02-02 10:18:16.576 252676 DEBUG nova.compute.provider_tree [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Inventory has not changed in ProviderTree for provider: 9e3db6fc-2145-4f13-bc7c-f7ae57d4e004 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb  2 05:18:16 np0005604790 nova_compute[252672]: 2026-02-02 10:18:16.590 252676 DEBUG nova.scheduler.client.report [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Inventory has not changed for provider 9e3db6fc-2145-4f13-bc7c-f7ae57d4e004 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb  2 05:18:16 np0005604790 nova_compute[252672]: 2026-02-02 10:18:16.592 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb  2 05:18:16 np0005604790 nova_compute[252672]: 2026-02-02 10:18:16.592 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.887s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 05:18:16 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:18:16 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:18:16 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:18:16.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:18:17 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:18:17 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:18:17 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:18:17.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:18:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] Optimize plan auto_2026-02-02_10:18:17
Feb  2 05:18:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 05:18:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] do_upmap
Feb  2 05:18:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.data', '.nfs', '.mgr', 'backups', 'default.rgw.control', 'cephfs.cephfs.meta', 'images', 'vms', 'volumes', 'default.rgw.meta', 'default.rgw.log']
Feb  2 05:18:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 05:18:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 05:18:17 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 05:18:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:18:17.213Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:18:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:18:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:18:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:18:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:18:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:18:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:18:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 05:18:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 05:18:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 05:18:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 05:18:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 05:18:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:18:17 np0005604790 nova_compute[252672]: 2026-02-02 10:18:17.763 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:18:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 05:18:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 05:18:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 05:18:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:18:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 05:18:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:18:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb  2 05:18:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:18:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:18:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:18:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:18:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:18:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Feb  2 05:18:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:18:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Feb  2 05:18:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:18:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:18:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:18:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 05:18:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:18:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Feb  2 05:18:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:18:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:18:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:18:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 05:18:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:18:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb  2 05:18:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 05:18:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 05:18:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 05:18:17 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1169: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:18:18 np0005604790 nova_compute[252672]: 2026-02-02 10:18:18.592 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 05:18:18 np0005604790 nova_compute[252672]: 2026-02-02 10:18:18.592 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 05:18:18 np0005604790 nova_compute[252672]: 2026-02-02 10:18:18.593 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb  2 05:18:18 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:18:18.870Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:18:18 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:18:18 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:18:18 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:18:18.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:18:19 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:18:19 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:18:19 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:18:19.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:18:19 np0005604790 nova_compute[252672]: 2026-02-02 10:18:19.283 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 05:18:19 np0005604790 nova_compute[252672]: 2026-02-02 10:18:19.283 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb  2 05:18:19 np0005604790 nova_compute[252672]: 2026-02-02 10:18:19.284 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb  2 05:18:19 np0005604790 nova_compute[252672]: 2026-02-02 10:18:19.300 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb  2 05:18:19 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1170: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:18:20 np0005604790 nova_compute[252672]: 2026-02-02 10:18:20.281 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 05:18:20 np0005604790 nova_compute[252672]: 2026-02-02 10:18:20.282 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 05:18:20 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:18:20 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:18:20 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:18:20.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:18:21 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:18:20 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:18:21 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:18:20 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:18:21 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:18:20 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:18:21 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:18:21 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:18:21 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:18:21 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:18:21 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:18:21.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:18:21 np0005604790 nova_compute[252672]: 2026-02-02 10:18:21.293 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 05:18:21 np0005604790 nova_compute[252672]: 2026-02-02 10:18:21.293 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 05:18:21 np0005604790 nova_compute[252672]: 2026-02-02 10:18:21.293 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Feb  2 05:18:21 np0005604790 nova_compute[252672]: 2026-02-02 10:18:21.308 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Feb  2 05:18:21 np0005604790 podman[285197]: 2026-02-02 10:18:21.349012358 +0000 UTC m=+0.063277160 container health_status 29bea7eb4976451e3dfdb1e9a3aba30b10e64cbce131bf8d09548b4a224f8ee8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb  2 05:18:21 np0005604790 nova_compute[252672]: 2026-02-02 10:18:21.508 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:18:21 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1171: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:18:22 np0005604790 nova_compute[252672]: 2026-02-02 10:18:22.297 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 05:18:22 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-crash-compute-0[79739]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Feb  2 05:18:22 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:18:22 np0005604790 nova_compute[252672]: 2026-02-02 10:18:22.767 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:18:22 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:18:22 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:18:22 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:18:22.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:18:23 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:18:23 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:18:23 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:18:23.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:18:23 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1172: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:18:24 np0005604790 nova_compute[252672]: 2026-02-02 10:18:24.277 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 05:18:24 np0005604790 nova_compute[252672]: 2026-02-02 10:18:24.281 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 05:18:24 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:18:24] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Feb  2 05:18:24 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:18:24] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Feb  2 05:18:24 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:18:24 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:18:24 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:18:24.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:18:25 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:18:25 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:18:25 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:18:25.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:18:25 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1173: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:18:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:18:25 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:18:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:18:25 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:18:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:18:25 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:18:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:18:26 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:18:26 np0005604790 nova_compute[252672]: 2026-02-02 10:18:26.511 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:18:26 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:18:26 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:18:26 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:18:26.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:18:27 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:18:27 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:18:27 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:18:27.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:18:27 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:18:27.214Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:18:27 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:18:27 np0005604790 nova_compute[252672]: 2026-02-02 10:18:27.769 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:18:27 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1174: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:18:28 np0005604790 nova_compute[252672]: 2026-02-02 10:18:28.278 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 05:18:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:18:28.870Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:18:28 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:18:28 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:18:28 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:18:28.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:18:29 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:18:29 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:18:29 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:18:29.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:18:29 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1175: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:18:30 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:18:30 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:18:30 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:18:30.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:18:31 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:18:30 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:18:31 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:18:30 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:18:31 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:18:30 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:18:31 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:18:31 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:18:31 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:18:31 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:18:31 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:18:31.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:18:31 np0005604790 nova_compute[252672]: 2026-02-02 10:18:31.542 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:18:31 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1176: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:18:32 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 05:18:32 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 05:18:32 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:18:32 np0005604790 nova_compute[252672]: 2026-02-02 10:18:32.801 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:18:32 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:18:32 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:18:32 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:18:32.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:18:33 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:18:33 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:18:33 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:18:33.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:18:33 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1177: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:18:34 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:18:34] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Feb  2 05:18:34 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:18:34] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Feb  2 05:18:34 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:18:34 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:18:34 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:18:34.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:18:35 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:18:35 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:18:35 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:18:35.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:18:35 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1178: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:18:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:18:35 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:18:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:18:35 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:18:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:18:35 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:18:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:18:36 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:18:36 np0005604790 nova_compute[252672]: 2026-02-02 10:18:36.545 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:18:36 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:18:36 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:18:36 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:18:36.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:18:37 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:18:37 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:18:37 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:18:37.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:18:37 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 05:18:37 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:18:37 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 05:18:37 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:18:37 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:18:37.215Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:18:37 np0005604790 nova_compute[252672]: 2026-02-02 10:18:37.282 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 05:18:37 np0005604790 nova_compute[252672]: 2026-02-02 10:18:37.282 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Feb  2 05:18:37 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:18:37 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:18:37 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:18:37 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 05:18:37 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 05:18:37 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 05:18:37 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb  2 05:18:37 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1179: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 521 B/s rd, 0 op/s
Feb  2 05:18:37 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 05:18:37 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:18:37 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Feb  2 05:18:37 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:18:37 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 05:18:37 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb  2 05:18:37 np0005604790 nova_compute[252672]: 2026-02-02 10:18:37.803 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:18:37 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 05:18:37 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb  2 05:18:37 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 05:18:37 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 05:18:38 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb  2 05:18:38 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:18:38 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:18:38 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb  2 05:18:38 np0005604790 podman[285504]: 2026-02-02 10:18:38.422951785 +0000 UTC m=+0.069408382 container create 182da3ca0e748835d062e9d722903ea7f3b885a5e0e6acc2cb9ff7d1b89478f6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_keller, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 05:18:38 np0005604790 systemd[1]: Started libpod-conmon-182da3ca0e748835d062e9d722903ea7f3b885a5e0e6acc2cb9ff7d1b89478f6.scope.
Feb  2 05:18:38 np0005604790 podman[285504]: 2026-02-02 10:18:38.392515618 +0000 UTC m=+0.038972225 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:18:38 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:18:38 np0005604790 podman[285504]: 2026-02-02 10:18:38.511249537 +0000 UTC m=+0.157706174 container init 182da3ca0e748835d062e9d722903ea7f3b885a5e0e6acc2cb9ff7d1b89478f6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_keller, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 05:18:38 np0005604790 podman[285504]: 2026-02-02 10:18:38.519566048 +0000 UTC m=+0.166022635 container start 182da3ca0e748835d062e9d722903ea7f3b885a5e0e6acc2cb9ff7d1b89478f6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_keller, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 05:18:38 np0005604790 podman[285504]: 2026-02-02 10:18:38.523280797 +0000 UTC m=+0.169737444 container attach 182da3ca0e748835d062e9d722903ea7f3b885a5e0e6acc2cb9ff7d1b89478f6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_keller, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid)
Feb  2 05:18:38 np0005604790 elastic_keller[285520]: 167 167
Feb  2 05:18:38 np0005604790 systemd[1]: libpod-182da3ca0e748835d062e9d722903ea7f3b885a5e0e6acc2cb9ff7d1b89478f6.scope: Deactivated successfully.
Feb  2 05:18:38 np0005604790 podman[285504]: 2026-02-02 10:18:38.526949274 +0000 UTC m=+0.173405861 container died 182da3ca0e748835d062e9d722903ea7f3b885a5e0e6acc2cb9ff7d1b89478f6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_keller, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 05:18:38 np0005604790 systemd[1]: var-lib-containers-storage-overlay-074c429acc44b1d51d6d57fd2af0ad41f87773191fe8b70655f56cd725b503d2-merged.mount: Deactivated successfully.
Feb  2 05:18:38 np0005604790 podman[285504]: 2026-02-02 10:18:38.574420513 +0000 UTC m=+0.220877080 container remove 182da3ca0e748835d062e9d722903ea7f3b885a5e0e6acc2cb9ff7d1b89478f6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_keller, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Feb  2 05:18:38 np0005604790 systemd[1]: libpod-conmon-182da3ca0e748835d062e9d722903ea7f3b885a5e0e6acc2cb9ff7d1b89478f6.scope: Deactivated successfully.
Feb  2 05:18:38 np0005604790 podman[285546]: 2026-02-02 10:18:38.772849047 +0000 UTC m=+0.063607239 container create b5ec11cb0e29cf115b58605d14a7a693e5e177655b008f4035b99f7ef9eeaf81 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_allen, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 05:18:38 np0005604790 systemd[1]: Started libpod-conmon-b5ec11cb0e29cf115b58605d14a7a693e5e177655b008f4035b99f7ef9eeaf81.scope.
Feb  2 05:18:38 np0005604790 podman[285546]: 2026-02-02 10:18:38.743364084 +0000 UTC m=+0.034122336 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:18:38 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:18:38 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d33a2a29e12d0059c8cb01b30cd176c41a20165a061acb949a7d68db2217c75f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 05:18:38 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d33a2a29e12d0059c8cb01b30cd176c41a20165a061acb949a7d68db2217c75f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 05:18:38 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d33a2a29e12d0059c8cb01b30cd176c41a20165a061acb949a7d68db2217c75f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 05:18:38 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d33a2a29e12d0059c8cb01b30cd176c41a20165a061acb949a7d68db2217c75f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 05:18:38 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d33a2a29e12d0059c8cb01b30cd176c41a20165a061acb949a7d68db2217c75f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 05:18:38 np0005604790 podman[285546]: 2026-02-02 10:18:38.872163441 +0000 UTC m=+0.162921703 container init b5ec11cb0e29cf115b58605d14a7a693e5e177655b008f4035b99f7ef9eeaf81 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 05:18:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:18:38.871Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:18:38 np0005604790 podman[285546]: 2026-02-02 10:18:38.886969114 +0000 UTC m=+0.177727316 container start b5ec11cb0e29cf115b58605d14a7a693e5e177655b008f4035b99f7ef9eeaf81 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_allen, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Feb  2 05:18:38 np0005604790 podman[285546]: 2026-02-02 10:18:38.891516804 +0000 UTC m=+0.182275006 container attach b5ec11cb0e29cf115b58605d14a7a693e5e177655b008f4035b99f7ef9eeaf81 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_allen, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 05:18:38 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:18:38 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:18:38 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:18:38.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:18:39 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:18:39 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:18:39 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:18:39.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:18:39 np0005604790 interesting_allen[285562]: --> passed data devices: 0 physical, 1 LVM
Feb  2 05:18:39 np0005604790 interesting_allen[285562]: --> All data devices are unavailable
Feb  2 05:18:39 np0005604790 systemd[1]: libpod-b5ec11cb0e29cf115b58605d14a7a693e5e177655b008f4035b99f7ef9eeaf81.scope: Deactivated successfully.
Feb  2 05:18:39 np0005604790 podman[285546]: 2026-02-02 10:18:39.239994888 +0000 UTC m=+0.530753080 container died b5ec11cb0e29cf115b58605d14a7a693e5e177655b008f4035b99f7ef9eeaf81 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Feb  2 05:18:39 np0005604790 systemd[1]: var-lib-containers-storage-overlay-d33a2a29e12d0059c8cb01b30cd176c41a20165a061acb949a7d68db2217c75f-merged.mount: Deactivated successfully.
Feb  2 05:18:39 np0005604790 podman[285546]: 2026-02-02 10:18:39.290693873 +0000 UTC m=+0.581452065 container remove b5ec11cb0e29cf115b58605d14a7a693e5e177655b008f4035b99f7ef9eeaf81 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_allen, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True)
Feb  2 05:18:39 np0005604790 systemd[1]: libpod-conmon-b5ec11cb0e29cf115b58605d14a7a693e5e177655b008f4035b99f7ef9eeaf81.scope: Deactivated successfully.
Feb  2 05:18:39 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1180: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 781 B/s rd, 0 op/s
Feb  2 05:18:39 np0005604790 podman[285676]: 2026-02-02 10:18:39.890831692 +0000 UTC m=+0.051863327 container create 6ffdbe9b3c44e928ca6bc491771a840954a81548b38eab80951499ac3092a24c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Feb  2 05:18:39 np0005604790 systemd[1]: Started libpod-conmon-6ffdbe9b3c44e928ca6bc491771a840954a81548b38eab80951499ac3092a24c.scope.
Feb  2 05:18:39 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:18:39 np0005604790 podman[285676]: 2026-02-02 10:18:39.870652227 +0000 UTC m=+0.031683892 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:18:39 np0005604790 podman[285676]: 2026-02-02 10:18:39.971143583 +0000 UTC m=+0.132175298 container init 6ffdbe9b3c44e928ca6bc491771a840954a81548b38eab80951499ac3092a24c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_wilbur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Feb  2 05:18:39 np0005604790 podman[285676]: 2026-02-02 10:18:39.980718186 +0000 UTC m=+0.141749831 container start 6ffdbe9b3c44e928ca6bc491771a840954a81548b38eab80951499ac3092a24c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_wilbur, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 05:18:39 np0005604790 zen_wilbur[285693]: 167 167
Feb  2 05:18:39 np0005604790 systemd[1]: libpod-6ffdbe9b3c44e928ca6bc491771a840954a81548b38eab80951499ac3092a24c.scope: Deactivated successfully.
Feb  2 05:18:39 np0005604790 podman[285676]: 2026-02-02 10:18:39.988096552 +0000 UTC m=+0.149128277 container attach 6ffdbe9b3c44e928ca6bc491771a840954a81548b38eab80951499ac3092a24c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_wilbur, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 05:18:39 np0005604790 podman[285676]: 2026-02-02 10:18:39.988557245 +0000 UTC m=+0.149588950 container died 6ffdbe9b3c44e928ca6bc491771a840954a81548b38eab80951499ac3092a24c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_wilbur, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Feb  2 05:18:40 np0005604790 systemd[1]: var-lib-containers-storage-overlay-43cdd85a9938eb9dcf088539e53bad1b88a9ce28a38f221016b33f592e54170d-merged.mount: Deactivated successfully.
Feb  2 05:18:40 np0005604790 podman[285676]: 2026-02-02 10:18:40.030703932 +0000 UTC m=+0.191735577 container remove 6ffdbe9b3c44e928ca6bc491771a840954a81548b38eab80951499ac3092a24c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_wilbur, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Feb  2 05:18:40 np0005604790 systemd[1]: libpod-conmon-6ffdbe9b3c44e928ca6bc491771a840954a81548b38eab80951499ac3092a24c.scope: Deactivated successfully.
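The podman events above (init, start, attach, died, remove, then the conmon scope teardown) complete within roughly 60 ms of podman monotonic time, the signature of a short-lived helper container; the `167 167` printed by zen_wilbur looks like a uid/gid probe of the ceph image (167 is the standard ceph uid/gid in these containers), though the log does not say so explicitly. A minimal sketch for grouping such events by container ID, assuming the `name=` label always directly follows `image=` as it does in every event in this slice:

```python
import re
from collections import defaultdict

# Matches podman journal events like the ones above, e.g.
#   "... container start 6ffdbe9b... (image=quay.io/ceph/ceph@sha256:..., name=zen_wilbur, ...)"
# Assumes name= directly follows image=, as in every event in this slice.
EVENT_RE = re.compile(
    r"container (?P<event>\w+) (?P<cid>[0-9a-f]{64}) "
    r"\(image=(?P<image>[^,]+), name=(?P<name>[^,)]+)"
)

def container_lifecycles(lines):
    """Group podman container events by container ID, in log order."""
    timeline = defaultdict(list)
    for line in lines:
        m = EVENT_RE.search(line)
        if m:
            timeline[m["cid"]].append(m["event"])
    return timeline
```

Fed the lines of this section, it reconstructs zen_wilbur's init, start, attach, died, remove sequence under its 64-hex container ID.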
Feb  2 05:18:40 np0005604790 podman[285719]: 2026-02-02 10:18:40.191551139 +0000 UTC m=+0.040101175 container create eb68812ea00aaf19d6c3acf03f898a0c4f3e0c5292091d8563a10e49b80be0f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_kepler, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 05:18:40 np0005604790 systemd[1]: Started libpod-conmon-eb68812ea00aaf19d6c3acf03f898a0c4f3e0c5292091d8563a10e49b80be0f9.scope.
Feb  2 05:18:40 np0005604790 podman[285719]: 2026-02-02 10:18:40.173329496 +0000 UTC m=+0.021879552 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:18:40 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:18:40 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42f19ec0a4d880610027bac8acc4aef48584801449fda7a41db3cef4588c4d2e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 05:18:40 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42f19ec0a4d880610027bac8acc4aef48584801449fda7a41db3cef4588c4d2e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 05:18:40 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42f19ec0a4d880610027bac8acc4aef48584801449fda7a41db3cef4588c4d2e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 05:18:40 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42f19ec0a4d880610027bac8acc4aef48584801449fda7a41db3cef4588c4d2e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 05:18:40 np0005604790 podman[285719]: 2026-02-02 10:18:40.294462919 +0000 UTC m=+0.143012935 container init eb68812ea00aaf19d6c3acf03f898a0c4f3e0c5292091d8563a10e49b80be0f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_kepler, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325)
Feb  2 05:18:40 np0005604790 podman[285719]: 2026-02-02 10:18:40.303765666 +0000 UTC m=+0.152315692 container start eb68812ea00aaf19d6c3acf03f898a0c4f3e0c5292091d8563a10e49b80be0f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_kepler, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 05:18:40 np0005604790 podman[285719]: 2026-02-02 10:18:40.306873398 +0000 UTC m=+0.155423564 container attach eb68812ea00aaf19d6c3acf03f898a0c4f3e0c5292091d8563a10e49b80be0f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_kepler, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 05:18:40 np0005604790 strange_kepler[285735]: {
Feb  2 05:18:40 np0005604790 strange_kepler[285735]:    "1": [
Feb  2 05:18:40 np0005604790 strange_kepler[285735]:        {
Feb  2 05:18:40 np0005604790 strange_kepler[285735]:            "devices": [
Feb  2 05:18:40 np0005604790 strange_kepler[285735]:                "/dev/loop3"
Feb  2 05:18:40 np0005604790 strange_kepler[285735]:            ],
Feb  2 05:18:40 np0005604790 strange_kepler[285735]:            "lv_name": "ceph_lv0",
Feb  2 05:18:40 np0005604790 strange_kepler[285735]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 05:18:40 np0005604790 strange_kepler[285735]:            "lv_size": "21470642176",
Feb  2 05:18:40 np0005604790 strange_kepler[285735]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d241d473-9fcb-5f74-b163-f1ca4454e7f1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fabfc705-a3af-416c-81a4-3fd4d777fb5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 05:18:40 np0005604790 strange_kepler[285735]:            "lv_uuid": "lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a",
Feb  2 05:18:40 np0005604790 strange_kepler[285735]:            "name": "ceph_lv0",
Feb  2 05:18:40 np0005604790 strange_kepler[285735]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 05:18:40 np0005604790 strange_kepler[285735]:            "tags": {
Feb  2 05:18:40 np0005604790 strange_kepler[285735]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 05:18:40 np0005604790 strange_kepler[285735]:                "ceph.block_uuid": "lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a",
Feb  2 05:18:40 np0005604790 strange_kepler[285735]:                "ceph.cephx_lockbox_secret": "",
Feb  2 05:18:40 np0005604790 strange_kepler[285735]:                "ceph.cluster_fsid": "d241d473-9fcb-5f74-b163-f1ca4454e7f1",
Feb  2 05:18:40 np0005604790 strange_kepler[285735]:                "ceph.cluster_name": "ceph",
Feb  2 05:18:40 np0005604790 strange_kepler[285735]:                "ceph.crush_device_class": "",
Feb  2 05:18:40 np0005604790 strange_kepler[285735]:                "ceph.encrypted": "0",
Feb  2 05:18:40 np0005604790 strange_kepler[285735]:                "ceph.osd_fsid": "fabfc705-a3af-416c-81a4-3fd4d777fb5f",
Feb  2 05:18:40 np0005604790 strange_kepler[285735]:                "ceph.osd_id": "1",
Feb  2 05:18:40 np0005604790 strange_kepler[285735]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 05:18:40 np0005604790 strange_kepler[285735]:                "ceph.type": "block",
Feb  2 05:18:40 np0005604790 strange_kepler[285735]:                "ceph.vdo": "0",
Feb  2 05:18:40 np0005604790 strange_kepler[285735]:                "ceph.with_tpm": "0"
Feb  2 05:18:40 np0005604790 strange_kepler[285735]:            },
Feb  2 05:18:40 np0005604790 strange_kepler[285735]:            "type": "block",
Feb  2 05:18:40 np0005604790 strange_kepler[285735]:            "vg_name": "ceph_vg0"
Feb  2 05:18:40 np0005604790 strange_kepler[285735]:        }
Feb  2 05:18:40 np0005604790 strange_kepler[285735]:    ]
Feb  2 05:18:40 np0005604790 strange_kepler[285735]: }
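The strange_kepler payload above is a JSON document, by all appearances `ceph-volume lvm list --format json` output keyed by OSD id, split across journal lines behind a syslog prefix. A sketch that strips the prefix (assuming the `strange_kepler[...]:` form shown above) and queries the result:

```python
import json
import re

# Syslog prefix as it appears above: "Feb  2 05:18:40 np0005604790 strange_kepler[285735]: "
PREFIX_RE = re.compile(r"^\w+ +\d+ [\d:]+ \S+ strange_kepler\[\d+\]: ?")

def reassemble_json(lines):
    """Strip the syslog prefix from each container log line and parse the payload."""
    payload = "\n".join(PREFIX_RE.sub("", ln) for ln in lines)
    return json.loads(payload)

# Hypothetical usage against the lines above:
# inventory = reassemble_json(section_lines)
# for osd_id, lvs in inventory.items():
#     for lv in lvs:
#         print(osd_id, lv["lv_path"], lv["tags"]["ceph.osd_fsid"], lv["devices"])
```

Against this block it yields OSD 1 on /dev/ceph_vg0/ceph_lv0, backed by /dev/loop3, with osd_fsid fabfc705-a3af-416c-81a4-3fd4d777fb5f.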
Feb  2 05:18:40 np0005604790 systemd[1]: libpod-eb68812ea00aaf19d6c3acf03f898a0c4f3e0c5292091d8563a10e49b80be0f9.scope: Deactivated successfully.
Feb  2 05:18:40 np0005604790 podman[285744]: 2026-02-02 10:18:40.647067761 +0000 UTC m=+0.028698522 container died eb68812ea00aaf19d6c3acf03f898a0c4f3e0c5292091d8563a10e49b80be0f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 05:18:40 np0005604790 systemd[1]: var-lib-containers-storage-overlay-42f19ec0a4d880610027bac8acc4aef48584801449fda7a41db3cef4588c4d2e-merged.mount: Deactivated successfully.
Feb  2 05:18:40 np0005604790 podman[285744]: 2026-02-02 10:18:40.686474356 +0000 UTC m=+0.068105077 container remove eb68812ea00aaf19d6c3acf03f898a0c4f3e0c5292091d8563a10e49b80be0f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_kepler, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Feb  2 05:18:40 np0005604790 systemd[1]: libpod-conmon-eb68812ea00aaf19d6c3acf03f898a0c4f3e0c5292091d8563a10e49b80be0f9.scope: Deactivated successfully.
Feb  2 05:18:40 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:18:40 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:18:40 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:18:40.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
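Each radosgw request in this log produces three lines: a start marker, a done marker carrying op status and latency, and a beast access-log line with the client address and request. The anonymous `HEAD / HTTP/1.0` probes arriving about every two seconds from 192.168.122.100 and .102 have the shape of load-balancer health checks. A sketch for extracting the fields of the beast line:

```python
import re

# Parses lines like:
#   beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:18:40.953 +0000]
#   "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
BEAST_RE = re.compile(
    r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) \[(?P<ts>[^\]]+)\] '
    r'"(?P<request>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) .* latency=(?P<latency>[\d.]+)s'
)

def parse_beast(line):
    """Return the beast access-log fields as a dict, or None if the line does not match."""
    m = BEAST_RE.search(line)
    return m.groupdict() if m else None
```

On the line above it returns ip='192.168.122.100', request='HEAD / HTTP/1.0', status='200', latency='0.000000000'.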
Feb  2 05:18:41 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:18:40 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:18:41 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:18:41 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:18:41 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:18:41 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:18:41 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:18:41 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
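The four ganesha lines above form one cycle: the server (re)enters a 90-second grace period, reloads client info from the RADOS backend, finds zero clients with reclaim pending (clid count(0)), and logs the raw return of rados_cluster_grace_enforcing (-45, a negative status the log does not decode). The same cycle repeats roughly every five seconds below. A small sketch to count grace re-entries when skimming a journal slice like this one:

```python
import re

# Matches "... nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90"
GRACE_RE = re.compile(r"nfs_start_grace .*IN GRACE, duration (\d+)")

def grace_entries(lines):
    """Return the grace duration for each re-entry found in the slice."""
    return [int(m.group(1)) for ln in lines if (m := GRACE_RE.search(ln))]
```

For this slice it returns one 90 per re-entry, six in all.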
Feb  2 05:18:41 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:18:41 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:18:41 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:18:41.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:18:41 np0005604790 podman[285851]: 2026-02-02 10:18:41.261689205 +0000 UTC m=+0.037911567 container create b14888aeddc7b87e3c4a383359d83a7f49682d618b79f6ba5a91e07ec7391195 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_yonath, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Feb  2 05:18:41 np0005604790 systemd[1]: Started libpod-conmon-b14888aeddc7b87e3c4a383359d83a7f49682d618b79f6ba5a91e07ec7391195.scope.
Feb  2 05:18:41 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:18:41 np0005604790 podman[285851]: 2026-02-02 10:18:41.322924679 +0000 UTC m=+0.099147101 container init b14888aeddc7b87e3c4a383359d83a7f49682d618b79f6ba5a91e07ec7391195 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_yonath, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 05:18:41 np0005604790 podman[285851]: 2026-02-02 10:18:41.327837889 +0000 UTC m=+0.104060291 container start b14888aeddc7b87e3c4a383359d83a7f49682d618b79f6ba5a91e07ec7391195 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_yonath, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 05:18:41 np0005604790 podman[285851]: 2026-02-02 10:18:41.33087402 +0000 UTC m=+0.107096422 container attach b14888aeddc7b87e3c4a383359d83a7f49682d618b79f6ba5a91e07ec7391195 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_yonath, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 05:18:41 np0005604790 strange_yonath[285867]: 167 167
Feb  2 05:18:41 np0005604790 systemd[1]: libpod-b14888aeddc7b87e3c4a383359d83a7f49682d618b79f6ba5a91e07ec7391195.scope: Deactivated successfully.
Feb  2 05:18:41 np0005604790 conmon[285867]: conmon b14888aeddc7b87e3c4a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b14888aeddc7b87e3c4a383359d83a7f49682d618b79f6ba5a91e07ec7391195.scope/container/memory.events
Feb  2 05:18:41 np0005604790 podman[285851]: 2026-02-02 10:18:41.33501561 +0000 UTC m=+0.111237982 container died b14888aeddc7b87e3c4a383359d83a7f49682d618b79f6ba5a91e07ec7391195 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Feb  2 05:18:41 np0005604790 podman[285851]: 2026-02-02 10:18:41.249381838 +0000 UTC m=+0.025604220 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:18:41 np0005604790 systemd[1]: var-lib-containers-storage-overlay-c61b59e87405185b3fe57cb53231c40de98774cb59c2b21afa6172f5fcb0a531-merged.mount: Deactivated successfully.
Feb  2 05:18:41 np0005604790 podman[285851]: 2026-02-02 10:18:41.372934776 +0000 UTC m=+0.149157158 container remove b14888aeddc7b87e3c4a383359d83a7f49682d618b79f6ba5a91e07ec7391195 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_yonath, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb  2 05:18:41 np0005604790 systemd[1]: libpod-conmon-b14888aeddc7b87e3c4a383359d83a7f49682d618b79f6ba5a91e07ec7391195.scope: Deactivated successfully.
Feb  2 05:18:41 np0005604790 podman[285893]: 2026-02-02 10:18:41.544400164 +0000 UTC m=+0.070031879 container create dc3bc652ecc673aa83e7c70fe14327570a1c2c29714e7d1dad6bdbc1fc732c5b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_gagarin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Feb  2 05:18:41 np0005604790 nova_compute[252672]: 2026-02-02 10:18:41.573 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:18:41 np0005604790 systemd[1]: Started libpod-conmon-dc3bc652ecc673aa83e7c70fe14327570a1c2c29714e7d1dad6bdbc1fc732c5b.scope.
Feb  2 05:18:41 np0005604790 podman[285893]: 2026-02-02 10:18:41.505401249 +0000 UTC m=+0.031033014 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:18:41 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:18:41 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf542418960ecf48708fdcf7f88dbf220695d252c3c675e5e087501d41208bf9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 05:18:41 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf542418960ecf48708fdcf7f88dbf220695d252c3c675e5e087501d41208bf9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 05:18:41 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf542418960ecf48708fdcf7f88dbf220695d252c3c675e5e087501d41208bf9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 05:18:41 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf542418960ecf48708fdcf7f88dbf220695d252c3c675e5e087501d41208bf9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 05:18:41 np0005604790 podman[285893]: 2026-02-02 10:18:41.634238167 +0000 UTC m=+0.159869962 container init dc3bc652ecc673aa83e7c70fe14327570a1c2c29714e7d1dad6bdbc1fc732c5b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_gagarin, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Feb  2 05:18:41 np0005604790 podman[285893]: 2026-02-02 10:18:41.642428934 +0000 UTC m=+0.168060689 container start dc3bc652ecc673aa83e7c70fe14327570a1c2c29714e7d1dad6bdbc1fc732c5b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_gagarin, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 05:18:41 np0005604790 podman[285893]: 2026-02-02 10:18:41.655257434 +0000 UTC m=+0.180889489 container attach dc3bc652ecc673aa83e7c70fe14327570a1c2c29714e7d1dad6bdbc1fc732c5b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_gagarin, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Feb  2 05:18:41 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1181: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 521 B/s rd, 0 op/s
Feb  2 05:18:42 np0005604790 lvm[285986]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 05:18:42 np0005604790 lvm[285986]: VG ceph_vg0 finished
Feb  2 05:18:42 np0005604790 adoring_gagarin[285910]: {}
Feb  2 05:18:42 np0005604790 systemd[1]: libpod-dc3bc652ecc673aa83e7c70fe14327570a1c2c29714e7d1dad6bdbc1fc732c5b.scope: Deactivated successfully.
Feb  2 05:18:42 np0005604790 podman[285893]: 2026-02-02 10:18:42.286119919 +0000 UTC m=+0.811751634 container died dc3bc652ecc673aa83e7c70fe14327570a1c2c29714e7d1dad6bdbc1fc732c5b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_gagarin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 05:18:42 np0005604790 systemd[1]: var-lib-containers-storage-overlay-bf542418960ecf48708fdcf7f88dbf220695d252c3c675e5e087501d41208bf9-merged.mount: Deactivated successfully.
Feb  2 05:18:42 np0005604790 podman[285893]: 2026-02-02 10:18:42.324311982 +0000 UTC m=+0.849943697 container remove dc3bc652ecc673aa83e7c70fe14327570a1c2c29714e7d1dad6bdbc1fc732c5b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_gagarin, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Feb  2 05:18:42 np0005604790 systemd[1]: libpod-conmon-dc3bc652ecc673aa83e7c70fe14327570a1c2c29714e7d1dad6bdbc1fc732c5b.scope: Deactivated successfully.
Feb  2 05:18:42 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 05:18:42 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:18:42 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 05:18:42 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:18:42 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
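As a sanity check on the `_set_new_cache_sizes` line above, interpreting the four fields as byte counts (an assumption; the mon does not label the units): the three allocations sum to just under the cache_size target.

```python
# Values copied verbatim from the _set_new_cache_sizes line above.
cache_size = 1020054731             # ~0.95 GiB target
inc_alloc = full_alloc = 348127232  # 332 MiB each (332 * 2**20)
kv_alloc = 318767104                # 304 MiB (304 * 2**20)
assert inc_alloc + full_alloc + kv_alloc == 1015021568 <= cache_size
```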
Feb  2 05:18:42 np0005604790 podman[286051]: 2026-02-02 10:18:42.665607635 +0000 UTC m=+0.152614249 container health_status e39122d9482da8df802204ab6c35fb7c982874580968140e6f06bdfc8eefae36 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
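The health_status event above embeds the container's full config_data as a Python-literal dict (single-quoted strings, bare True), not JSON, so `ast.literal_eval` can recover it once the balanced braces are isolated. A sketch with a small brace scan, since the dict nests further dicts:

```python
import ast

def extract_config_data(line):
    """Recover the config_data={...} Python-literal dict from a podman health_status line."""
    start = line.index("config_data=") + len("config_data=")
    depth = 0
    for i, ch in enumerate(line[start:], start):
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth == 0:
                return ast.literal_eval(line[start : i + 1])
    raise ValueError("unbalanced config_data braces")
```

For the ovn_controller line this yields, for example, cfg['healthcheck']['test'] == '/openstack/healthcheck' and the full volume list.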
Feb  2 05:18:42 np0005604790 nova_compute[252672]: 2026-02-02 10:18:42.805 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:18:42 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:18:42 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:18:42 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:18:42 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:18:42 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:18:42.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:18:43 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:18:43 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:18:43 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:18:43.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:18:43 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1182: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 781 B/s rd, 0 op/s
Feb  2 05:18:44 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:18:44] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Feb  2 05:18:44 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:18:44] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Feb  2 05:18:44 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:18:44 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:18:44 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:18:44.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:18:45 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:18:45 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:18:45 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:18:45.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:18:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:18:45.391 165364 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 05:18:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:18:45.392 165364 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 05:18:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:18:45.392 165364 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 05:18:45 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1183: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 521 B/s rd, 0 op/s
Feb  2 05:18:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:18:45 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:18:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:18:45 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:18:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:18:45 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:18:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:18:46 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:18:46 np0005604790 nova_compute[252672]: 2026-02-02 10:18:46.610 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:18:46 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:18:46 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:18:46 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:18:46.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:18:47 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:18:47 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:18:47 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:18:47.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:18:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 05:18:47 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 05:18:47 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:18:47.216Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:18:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:18:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:18:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:18:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:18:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:18:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:18:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:18:47 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1184: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 521 B/s rd, 0 op/s
Feb  2 05:18:47 np0005604790 nova_compute[252672]: 2026-02-02 10:18:47.840 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:18:48 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:18:48.874Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:18:48 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:18:48 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:18:48 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:18:48.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:18:49 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:18:49 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:18:49 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:18:49.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:18:49 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1185: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:18:50 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:18:50 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:18:50 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:18:50.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:18:51 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:18:50 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:18:51 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:18:50 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:18:51 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:18:50 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:18:51 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:18:51 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:18:51 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:18:51 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:18:51 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:18:51.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:18:51 np0005604790 nova_compute[252672]: 2026-02-02 10:18:51.614 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:18:51 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1186: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:18:52 np0005604790 podman[286088]: 2026-02-02 10:18:52.328739396 +0000 UTC m=+0.052456822 container health_status 29bea7eb4976451e3dfdb1e9a3aba30b10e64cbce131bf8d09548b4a224f8ee8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Feb  2 05:18:52 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:18:52 np0005604790 nova_compute[252672]: 2026-02-02 10:18:52.841 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:18:52 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:18:52 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:18:52 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:18:52.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:18:53 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:18:53 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:18:53 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:18:53.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:18:53 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1187: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:18:54 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:18:54] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Feb  2 05:18:54 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:18:54] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Feb  2 05:18:54 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:18:54 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:18:54 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:18:54.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:18:55 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:18:55 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:18:55 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:18:55.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:18:55 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1188: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:18:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:18:55 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:18:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:18:55 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:18:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:18:55 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:18:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:18:56 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:18:56 np0005604790 nova_compute[252672]: 2026-02-02 10:18:56.617 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:18:56 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:18:56 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:18:56 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:18:56.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:18:57 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:18:57 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:18:57 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:18:57.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:18:57 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:18:57.217Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:18:57 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:18:57 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1189: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:18:57 np0005604790 nova_compute[252672]: 2026-02-02 10:18:57.843 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:18:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:18:58.875Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:18:58 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:18:58 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:18:58 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:18:58.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:18:59 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:18:59 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:18:59 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:18:59.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:18:59 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1190: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:19:00 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:19:00 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:19:00 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:19:00.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:19:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:19:00 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:19:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:19:00 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:19:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:19:00 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:19:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:19:01 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:19:01 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:19:01 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:19:01 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:19:01.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:19:01 np0005604790 nova_compute[252672]: 2026-02-02 10:19:01.621 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:19:01 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1191: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:19:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 05:19:02 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
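
The mgr polls the mon for the OSD blocklist every five seconds, and each poll is logged twice: once as the dispatched mon_command and once on the audit channel. The same query can be issued by hand; a sketch assuming an admin keyring is available on the host:

    import json, subprocess

    # CLI equivalent of the mon_command dispatched above.
    out = subprocess.check_output(
        ["ceph", "osd", "blocklist", "ls", "--format", "json"])
    print(json.loads(out))  # an empty list when nothing is blocklisted
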
Feb  2 05:19:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:19:02 np0005604790 nova_compute[252672]: 2026-02-02 10:19:02.846 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:19:02 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:19:02 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:19:02 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:19:02.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:19:03 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:19:03 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:19:03 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:19:03.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:19:03 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1192: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:19:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:19:04] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Feb  2 05:19:04 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:19:04] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
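
Prometheus scrapes the mgr's prometheus module every ten seconds, and each scrape is logged both by the container unit and by cherrypy inside ceph-mgr. The module listens on port 9283 by default; the port is not visible in these lines, so treat it as an assumption:

    import urllib.request

    # 9283 is the prometheus module's default port; adjust if overridden.
    with urllib.request.urlopen("http://192.168.122.100:9283/metrics") as r:
        body = r.read().decode()
    print(len(body), "bytes")  # the scrapes above returned 48456
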
Feb  2 05:19:04 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:19:04 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:19:04 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:19:04.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:19:05 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:19:05 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:19:05 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:19:05.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:19:05 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1193: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:19:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:19:05 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:19:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:19:05 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:19:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:19:05 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:19:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:19:06 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:19:06 np0005604790 nova_compute[252672]: 2026-02-02 10:19:06.667 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:19:06 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:19:06 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:19:06 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:19:06.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:19:07 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:19:07 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:19:07 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:19:07.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:19:07 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:19:07.218Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb  2 05:19:07 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:19:07.218Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb  2 05:19:07 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:19:07.218Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
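
Alertmanager is fanning an alert out to the ceph-dashboard webhook receiver on the other two controller-plane hosts; compute-1 and compute-2 fail with dial timeouts or context-deadline errors, so each dispatch is retried twice and then dropped. A reachability probe against one of the failing endpoints, using a made-up minimal body rather than a real Alertmanager webhook payload:

    import json, urllib.request

    url = "http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver"
    req = urllib.request.Request(
        url, data=json.dumps({"status": "firing", "alerts": []}).encode(),
        headers={"Content-Type": "application/json"})
    try:
        urllib.request.urlopen(req, timeout=5)
    except Exception as exc:
        print("unreachable:", exc)  # expect the same timeout seen above
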
Feb  2 05:19:07 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:19:07 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1194: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:19:07 np0005604790 nova_compute[252672]: 2026-02-02 10:19:07.847 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:19:08 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:19:08.876Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:19:08 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:19:08 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:19:08 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:19:08.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:19:09 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:19:09 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:19:09 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:19:09.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:19:09 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1195: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:19:10 np0005604790 ceph-osd[82705]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb  2 05:19:10 np0005604790 ceph-osd[82705]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 2400.1 total, 600.0 interval
Cumulative writes: 10K writes, 37K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
Cumulative WAL: 10K writes, 2786 syncs, 3.68 writes per sync, written: 0.03 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1420 writes, 3779 keys, 1420 commit groups, 1.0 writes per commit group, ingest: 2.63 MB, 0.00 MB/s
Interval WAL: 1420 writes, 630 syncs, 2.25 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
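
The dump's ratios are internally consistent: 3.68 WAL writes per sync times 2786 syncs recovers the rounded "10K" cumulative writes, and the interval figure is 1420 / 630. A quick check:

    print(round(3.68 * 2786))    # ~10252, the exact count behind the rounded "10K"
    print(round(1420 / 630, 2))  # 2.25 writes per sync, as reported
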
Feb  2 05:19:10 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:19:10 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:19:10 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:19:10.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:19:11 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:19:10 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:19:11 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:19:10 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:19:11 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:19:10 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:19:11 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:19:11 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:19:11 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:19:11 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:19:11 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:19:11.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:19:11 np0005604790 nova_compute[252672]: 2026-02-02 10:19:11.671 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:19:11 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1196: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:19:12 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:19:12 np0005604790 nova_compute[252672]: 2026-02-02 10:19:12.850 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:19:12 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:19:12 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:19:12 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:19:12.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:19:13 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:19:13 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:19:13 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:19:13.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:19:13 np0005604790 podman[286153]: 2026-02-02 10:19:13.377339796 +0000 UTC m=+0.093676636 container health_status e39122d9482da8df802204ab6c35fb7c982874580968140e6f06bdfc8eefae36 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
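
This podman event records a periodic healthcheck: the ovn_controller container ran its configured '/openstack/healthcheck' test and reported healthy with a zero failing streak. The same check can be triggered on demand; a sketch assuming a recent podman:

    import subprocess

    # Exit code 0 means the check passed, matching health_status=healthy above.
    rc = subprocess.call(["podman", "healthcheck", "run", "ovn_controller"])
    print("healthy" if rc == 0 else "unhealthy")
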
Feb  2 05:19:13 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1197: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:19:14 np0005604790 nova_compute[252672]: 2026-02-02 10:19:14.298 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:19:14 np0005604790 nova_compute[252672]: 2026-02-02 10:19:14.379 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 05:19:14 np0005604790 nova_compute[252672]: 2026-02-02 10:19:14.380 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 05:19:14 np0005604790 nova_compute[252672]: 2026-02-02 10:19:14.380 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 05:19:14 np0005604790 nova_compute[252672]: 2026-02-02 10:19:14.381 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb  2 05:19:14 np0005604790 nova_compute[252672]: 2026-02-02 10:19:14.381 252676 DEBUG oslo_concurrency.processutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb  2 05:19:14 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 05:19:14 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/367606097' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb  2 05:19:14 np0005604790 nova_compute[252672]: 2026-02-02 10:19:14.840 252676 DEBUG oslo_concurrency.processutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
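
For the resource audit, nova shells out to ceph df with the openstack client id; the 0.459 s round trip covers the mon dispatch logged just above. The totals nova derives come from the top-level stats block of the JSON; a sketch assuming the standard ceph df output layout:

    import json, subprocess

    out = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(out)["stats"]
    print(stats["total_avail_bytes"] / 2**30, "GiB free")  # ~60 GiB here
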
Feb  2 05:19:14 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:19:14] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Feb  2 05:19:14 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:19:14] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Feb  2 05:19:14 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:19:14 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:19:14 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:19:14.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:19:15 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:19:15 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:19:15 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:19:15.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:19:15 np0005604790 nova_compute[252672]: 2026-02-02 10:19:15.221 252676 WARNING nova.virt.libvirt.driver [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb  2 05:19:15 np0005604790 nova_compute[252672]: 2026-02-02 10:19:15.222 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4502MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb  2 05:19:15 np0005604790 nova_compute[252672]: 2026-02-02 10:19:15.223 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 05:19:15 np0005604790 nova_compute[252672]: 2026-02-02 10:19:15.223 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 05:19:15 np0005604790 nova_compute[252672]: 2026-02-02 10:19:15.403 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb  2 05:19:15 np0005604790 nova_compute[252672]: 2026-02-02 10:19:15.404 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb  2 05:19:15 np0005604790 nova_compute[252672]: 2026-02-02 10:19:15.428 252676 DEBUG oslo_concurrency.processutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb  2 05:19:15 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1198: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:19:15 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 05:19:15 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2247780628' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb  2 05:19:15 np0005604790 nova_compute[252672]: 2026-02-02 10:19:15.922 252676 DEBUG oslo_concurrency.processutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 05:19:15 np0005604790 nova_compute[252672]: 2026-02-02 10:19:15.929 252676 DEBUG nova.compute.provider_tree [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Inventory has not changed in ProviderTree for provider: 9e3db6fc-2145-4f13-bc7c-f7ae57d4e004 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb  2 05:19:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:19:15 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:19:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:19:15 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:19:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:19:15 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:19:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:19:16 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:19:16 np0005604790 nova_compute[252672]: 2026-02-02 10:19:16.051 252676 DEBUG nova.scheduler.client.report [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Inventory has not changed for provider 9e3db6fc-2145-4f13-bc7c-f7ae57d4e004 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
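
The inventory reported to placement follows the usual capacity rule, capacity = (total - reserved) * allocation_ratio, which is what the scheduler tests allocations against:

    # Schedulable capacity implied by the inventory above.
    inv = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, v in inv.items():
        print(rc, round((v["total"] - v["reserved"]) * v["allocation_ratio"], 1))
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2
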
Feb  2 05:19:16 np0005604790 nova_compute[252672]: 2026-02-02 10:19:16.053 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb  2 05:19:16 np0005604790 nova_compute[252672]: 2026-02-02 10:19:16.053 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.830s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 05:19:16 np0005604790 nova_compute[252672]: 2026-02-02 10:19:16.161 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:19:16 np0005604790 nova_compute[252672]: 2026-02-02 10:19:16.373 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:19:16 np0005604790 nova_compute[252672]: 2026-02-02 10:19:16.710 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:19:16 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:19:16 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:19:16 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:19:16.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:19:17 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:19:17 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:19:17 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:19:17.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:19:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] Optimize plan auto_2026-02-02_10:19:17
Feb  2 05:19:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 05:19:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] do_upmap
Feb  2 05:19:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] pools ['backups', 'default.rgw.meta', 'volumes', '.rgw.root', 'cephfs.cephfs.meta', 'vms', '.nfs', 'images', 'default.rgw.control', '.mgr', 'default.rgw.log', 'cephfs.cephfs.data']
Feb  2 05:19:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 05:19:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 05:19:17 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 05:19:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:19:17.219Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:19:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:19:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:19:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:19:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:19:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:19:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:19:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 05:19:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 05:19:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 05:19:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 05:19:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 05:19:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:19:17 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1199: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:19:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 05:19:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:19:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 05:19:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:19:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb  2 05:19:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:19:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:19:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:19:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:19:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:19:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Feb  2 05:19:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:19:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Feb  2 05:19:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:19:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:19:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:19:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 05:19:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:19:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Feb  2 05:19:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:19:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:19:17 np0005604790 nova_compute[252672]: 2026-02-02 10:19:17.905 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:19:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:19:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 05:19:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:19:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
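
Each pg target above is the pool's fraction of raw capacity times its bias times the cluster-wide PG budget (mon_target_pg_per_osd times the number of OSDs). Assuming the default of 100 per OSD and the three OSDs implied by this 60 GiB cluster, the budget is 300, which reproduces the logged values exactly:

    budget = 100 * 3  # mon_target_pg_per_osd * assumed OSD count
    print(0.000665858301588852 * 1.0 * budget)   # 0.19975749... ('images')
    print(5.087256625643029e-07 * 4.0 * budget)  # 0.00061047... (cephfs meta)
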
Feb  2 05:19:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 05:19:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 05:19:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 05:19:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 05:19:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 05:19:18 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:19:18.876Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:19:18 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:19:18 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:19:18 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:19:18.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:19:19 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:19:19 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:19:19 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:19:19.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:19:19 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1200: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:19:20 np0005604790 nova_compute[252672]: 2026-02-02 10:19:20.281 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:19:20 np0005604790 nova_compute[252672]: 2026-02-02 10:19:20.282 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb  2 05:19:20 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:19:20 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:19:20 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:19:20.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:19:21 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:19:20 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:19:21 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:19:20 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:19:21 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:19:20 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:19:21 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:19:21 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:19:21 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:19:21 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:19:21 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:19:21.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:19:21 np0005604790 nova_compute[252672]: 2026-02-02 10:19:21.282 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:19:21 np0005604790 nova_compute[252672]: 2026-02-02 10:19:21.282 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb  2 05:19:21 np0005604790 nova_compute[252672]: 2026-02-02 10:19:21.282 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb  2 05:19:21 np0005604790 nova_compute[252672]: 2026-02-02 10:19:21.304 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb  2 05:19:21 np0005604790 nova_compute[252672]: 2026-02-02 10:19:21.714 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:19:21 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1201: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:19:22 np0005604790 nova_compute[252672]: 2026-02-02 10:19:22.281 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:19:22 np0005604790 podman[286258]: 2026-02-02 10:19:22.527152822 +0000 UTC m=+0.047464220 container health_status 29bea7eb4976451e3dfdb1e9a3aba30b10e64cbce131bf8d09548b4a224f8ee8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb  2 05:19:22 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:19:22 np0005604790 nova_compute[252672]: 2026-02-02 10:19:22.906 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:19:23 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:19:23 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:19:23 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:19:22.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:19:23 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:19:23 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:19:23 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:19:23.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:19:23 np0005604790 nova_compute[252672]: 2026-02-02 10:19:23.282 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:19:23 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1202: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:19:24 np0005604790 nova_compute[252672]: 2026-02-02 10:19:24.281 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:19:24 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:19:24] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Feb  2 05:19:24 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:19:24] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Feb  2 05:19:25 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:19:25 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:19:25 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:19:25.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:19:25 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:19:25 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:19:25 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:19:25.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:19:25 np0005604790 nova_compute[252672]: 2026-02-02 10:19:25.281 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:19:25 np0005604790 nova_compute[252672]: 2026-02-02 10:19:25.630 252676 DEBUG oslo_concurrency.processutils [None req-c44df769-14b9-4ff4-8b94-fd29c4457052 41d09654a7d04d60a23411cf80fe1f98 823d3e7e313a44e9a50531e3fef22a1b - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb  2 05:19:25 np0005604790 nova_compute[252672]: 2026-02-02 10:19:25.660 252676 DEBUG oslo_concurrency.processutils [None req-c44df769-14b9-4ff4-8b94-fd29c4457052 41d09654a7d04d60a23411cf80fe1f98 823d3e7e313a44e9a50531e3fef22a1b - - default default] CMD "env LANG=C uptime" returned: 0 in 0.030s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 05:19:25 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1203: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:19:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:19:26 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:19:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:19:26 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:19:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:19:26 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:19:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:19:26 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:19:26 np0005604790 nova_compute[252672]: 2026-02-02 10:19:26.278 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:19:26 np0005604790 nova_compute[252672]: 2026-02-02 10:19:26.716 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:19:26 np0005604790 ceph-mon[74489]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #72. Immutable memtables: 0.
Feb  2 05:19:26 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:19:26.936463) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb  2 05:19:26 np0005604790 ceph-mon[74489]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 72
Feb  2 05:19:26 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770027566936535, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 1167, "num_deletes": 251, "total_data_size": 2110110, "memory_usage": 2152432, "flush_reason": "Manual Compaction"}
Feb  2 05:19:26 np0005604790 ceph-mon[74489]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #73: started
Feb  2 05:19:26 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770027566964289, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 73, "file_size": 2057326, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 33443, "largest_seqno": 34608, "table_properties": {"data_size": 2051765, "index_size": 2956, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 11972, "raw_average_key_size": 19, "raw_value_size": 2040593, "raw_average_value_size": 3395, "num_data_blocks": 130, "num_entries": 601, "num_filter_entries": 601, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770027458, "oldest_key_time": 1770027458, "file_creation_time": 1770027566, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "07840aea-639a-4cd3-a598-1774a042b57b", "db_session_id": "W2PO4QU95YGVZQBG6TZ2", "orig_file_number": 73, "seqno_to_time_mapping": "N/A"}}
Feb  2 05:19:26 np0005604790 ceph-mon[74489]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 27890 microseconds, and 5558 cpu microseconds.
Feb  2 05:19:26 np0005604790 ceph-mon[74489]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 05:19:26 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:19:26.964348) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #73: 2057326 bytes OK
Feb  2 05:19:26 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:19:26.964370) [db/memtable_list.cc:519] [default] Level-0 commit table #73 started
Feb  2 05:19:26 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:19:26.969070) [db/memtable_list.cc:722] [default] Level-0 commit table #73: memtable #1 done
Feb  2 05:19:26 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:19:26.969089) EVENT_LOG_v1 {"time_micros": 1770027566969083, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb  2 05:19:26 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:19:26.969108) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb  2 05:19:26 np0005604790 ceph-mon[74489]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 2104930, prev total WAL file size 2104930, number of live WAL files 2.
Feb  2 05:19:26 np0005604790 ceph-mon[74489]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000069.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 05:19:26 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:19:26.969770) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Feb  2 05:19:26 np0005604790 ceph-mon[74489]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb  2 05:19:26 np0005604790 ceph-mon[74489]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [73(2009KB)], [71(13MB)]
Feb  2 05:19:26 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770027566969820, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [73], "files_L6": [71], "score": -1, "input_data_size": 16554166, "oldest_snapshot_seqno": -1}
Feb  2 05:19:27 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:19:27 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:19:27 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:19:27.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:19:27 np0005604790 ceph-mon[74489]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #74: 6538 keys, 14507692 bytes, temperature: kUnknown
Feb  2 05:19:27 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770027567089348, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 74, "file_size": 14507692, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14463575, "index_size": 26661, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16389, "raw_key_size": 171903, "raw_average_key_size": 26, "raw_value_size": 14345193, "raw_average_value_size": 2194, "num_data_blocks": 1052, "num_entries": 6538, "num_filter_entries": 6538, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770025063, "oldest_key_time": 0, "file_creation_time": 1770027566, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "07840aea-639a-4cd3-a598-1774a042b57b", "db_session_id": "W2PO4QU95YGVZQBG6TZ2", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Feb  2 05:19:27 np0005604790 ceph-mon[74489]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 05:19:27 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:19:27.089721) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 14507692 bytes
Feb  2 05:19:27 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:19:27.093123) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 138.3 rd, 121.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 13.8 +0.0 blob) out(13.8 +0.0 blob), read-write-amplify(15.1) write-amplify(7.1) OK, records in: 7054, records dropped: 516 output_compression: NoCompression
Feb  2 05:19:27 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:19:27.093154) EVENT_LOG_v1 {"time_micros": 1770027567093141, "job": 40, "event": "compaction_finished", "compaction_time_micros": 119695, "compaction_time_cpu_micros": 38790, "output_level": 6, "num_output_files": 1, "total_output_size": 14507692, "num_input_records": 7054, "num_output_records": 6538, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb  2 05:19:27 np0005604790 ceph-mon[74489]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000073.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 05:19:27 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770027567093722, "job": 40, "event": "table_file_deletion", "file_number": 73}
Feb  2 05:19:27 np0005604790 ceph-mon[74489]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 05:19:27 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770027567096029, "job": 40, "event": "table_file_deletion", "file_number": 71}
Feb  2 05:19:27 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:19:26.969667) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 05:19:27 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:19:27.096099) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 05:19:27 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:19:27.096105) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 05:19:27 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:19:27.096109) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 05:19:27 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:19:27.096112) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 05:19:27 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:19:27.096114) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 05:19:27 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:19:27 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:19:27 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:19:27.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:19:27 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:19:27.220Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:19:27 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:19:27 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1204: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:19:27 np0005604790 nova_compute[252672]: 2026-02-02 10:19:27.908 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:19:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:19:28.878Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:19:29 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:19:29 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:19:29 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:19:29.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:19:29 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:19:29 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:19:29 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:19:29.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:19:29 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1205: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:19:31 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:19:30 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:19:31 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:19:30 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:19:31 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:19:30 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:19:31 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:19:31 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:19:31 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:19:31 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:19:31 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:19:31.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:19:31 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:19:31 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:19:31 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:19:31.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:19:31 np0005604790 nova_compute[252672]: 2026-02-02 10:19:31.720 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:19:31 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1206: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:19:31 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:19:31.891 165364 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '66:4f:4d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '4a:a7:f3:61:65:15'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb  2 05:19:31 np0005604790 nova_compute[252672]: 2026-02-02 10:19:31.891 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:19:31 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:19:31.893 165364 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb  2 05:19:32 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 05:19:32 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 05:19:32 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:19:32 np0005604790 nova_compute[252672]: 2026-02-02 10:19:32.955 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:19:33 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:19:33 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:19:33 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:19:33.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:19:33 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:19:33 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:19:33 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:19:33.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:19:33 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1207: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:19:34 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:19:34] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Feb  2 05:19:34 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:19:34] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Feb  2 05:19:35 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:19:35 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:19:35 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:19:35.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:19:35 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:19:35 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:19:35 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:19:35.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:19:35 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1208: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:19:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:19:35 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:19:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:19:36 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:19:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:19:36 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:19:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:19:36 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:19:36 np0005604790 nova_compute[252672]: 2026-02-02 10:19:36.766 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:19:36 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:19:36.896 165364 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=031ca08d-19ea-44b4-b1bd-33ab088eb6a6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb  2 05:19:37 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:19:37 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:19:37 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:19:37.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:19:37 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:19:37 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:19:37 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:19:37.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:19:37 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:19:37.221Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:19:37 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:19:37 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1209: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:19:37 np0005604790 nova_compute[252672]: 2026-02-02 10:19:37.995 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:19:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:19:38.879Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:19:39 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:19:39 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:19:39 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:19:39.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:19:39 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:19:39 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:19:39 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:19:39.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:19:39 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1210: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:19:41 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:19:40 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:19:41 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:19:40 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:19:41 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:19:40 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:19:41 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:19:41 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:19:41 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:19:41 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:19:41 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:19:41.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:19:41 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:19:41 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:19:41 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:19:41.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:19:41 np0005604790 nova_compute[252672]: 2026-02-02 10:19:41.770 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:19:41 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1211: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:19:42 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:19:43 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:19:43 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:19:43 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:19:43.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:19:43 np0005604790 nova_compute[252672]: 2026-02-02 10:19:43.035 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:19:43 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Feb  2 05:19:43 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:19:43 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Feb  2 05:19:43 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Feb  2 05:19:43 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:19:43 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Feb  2 05:19:43 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:19:43 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:19:43 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:19:43 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:19:43 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:19:43.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:19:43 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1212: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:19:43 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 05:19:43 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 05:19:43 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 05:19:43 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb  2 05:19:43 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1213: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 613 B/s rd, 0 op/s
Feb  2 05:19:43 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 05:19:43 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:19:43 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Feb  2 05:19:43 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:19:43 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 05:19:43 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb  2 05:19:43 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 05:19:43 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb  2 05:19:43 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 05:19:43 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 05:19:44 np0005604790 podman[286433]: 2026-02-02 10:19:44.029262324 +0000 UTC m=+0.105182321 container health_status e39122d9482da8df802204ab6c35fb7c982874580968140e6f06bdfc8eefae36 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Feb  2 05:19:44 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:19:44 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:19:44 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:19:44 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:19:44 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb  2 05:19:44 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:19:44 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:19:44 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb  2 05:19:44 np0005604790 podman[286529]: 2026-02-02 10:19:44.37402331 +0000 UTC m=+0.082003586 container create 9423665ae41f45142253ee035136ee1398c8176bf6fdf690ed36c7d791f2e2fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_hamilton, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb  2 05:19:44 np0005604790 systemd[1]: Started libpod-conmon-9423665ae41f45142253ee035136ee1398c8176bf6fdf690ed36c7d791f2e2fd.scope.
Feb  2 05:19:44 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:19:44 np0005604790 podman[286529]: 2026-02-02 10:19:44.348722079 +0000 UTC m=+0.056702395 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:19:44 np0005604790 podman[286529]: 2026-02-02 10:19:44.463223876 +0000 UTC m=+0.171204162 container init 9423665ae41f45142253ee035136ee1398c8176bf6fdf690ed36c7d791f2e2fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_hamilton, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Feb  2 05:19:44 np0005604790 podman[286529]: 2026-02-02 10:19:44.47278119 +0000 UTC m=+0.180761456 container start 9423665ae41f45142253ee035136ee1398c8176bf6fdf690ed36c7d791f2e2fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Feb  2 05:19:44 np0005604790 podman[286529]: 2026-02-02 10:19:44.478503012 +0000 UTC m=+0.186483288 container attach 9423665ae41f45142253ee035136ee1398c8176bf6fdf690ed36c7d791f2e2fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_hamilton, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Feb  2 05:19:44 np0005604790 eager_hamilton[286546]: 167 167
Feb  2 05:19:44 np0005604790 systemd[1]: libpod-9423665ae41f45142253ee035136ee1398c8176bf6fdf690ed36c7d791f2e2fd.scope: Deactivated successfully.
Feb  2 05:19:44 np0005604790 podman[286529]: 2026-02-02 10:19:44.480442613 +0000 UTC m=+0.188422879 container died 9423665ae41f45142253ee035136ee1398c8176bf6fdf690ed36c7d791f2e2fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_hamilton, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Feb  2 05:19:44 np0005604790 systemd[1]: var-lib-containers-storage-overlay-50493e7884520aa4d6fbfee50e44579c39ec6999b78b8d0c8afe81f62aefd4bb-merged.mount: Deactivated successfully.
Feb  2 05:19:44 np0005604790 podman[286529]: 2026-02-02 10:19:44.527004798 +0000 UTC m=+0.234985064 container remove 9423665ae41f45142253ee035136ee1398c8176bf6fdf690ed36c7d791f2e2fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_hamilton, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Feb  2 05:19:44 np0005604790 systemd[1]: libpod-conmon-9423665ae41f45142253ee035136ee1398c8176bf6fdf690ed36c7d791f2e2fd.scope: Deactivated successfully.
Feb  2 05:19:44 np0005604790 podman[286570]: 2026-02-02 10:19:44.697670795 +0000 UTC m=+0.045236641 container create 80c32b3f5360b0b3138f7122712acf9a6e96ded3cc2e0f34b9a98812a72eca75 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_lamarr, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb  2 05:19:44 np0005604790 systemd[1]: Started libpod-conmon-80c32b3f5360b0b3138f7122712acf9a6e96ded3cc2e0f34b9a98812a72eca75.scope.
Feb  2 05:19:44 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:19:44 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04faa54cc2ff7034a1b6642d609c003c1fade3900b4d71736139cc256c7c6a18/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 05:19:44 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04faa54cc2ff7034a1b6642d609c003c1fade3900b4d71736139cc256c7c6a18/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 05:19:44 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04faa54cc2ff7034a1b6642d609c003c1fade3900b4d71736139cc256c7c6a18/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 05:19:44 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04faa54cc2ff7034a1b6642d609c003c1fade3900b4d71736139cc256c7c6a18/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 05:19:44 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04faa54cc2ff7034a1b6642d609c003c1fade3900b4d71736139cc256c7c6a18/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 05:19:44 np0005604790 podman[286570]: 2026-02-02 10:19:44.678980659 +0000 UTC m=+0.026546535 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:19:44 np0005604790 podman[286570]: 2026-02-02 10:19:44.798303814 +0000 UTC m=+0.145869670 container init 80c32b3f5360b0b3138f7122712acf9a6e96ded3cc2e0f34b9a98812a72eca75 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_lamarr, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 05:19:44 np0005604790 podman[286570]: 2026-02-02 10:19:44.815599393 +0000 UTC m=+0.163165259 container start 80c32b3f5360b0b3138f7122712acf9a6e96ded3cc2e0f34b9a98812a72eca75 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_lamarr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Feb  2 05:19:44 np0005604790 podman[286570]: 2026-02-02 10:19:44.819358033 +0000 UTC m=+0.166923869 container attach 80c32b3f5360b0b3138f7122712acf9a6e96ded3cc2e0f34b9a98812a72eca75 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_lamarr, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 05:19:44 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:19:44] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Feb  2 05:19:44 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:19:44] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Feb  2 05:19:45 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:19:45 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:19:45 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:19:45.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:19:45 np0005604790 determined_lamarr[286587]: --> passed data devices: 0 physical, 1 LVM
Feb  2 05:19:45 np0005604790 determined_lamarr[286587]: --> All data devices are unavailable
Feb  2 05:19:45 np0005604790 systemd[1]: libpod-80c32b3f5360b0b3138f7122712acf9a6e96ded3cc2e0f34b9a98812a72eca75.scope: Deactivated successfully.
Feb  2 05:19:45 np0005604790 conmon[286587]: conmon 80c32b3f5360b0b3138f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-80c32b3f5360b0b3138f7122712acf9a6e96ded3cc2e0f34b9a98812a72eca75.scope/container/memory.events
Feb  2 05:19:45 np0005604790 podman[286570]: 2026-02-02 10:19:45.156008532 +0000 UTC m=+0.503574408 container died 80c32b3f5360b0b3138f7122712acf9a6e96ded3cc2e0f34b9a98812a72eca75 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_lamarr, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 05:19:45 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:19:45 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:19:45 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:19:45.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:19:45 np0005604790 systemd[1]: var-lib-containers-storage-overlay-04faa54cc2ff7034a1b6642d609c003c1fade3900b4d71736139cc256c7c6a18-merged.mount: Deactivated successfully.
Feb  2 05:19:45 np0005604790 podman[286570]: 2026-02-02 10:19:45.210676162 +0000 UTC m=+0.558242038 container remove 80c32b3f5360b0b3138f7122712acf9a6e96ded3cc2e0f34b9a98812a72eca75 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_lamarr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 05:19:45 np0005604790 systemd[1]: libpod-conmon-80c32b3f5360b0b3138f7122712acf9a6e96ded3cc2e0f34b9a98812a72eca75.scope: Deactivated successfully.
Feb  2 05:19:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:19:45.393 165364 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 05:19:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:19:45.394 165364 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 05:19:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:19:45.394 165364 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 05:19:45 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1214: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 613 B/s rd, 0 op/s
Feb  2 05:19:45 np0005604790 podman[286706]: 2026-02-02 10:19:45.817850988 +0000 UTC m=+0.059538891 container create 188c4d1f6f94c349df46df280ed760ba1f17237cf649bf053a26f78d5731496e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_satoshi, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Feb  2 05:19:45 np0005604790 podman[286706]: 2026-02-02 10:19:45.792869755 +0000 UTC m=+0.034557708 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:19:45 np0005604790 systemd[1]: Started libpod-conmon-188c4d1f6f94c349df46df280ed760ba1f17237cf649bf053a26f78d5731496e.scope.
Feb  2 05:19:45 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:19:45 np0005604790 podman[286706]: 2026-02-02 10:19:45.975653543 +0000 UTC m=+0.217341496 container init 188c4d1f6f94c349df46df280ed760ba1f17237cf649bf053a26f78d5731496e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_satoshi, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 05:19:45 np0005604790 podman[286706]: 2026-02-02 10:19:45.984768005 +0000 UTC m=+0.226455898 container start 188c4d1f6f94c349df46df280ed760ba1f17237cf649bf053a26f78d5731496e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_satoshi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb  2 05:19:45 np0005604790 heuristic_satoshi[286722]: 167 167
Feb  2 05:19:45 np0005604790 systemd[1]: libpod-188c4d1f6f94c349df46df280ed760ba1f17237cf649bf053a26f78d5731496e.scope: Deactivated successfully.
Feb  2 05:19:45 np0005604790 podman[286706]: 2026-02-02 10:19:45.991161485 +0000 UTC m=+0.232849438 container attach 188c4d1f6f94c349df46df280ed760ba1f17237cf649bf053a26f78d5731496e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_satoshi, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Feb  2 05:19:45 np0005604790 podman[286706]: 2026-02-02 10:19:45.991616197 +0000 UTC m=+0.233304160 container died 188c4d1f6f94c349df46df280ed760ba1f17237cf649bf053a26f78d5731496e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 05:19:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:19:45 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:19:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:19:45 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:19:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:19:45 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:19:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:19:46 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:19:46 np0005604790 systemd[1]: var-lib-containers-storage-overlay-5bb59733f9e471434f16f1c57fb3671b1b0b9138c9a0638529ef14bd99a7582f-merged.mount: Deactivated successfully.
Feb  2 05:19:46 np0005604790 podman[286706]: 2026-02-02 10:19:46.040593916 +0000 UTC m=+0.282281799 container remove 188c4d1f6f94c349df46df280ed760ba1f17237cf649bf053a26f78d5731496e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_satoshi, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default)
Feb  2 05:19:46 np0005604790 systemd[1]: libpod-conmon-188c4d1f6f94c349df46df280ed760ba1f17237cf649bf053a26f78d5731496e.scope: Deactivated successfully.
Feb  2 05:19:46 np0005604790 podman[286746]: 2026-02-02 10:19:46.241043653 +0000 UTC m=+0.051728843 container create 371ef1aac94fbdd516566fd71d1bbf6e6783cd172cda1d06b3c1d1d5e8ef6708 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Feb  2 05:19:46 np0005604790 systemd[1]: Started libpod-conmon-371ef1aac94fbdd516566fd71d1bbf6e6783cd172cda1d06b3c1d1d5e8ef6708.scope.
Feb  2 05:19:46 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:19:46 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ab1b15b2697d44bd22071ac910ee8323f5eee3347f7744cd792fab77c39171c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 05:19:46 np0005604790 podman[286746]: 2026-02-02 10:19:46.214208081 +0000 UTC m=+0.024893281 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:19:46 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ab1b15b2697d44bd22071ac910ee8323f5eee3347f7744cd792fab77c39171c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 05:19:46 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ab1b15b2697d44bd22071ac910ee8323f5eee3347f7744cd792fab77c39171c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 05:19:46 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ab1b15b2697d44bd22071ac910ee8323f5eee3347f7744cd792fab77c39171c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 05:19:46 np0005604790 podman[286746]: 2026-02-02 10:19:46.349635544 +0000 UTC m=+0.160320794 container init 371ef1aac94fbdd516566fd71d1bbf6e6783cd172cda1d06b3c1d1d5e8ef6708 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_merkle, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Feb  2 05:19:46 np0005604790 podman[286746]: 2026-02-02 10:19:46.359549847 +0000 UTC m=+0.170235047 container start 371ef1aac94fbdd516566fd71d1bbf6e6783cd172cda1d06b3c1d1d5e8ef6708 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_merkle, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True)
Feb  2 05:19:46 np0005604790 podman[286746]: 2026-02-02 10:19:46.363572833 +0000 UTC m=+0.174257993 container attach 371ef1aac94fbdd516566fd71d1bbf6e6783cd172cda1d06b3c1d1d5e8ef6708 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_merkle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb  2 05:19:46 np0005604790 sad_merkle[286763]: {
Feb  2 05:19:46 np0005604790 sad_merkle[286763]:    "1": [
Feb  2 05:19:46 np0005604790 sad_merkle[286763]:        {
Feb  2 05:19:46 np0005604790 sad_merkle[286763]:            "devices": [
Feb  2 05:19:46 np0005604790 sad_merkle[286763]:                "/dev/loop3"
Feb  2 05:19:46 np0005604790 sad_merkle[286763]:            ],
Feb  2 05:19:46 np0005604790 sad_merkle[286763]:            "lv_name": "ceph_lv0",
Feb  2 05:19:46 np0005604790 sad_merkle[286763]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 05:19:46 np0005604790 sad_merkle[286763]:            "lv_size": "21470642176",
Feb  2 05:19:46 np0005604790 sad_merkle[286763]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d241d473-9fcb-5f74-b163-f1ca4454e7f1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fabfc705-a3af-416c-81a4-3fd4d777fb5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 05:19:46 np0005604790 sad_merkle[286763]:            "lv_uuid": "lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a",
Feb  2 05:19:46 np0005604790 sad_merkle[286763]:            "name": "ceph_lv0",
Feb  2 05:19:46 np0005604790 sad_merkle[286763]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 05:19:46 np0005604790 sad_merkle[286763]:            "tags": {
Feb  2 05:19:46 np0005604790 sad_merkle[286763]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 05:19:46 np0005604790 sad_merkle[286763]:                "ceph.block_uuid": "lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a",
Feb  2 05:19:46 np0005604790 sad_merkle[286763]:                "ceph.cephx_lockbox_secret": "",
Feb  2 05:19:46 np0005604790 sad_merkle[286763]:                "ceph.cluster_fsid": "d241d473-9fcb-5f74-b163-f1ca4454e7f1",
Feb  2 05:19:46 np0005604790 sad_merkle[286763]:                "ceph.cluster_name": "ceph",
Feb  2 05:19:46 np0005604790 sad_merkle[286763]:                "ceph.crush_device_class": "",
Feb  2 05:19:46 np0005604790 sad_merkle[286763]:                "ceph.encrypted": "0",
Feb  2 05:19:46 np0005604790 sad_merkle[286763]:                "ceph.osd_fsid": "fabfc705-a3af-416c-81a4-3fd4d777fb5f",
Feb  2 05:19:46 np0005604790 sad_merkle[286763]:                "ceph.osd_id": "1",
Feb  2 05:19:46 np0005604790 sad_merkle[286763]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 05:19:46 np0005604790 sad_merkle[286763]:                "ceph.type": "block",
Feb  2 05:19:46 np0005604790 sad_merkle[286763]:                "ceph.vdo": "0",
Feb  2 05:19:46 np0005604790 sad_merkle[286763]:                "ceph.with_tpm": "0"
Feb  2 05:19:46 np0005604790 sad_merkle[286763]:            },
Feb  2 05:19:46 np0005604790 sad_merkle[286763]:            "type": "block",
Feb  2 05:19:46 np0005604790 sad_merkle[286763]:            "vg_name": "ceph_vg0"
Feb  2 05:19:46 np0005604790 sad_merkle[286763]:        }
Feb  2 05:19:46 np0005604790 sad_merkle[286763]:    ]
Feb  2 05:19:46 np0005604790 sad_merkle[286763]: }
Feb  2 05:19:46 np0005604790 systemd[1]: libpod-371ef1aac94fbdd516566fd71d1bbf6e6783cd172cda1d06b3c1d1d5e8ef6708.scope: Deactivated successfully.
Feb  2 05:19:46 np0005604790 podman[286746]: 2026-02-02 10:19:46.697426329 +0000 UTC m=+0.508111529 container died 371ef1aac94fbdd516566fd71d1bbf6e6783cd172cda1d06b3c1d1d5e8ef6708 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_merkle, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb  2 05:19:46 np0005604790 systemd[1]: var-lib-containers-storage-overlay-1ab1b15b2697d44bd22071ac910ee8323f5eee3347f7744cd792fab77c39171c-merged.mount: Deactivated successfully.
Feb  2 05:19:46 np0005604790 podman[286746]: 2026-02-02 10:19:46.750299962 +0000 UTC m=+0.560985162 container remove 371ef1aac94fbdd516566fd71d1bbf6e6783cd172cda1d06b3c1d1d5e8ef6708 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_merkle, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 05:19:46 np0005604790 systemd[1]: libpod-conmon-371ef1aac94fbdd516566fd71d1bbf6e6783cd172cda1d06b3c1d1d5e8ef6708.scope: Deactivated successfully.
Feb  2 05:19:46 np0005604790 nova_compute[252672]: 2026-02-02 10:19:46.814 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:19:47 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:19:47 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:19:47 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:19:47.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:19:47 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:19:47 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:19:47 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:19:47.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:19:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 05:19:47 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 05:19:47 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:19:47.222Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:19:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:19:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:19:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:19:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:19:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:19:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:19:47 np0005604790 podman[286875]: 2026-02-02 10:19:47.346600339 +0000 UTC m=+0.049723410 container create 5c3d7786e80306d97b1c30e41f741310f52625e162a080db1e3ee3066da1d34f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_hofstadter, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 05:19:47 np0005604790 systemd[1]: Started libpod-conmon-5c3d7786e80306d97b1c30e41f741310f52625e162a080db1e3ee3066da1d34f.scope.
Feb  2 05:19:47 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:19:47 np0005604790 podman[286875]: 2026-02-02 10:19:47.321290418 +0000 UTC m=+0.024413539 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:19:47 np0005604790 podman[286875]: 2026-02-02 10:19:47.4442802 +0000 UTC m=+0.147403281 container init 5c3d7786e80306d97b1c30e41f741310f52625e162a080db1e3ee3066da1d34f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb  2 05:19:47 np0005604790 podman[286875]: 2026-02-02 10:19:47.452710054 +0000 UTC m=+0.155833135 container start 5c3d7786e80306d97b1c30e41f741310f52625e162a080db1e3ee3066da1d34f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_hofstadter, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Feb  2 05:19:47 np0005604790 quirky_hofstadter[286891]: 167 167
Feb  2 05:19:47 np0005604790 systemd[1]: libpod-5c3d7786e80306d97b1c30e41f741310f52625e162a080db1e3ee3066da1d34f.scope: Deactivated successfully.
Feb  2 05:19:47 np0005604790 podman[286875]: 2026-02-02 10:19:47.489750816 +0000 UTC m=+0.192873937 container attach 5c3d7786e80306d97b1c30e41f741310f52625e162a080db1e3ee3066da1d34f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_hofstadter, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb  2 05:19:47 np0005604790 podman[286875]: 2026-02-02 10:19:47.490476655 +0000 UTC m=+0.193599726 container died 5c3d7786e80306d97b1c30e41f741310f52625e162a080db1e3ee3066da1d34f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_hofstadter, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 05:19:47 np0005604790 systemd[1]: var-lib-containers-storage-overlay-05965f701ef87120b1175ea6521d6109c2005517300b3417424a23522acf6345-merged.mount: Deactivated successfully.
Feb  2 05:19:47 np0005604790 podman[286875]: 2026-02-02 10:19:47.569406799 +0000 UTC m=+0.272529880 container remove 5c3d7786e80306d97b1c30e41f741310f52625e162a080db1e3ee3066da1d34f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_hofstadter, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 05:19:47 np0005604790 systemd[1]: libpod-conmon-5c3d7786e80306d97b1c30e41f741310f52625e162a080db1e3ee3066da1d34f.scope: Deactivated successfully.
Feb  2 05:19:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:19:47 np0005604790 podman[286918]: 2026-02-02 10:19:47.745403208 +0000 UTC m=+0.056357476 container create bca48fd462e529a2fc1179393beb32bc6a190239e3839d69a941f11da65b1de4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_chatelet, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Feb  2 05:19:47 np0005604790 systemd[1]: Started libpod-conmon-bca48fd462e529a2fc1179393beb32bc6a190239e3839d69a941f11da65b1de4.scope.
Feb  2 05:19:47 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:19:47 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eaa036b6277c37b417dca99c9e3875742e02ce46e00bc8621bb092343197bc6b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 05:19:47 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eaa036b6277c37b417dca99c9e3875742e02ce46e00bc8621bb092343197bc6b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 05:19:47 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eaa036b6277c37b417dca99c9e3875742e02ce46e00bc8621bb092343197bc6b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 05:19:47 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eaa036b6277c37b417dca99c9e3875742e02ce46e00bc8621bb092343197bc6b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 05:19:47 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1215: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 613 B/s rd, 0 op/s
Feb  2 05:19:47 np0005604790 podman[286918]: 2026-02-02 10:19:47.815877887 +0000 UTC m=+0.126832195 container init bca48fd462e529a2fc1179393beb32bc6a190239e3839d69a941f11da65b1de4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_chatelet, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325)
Feb  2 05:19:47 np0005604790 podman[286918]: 2026-02-02 10:19:47.726894777 +0000 UTC m=+0.037849065 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:19:47 np0005604790 podman[286918]: 2026-02-02 10:19:47.82390637 +0000 UTC m=+0.134860678 container start bca48fd462e529a2fc1179393beb32bc6a190239e3839d69a941f11da65b1de4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_chatelet, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 05:19:47 np0005604790 podman[286918]: 2026-02-02 10:19:47.830210677 +0000 UTC m=+0.141164955 container attach bca48fd462e529a2fc1179393beb32bc6a190239e3839d69a941f11da65b1de4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_chatelet, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb  2 05:19:48 np0005604790 nova_compute[252672]: 2026-02-02 10:19:48.071 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:19:48 np0005604790 lvm[287010]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 05:19:48 np0005604790 lvm[287010]: VG ceph_vg0 finished
Feb  2 05:19:48 np0005604790 elegant_chatelet[286935]: {}
Feb  2 05:19:48 np0005604790 systemd[1]: libpod-bca48fd462e529a2fc1179393beb32bc6a190239e3839d69a941f11da65b1de4.scope: Deactivated successfully.
Feb  2 05:19:48 np0005604790 podman[286918]: 2026-02-02 10:19:48.551209423 +0000 UTC m=+0.862163741 container died bca48fd462e529a2fc1179393beb32bc6a190239e3839d69a941f11da65b1de4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_chatelet, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 05:19:48 np0005604790 systemd[1]: libpod-bca48fd462e529a2fc1179393beb32bc6a190239e3839d69a941f11da65b1de4.scope: Consumed 1.046s CPU time.
Feb  2 05:19:48 np0005604790 systemd[1]: var-lib-containers-storage-overlay-eaa036b6277c37b417dca99c9e3875742e02ce46e00bc8621bb092343197bc6b-merged.mount: Deactivated successfully.
Feb  2 05:19:48 np0005604790 podman[286918]: 2026-02-02 10:19:48.607911836 +0000 UTC m=+0.918866144 container remove bca48fd462e529a2fc1179393beb32bc6a190239e3839d69a941f11da65b1de4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_chatelet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default)
Feb  2 05:19:48 np0005604790 systemd[1]: libpod-conmon-bca48fd462e529a2fc1179393beb32bc6a190239e3839d69a941f11da65b1de4.scope: Deactivated successfully.
Feb  2 05:19:48 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 05:19:48 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:19:48 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 05:19:48 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:19:48 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:19:48.880Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb  2 05:19:48 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:19:48.881Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:19:48 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:19:48 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:19:49 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:19:49 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:19:49 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:19:49.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:19:49 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:19:49 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:19:49 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:19:49.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:19:49 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1216: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 613 B/s rd, 0 op/s
Feb  2 05:19:51 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:19:50 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:19:51 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:19:50 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:19:51 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:19:50 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:19:51 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:19:51 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:19:51 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:19:51 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:19:51 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:19:51.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:19:51 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:19:51 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:19:51 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:19:51.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:19:51 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1217: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 613 B/s rd, 0 op/s
Feb  2 05:19:51 np0005604790 nova_compute[252672]: 2026-02-02 10:19:51.817 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:19:52 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:19:53 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:19:53 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:19:53 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:19:53.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:19:53 np0005604790 nova_compute[252672]: 2026-02-02 10:19:53.073 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:19:53 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:19:53 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:19:53 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:19:53.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:19:53 np0005604790 podman[287054]: 2026-02-02 10:19:53.34273279 +0000 UTC m=+0.058435251 container health_status 29bea7eb4976451e3dfdb1e9a3aba30b10e64cbce131bf8d09548b4a224f8ee8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127)
Feb  2 05:19:53 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1218: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 613 B/s rd, 0 op/s
Feb  2 05:19:54 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:19:54] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Feb  2 05:19:54 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:19:54] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Feb  2 05:19:55 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:19:55 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:19:55 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:19:55.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:19:55 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:19:55 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:19:55 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:19:55.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:19:55 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1219: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:19:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:19:55 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:19:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:19:55 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:19:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:19:55 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:19:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:19:56 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:19:56 np0005604790 nova_compute[252672]: 2026-02-02 10:19:56.864 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:19:57 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:19:57 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:19:57 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:19:57.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:19:57 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:19:57 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:19:57 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:19:57.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:19:57 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:19:57.223Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:19:57 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:19:57 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1220: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:19:58 np0005604790 nova_compute[252672]: 2026-02-02 10:19:58.121 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:19:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:19:58.882Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:19:59 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:19:59 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:19:59 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:19:59.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:19:59 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:19:59 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:19:59 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:19:59.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:19:59 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1221: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:20:00 np0005604790 ceph-mon[74489]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 2 failed cephadm daemon(s)
Feb  2 05:20:00 np0005604790 ceph-mon[74489]: log_channel(cluster) log [WRN] : [WRN] CEPHADM_FAILED_DAEMON: 2 failed cephadm daemon(s)
Feb  2 05:20:00 np0005604790 ceph-mon[74489]: log_channel(cluster) log [WRN] :     daemon nfs.cephfs.0.0.compute-1.mhzhsx on compute-1 is in error state
Feb  2 05:20:00 np0005604790 ceph-mon[74489]: log_channel(cluster) log [WRN] :     daemon nfs.cephfs.1.0.compute-2.dciyfa on compute-2 is in error state
Feb  2 05:20:00 np0005604790 ceph-mon[74489]: Health detail: HEALTH_WARN 2 failed cephadm daemon(s)
Feb  2 05:20:00 np0005604790 ceph-mon[74489]: [WRN] CEPHADM_FAILED_DAEMON: 2 failed cephadm daemon(s)
Feb  2 05:20:00 np0005604790 ceph-mon[74489]:    daemon nfs.cephfs.0.0.compute-1.mhzhsx on compute-1 is in error state
Feb  2 05:20:00 np0005604790 ceph-mon[74489]:    daemon nfs.cephfs.1.0.compute-2.dciyfa on compute-2 is in error state
Feb  2 05:20:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:20:00 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:20:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:20:00 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:20:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:20:00 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:20:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:20:01 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:20:01 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:20:01 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:20:01 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:20:01.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:20:01 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:20:01 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:20:01 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:20:01.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:20:01 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1222: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:20:01 np0005604790 nova_compute[252672]: 2026-02-02 10:20:01.868 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:20:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 05:20:02 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 05:20:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:20:03 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:20:03 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:20:03 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:20:03.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:20:03 np0005604790 nova_compute[252672]: 2026-02-02 10:20:03.122 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:20:03 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:20:03 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:20:03 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:20:03.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:20:03 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1223: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:20:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:20:04] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Feb  2 05:20:04 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:20:04] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Feb  2 05:20:05 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:20:05 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:20:05 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:20:05.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:20:05 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:20:05 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:20:05 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:20:05.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:20:05 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1224: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:20:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:20:05 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:20:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:20:05 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:20:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:20:05 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:20:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:20:06 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:20:06 np0005604790 nova_compute[252672]: 2026-02-02 10:20:06.909 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:20:07 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:20:07 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:20:07 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:20:07.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:20:07 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:20:07 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:20:07 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:20:07.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:20:07 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:20:07.224Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb  2 05:20:07 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:20:07.224Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb  2 05:20:07 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:20:07 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1225: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:20:08 np0005604790 nova_compute[252672]: 2026-02-02 10:20:08.160 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:20:08 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:20:08.883Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:20:09 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:20:09 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:20:09 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:20:09.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:20:09 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:20:09 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:20:09 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:20:09.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:20:09 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1226: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:20:11 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:20:11 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:20:11 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:20:11 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:20:11 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:20:11 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:20:11 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:20:11 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:20:11 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:20:11 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:20:11 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:20:11.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:20:11 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:20:11 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:20:11 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:20:11.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:20:11 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1227: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:20:11 np0005604790 nova_compute[252672]: 2026-02-02 10:20:11.956 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:20:12 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:20:13 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:20:13 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:20:13 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:20:13.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:20:13 np0005604790 nova_compute[252672]: 2026-02-02 10:20:13.197 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:20:13 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:20:13 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:20:13 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:20:13.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:20:13 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1228: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:20:14 np0005604790 nova_compute[252672]: 2026-02-02 10:20:14.282 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:20:14 np0005604790 nova_compute[252672]: 2026-02-02 10:20:14.322 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 05:20:14 np0005604790 nova_compute[252672]: 2026-02-02 10:20:14.322 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 05:20:14 np0005604790 nova_compute[252672]: 2026-02-02 10:20:14.323 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 05:20:14 np0005604790 nova_compute[252672]: 2026-02-02 10:20:14.323 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb  2 05:20:14 np0005604790 nova_compute[252672]: 2026-02-02 10:20:14.324 252676 DEBUG oslo_concurrency.processutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb  2 05:20:14 np0005604790 podman[287122]: 2026-02-02 10:20:14.427634863 +0000 UTC m=+0.137108368 container health_status e39122d9482da8df802204ab6c35fb7c982874580968140e6f06bdfc8eefae36 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller)
Feb  2 05:20:14 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 05:20:14 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2747210279' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb  2 05:20:14 np0005604790 nova_compute[252672]: 2026-02-02 10:20:14.778 252676 DEBUG oslo_concurrency.processutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 05:20:14 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:20:14] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Feb  2 05:20:14 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:20:14] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Feb  2 05:20:14 np0005604790 nova_compute[252672]: 2026-02-02 10:20:14.937 252676 WARNING nova.virt.libvirt.driver [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb  2 05:20:14 np0005604790 nova_compute[252672]: 2026-02-02 10:20:14.938 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4512MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb  2 05:20:14 np0005604790 nova_compute[252672]: 2026-02-02 10:20:14.939 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 05:20:14 np0005604790 nova_compute[252672]: 2026-02-02 10:20:14.939 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 05:20:15 np0005604790 nova_compute[252672]: 2026-02-02 10:20:15.025 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb  2 05:20:15 np0005604790 nova_compute[252672]: 2026-02-02 10:20:15.026 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb  2 05:20:15 np0005604790 nova_compute[252672]: 2026-02-02 10:20:15.051 252676 DEBUG oslo_concurrency.processutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb  2 05:20:15 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:20:15 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:20:15 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:20:15.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:20:15 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:20:15 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:20:15 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:20:15.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:20:15 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 05:20:15 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3454324369' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb  2 05:20:15 np0005604790 nova_compute[252672]: 2026-02-02 10:20:15.532 252676 DEBUG oslo_concurrency.processutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 05:20:15 np0005604790 nova_compute[252672]: 2026-02-02 10:20:15.537 252676 DEBUG nova.compute.provider_tree [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Inventory has not changed in ProviderTree for provider: 9e3db6fc-2145-4f13-bc7c-f7ae57d4e004 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb  2 05:20:15 np0005604790 nova_compute[252672]: 2026-02-02 10:20:15.571 252676 DEBUG nova.scheduler.client.report [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Inventory has not changed for provider 9e3db6fc-2145-4f13-bc7c-f7ae57d4e004 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb  2 05:20:15 np0005604790 nova_compute[252672]: 2026-02-02 10:20:15.573 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb  2 05:20:15 np0005604790 nova_compute[252672]: 2026-02-02 10:20:15.573 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.634s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 05:20:15 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1229: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:20:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:20:15 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:20:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:20:15 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:20:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:20:15 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:20:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:20:16 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:20:16 np0005604790 nova_compute[252672]: 2026-02-02 10:20:16.960 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:20:17 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:20:17 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:20:17 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:20:17.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:20:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] Optimize plan auto_2026-02-02_10:20:17
Feb  2 05:20:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 05:20:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] do_upmap
Feb  2 05:20:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.control', 'volumes', '.nfs', '.mgr', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'backups', 'default.rgw.meta', 'vms', 'default.rgw.log', 'images']
Feb  2 05:20:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 05:20:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 05:20:17 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 05:20:17 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:20:17 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:20:17 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:20:17.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:20:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:20:17.224Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:20:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:20:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:20:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:20:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:20:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:20:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:20:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 05:20:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 05:20:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 05:20:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 05:20:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 05:20:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:20:17 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1230: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:20:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 05:20:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:20:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 05:20:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:20:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb  2 05:20:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:20:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:20:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:20:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:20:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:20:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Feb  2 05:20:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:20:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Feb  2 05:20:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:20:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:20:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:20:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 05:20:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:20:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Feb  2 05:20:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:20:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:20:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:20:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 05:20:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:20:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb  2 05:20:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 05:20:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 05:20:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 05:20:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 05:20:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 05:20:18 np0005604790 nova_compute[252672]: 2026-02-02 10:20:18.256 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:20:18 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:20:18.883Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:20:19 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:20:19 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:20:19 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:20:19.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:20:19 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:20:19 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:20:19 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:20:19.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:20:19 np0005604790 nova_compute[252672]: 2026-02-02 10:20:19.573 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:20:19 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1231: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:20:21 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:20:20 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:20:21 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:20:20 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:20:21 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:20:20 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:20:21 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:20:21 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:20:21 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:20:21 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:20:21 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:20:21.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:20:21 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:20:21 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:20:21 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:20:21.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:20:21 np0005604790 nova_compute[252672]: 2026-02-02 10:20:21.282 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:20:21 np0005604790 nova_compute[252672]: 2026-02-02 10:20:21.282 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb  2 05:20:21 np0005604790 nova_compute[252672]: 2026-02-02 10:20:21.282 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb  2 05:20:21 np0005604790 nova_compute[252672]: 2026-02-02 10:20:21.308 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb  2 05:20:21 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1232: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:20:21 np0005604790 nova_compute[252672]: 2026-02-02 10:20:21.962 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:20:22 np0005604790 nova_compute[252672]: 2026-02-02 10:20:22.281 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:20:22 np0005604790 nova_compute[252672]: 2026-02-02 10:20:22.282 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb  2 05:20:22 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:20:23 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:20:23 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:20:23 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:20:23.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:20:23 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:20:23 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:20:23 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:20:23.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:20:23 np0005604790 nova_compute[252672]: 2026-02-02 10:20:23.256 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:20:23 np0005604790 nova_compute[252672]: 2026-02-02 10:20:23.282 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:20:23 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1233: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:20:24 np0005604790 nova_compute[252672]: 2026-02-02 10:20:24.282 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:20:24 np0005604790 podman[287227]: 2026-02-02 10:20:24.356830472 +0000 UTC m=+0.076355396 container health_status 29bea7eb4976451e3dfdb1e9a3aba30b10e64cbce131bf8d09548b4a224f8ee8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb  2 05:20:24 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:20:24] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Feb  2 05:20:24 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:20:24] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Feb  2 05:20:25 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:20:25 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:20:25 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:20:25.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:20:25 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:20:25 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:20:25 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:20:25.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:20:25 np0005604790 nova_compute[252672]: 2026-02-02 10:20:25.282 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:20:25 np0005604790 radosgw[89254]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Feb  2 05:20:25 np0005604790 radosgw[89254]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Feb  2 05:20:25 np0005604790 radosgw[89254]: INFO: RGWReshardLock::lock found lock on reshard.0000000004 to be held by another RGW process; skipping for now
Feb  2 05:20:25 np0005604790 radosgw[89254]: INFO: RGWReshardLock::lock found lock on reshard.0000000006 to be held by another RGW process; skipping for now
Feb  2 05:20:25 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1234: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:20:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:20:25 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:20:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:20:25 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:20:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:20:25 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:20:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:20:26 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:20:26 np0005604790 nova_compute[252672]: 2026-02-02 10:20:26.278 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:20:26 np0005604790 nova_compute[252672]: 2026-02-02 10:20:26.281 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:20:26 np0005604790 nova_compute[252672]: 2026-02-02 10:20:26.965 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:20:27 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:20:27 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:20:27 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:20:27.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:20:27 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:20:27.225Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:20:27 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:20:27 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:20:27 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:20:27.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:20:27 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:20:27 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1235: 353 pgs: 353 active+clean; 41 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:20:28 np0005604790 nova_compute[252672]: 2026-02-02 10:20:28.258 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:20:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:20:28.885Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:20:29 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:20:29 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:20:29 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:20:29.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:20:29 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:20:29 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:20:29 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:20:29.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:20:29 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1236: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 0 B/s wr, 113 op/s
Feb  2 05:20:30 np0005604790 nova_compute[252672]: 2026-02-02 10:20:30.277 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:20:31 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:20:30 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:20:31 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:20:30 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:20:31 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:20:30 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:20:31 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:20:31 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:20:31 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:20:31 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:20:31 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:20:31.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:20:31 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:20:31 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:20:31 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:20:31.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:20:31 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1237: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 0 B/s wr, 113 op/s
Feb  2 05:20:31 np0005604790 nova_compute[252672]: 2026-02-02 10:20:31.969 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:20:32 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 05:20:32 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 05:20:32 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:20:33 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:20:33 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:20:33 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:20:33.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:20:33 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:20:33 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.002000053s ======
Feb  2 05:20:33 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:20:33.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Feb  2 05:20:33 np0005604790 nova_compute[252672]: 2026-02-02 10:20:33.299 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:20:33 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1238: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 101 KiB/s rd, 0 B/s wr, 167 op/s
Feb  2 05:20:34 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:20:34] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Feb  2 05:20:34 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:20:34] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Feb  2 05:20:35 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:20:35 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:20:35 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:20:35.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:20:35 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:20:35 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:20:35 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:20:35.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:20:35 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1239: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 100 KiB/s rd, 0 B/s wr, 167 op/s
Feb  2 05:20:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:20:35 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:20:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:20:35 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:20:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:20:35 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:20:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:20:36 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:20:37 np0005604790 nova_compute[252672]: 2026-02-02 10:20:37.023 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:20:37 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:20:37 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:20:37 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:20:37.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:20:37 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:20:37.226Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:20:37 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:20:37 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:20:37 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:20:37.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:20:37 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:20:37 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1240: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 100 KiB/s rd, 0 B/s wr, 167 op/s
Feb  2 05:20:38 np0005604790 nova_compute[252672]: 2026-02-02 10:20:38.346 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:20:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:20:38.886Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:20:39 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:20:39 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:20:39 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:20:39.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:20:39 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:20:39 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:20:39 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:20:39.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:20:39 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1241: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 101 KiB/s rd, 0 B/s wr, 167 op/s
Feb  2 05:20:41 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:20:40 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:20:41 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:20:40 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:20:41 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:20:40 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:20:41 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:20:41 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:20:41 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:20:41 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:20:41 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:20:41.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:20:41 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:20:41 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:20:41 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:20:41.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:20:41 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1242: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 0 B/s wr, 54 op/s
Feb  2 05:20:42 np0005604790 nova_compute[252672]: 2026-02-02 10:20:42.071 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:20:42 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:20:42 np0005604790 ceph-mon[74489]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #75. Immutable memtables: 0.
Feb  2 05:20:42 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:20:42.711963) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb  2 05:20:42 np0005604790 ceph-mon[74489]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 75
Feb  2 05:20:42 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770027642711998, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 935, "num_deletes": 251, "total_data_size": 1555122, "memory_usage": 1583464, "flush_reason": "Manual Compaction"}
Feb  2 05:20:42 np0005604790 ceph-mon[74489]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #76: started
Feb  2 05:20:42 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770027642720111, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 76, "file_size": 1012034, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 34609, "largest_seqno": 35543, "table_properties": {"data_size": 1008115, "index_size": 1571, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 10365, "raw_average_key_size": 21, "raw_value_size": 999773, "raw_average_value_size": 2036, "num_data_blocks": 66, "num_entries": 491, "num_filter_entries": 491, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770027567, "oldest_key_time": 1770027567, "file_creation_time": 1770027642, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "07840aea-639a-4cd3-a598-1774a042b57b", "db_session_id": "W2PO4QU95YGVZQBG6TZ2", "orig_file_number": 76, "seqno_to_time_mapping": "N/A"}}
Feb  2 05:20:42 np0005604790 ceph-mon[74489]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 8205 microseconds, and 2679 cpu microseconds.
Feb  2 05:20:42 np0005604790 ceph-mon[74489]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 05:20:42 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:20:42.720164) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #76: 1012034 bytes OK
Feb  2 05:20:42 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:20:42.720190) [db/memtable_list.cc:519] [default] Level-0 commit table #76 started
Feb  2 05:20:42 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:20:42.723328) [db/memtable_list.cc:722] [default] Level-0 commit table #76: memtable #1 done
Feb  2 05:20:42 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:20:42.723343) EVENT_LOG_v1 {"time_micros": 1770027642723338, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb  2 05:20:42 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:20:42.723361) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb  2 05:20:42 np0005604790 ceph-mon[74489]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 1550714, prev total WAL file size 1550714, number of live WAL files 2.
Feb  2 05:20:42 np0005604790 ceph-mon[74489]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000072.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 05:20:42 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:20:42.723893) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303030' seq:72057594037927935, type:22 .. '6D6772737461740031323532' seq:0, type:0; will stop at (end)
Feb  2 05:20:42 np0005604790 ceph-mon[74489]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb  2 05:20:42 np0005604790 ceph-mon[74489]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [76(988KB)], [74(13MB)]
Feb  2 05:20:42 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770027642723967, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [76], "files_L6": [74], "score": -1, "input_data_size": 15519726, "oldest_snapshot_seqno": -1}
Feb  2 05:20:42 np0005604790 ceph-mon[74489]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #77: 6541 keys, 11914564 bytes, temperature: kUnknown
Feb  2 05:20:42 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770027642834663, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 77, "file_size": 11914564, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11874253, "index_size": 22855, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16389, "raw_key_size": 172174, "raw_average_key_size": 26, "raw_value_size": 11759654, "raw_average_value_size": 1797, "num_data_blocks": 894, "num_entries": 6541, "num_filter_entries": 6541, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770025063, "oldest_key_time": 0, "file_creation_time": 1770027642, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "07840aea-639a-4cd3-a598-1774a042b57b", "db_session_id": "W2PO4QU95YGVZQBG6TZ2", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Feb  2 05:20:42 np0005604790 ceph-mon[74489]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 05:20:42 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:20:42.834893) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 11914564 bytes
Feb  2 05:20:42 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:20:42.854052) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 140.1 rd, 107.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 13.8 +0.0 blob) out(11.4 +0.0 blob), read-write-amplify(27.1) write-amplify(11.8) OK, records in: 7029, records dropped: 488 output_compression: NoCompression
Feb  2 05:20:42 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:20:42.854086) EVENT_LOG_v1 {"time_micros": 1770027642854073, "job": 42, "event": "compaction_finished", "compaction_time_micros": 110763, "compaction_time_cpu_micros": 22268, "output_level": 6, "num_output_files": 1, "total_output_size": 11914564, "num_input_records": 7029, "num_output_records": 6541, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb  2 05:20:42 np0005604790 ceph-mon[74489]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000076.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 05:20:42 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770027642854425, "job": 42, "event": "table_file_deletion", "file_number": 76}
Feb  2 05:20:42 np0005604790 ceph-mon[74489]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 05:20:42 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770027642857111, "job": 42, "event": "table_file_deletion", "file_number": 74}
Feb  2 05:20:42 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:20:42.723763) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 05:20:42 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:20:42.857202) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 05:20:42 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:20:42.857208) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 05:20:42 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:20:42.857211) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 05:20:42 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:20:42.857214) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 05:20:42 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:20:42.857216) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
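The JOB 42 compaction above reports read-write-amplify(27.1) and write-amplify(11.8). Those figures follow from the byte counts in the surrounding EVENT_LOG entries: write amplification is the bytes written divided by the L0 bytes that triggered the compaction, and read-write amplification additionally counts everything read (L0 plus the overlapping L6 input). A short check of the arithmetic, with the numbers copied from the log above:

# Byte counts taken from the JOB 41/42 EVENT_LOG entries above.
l0_input = 1_012_034       # flushed L0 table #76 ("file_size")
input_total = 15_519_726   # "input_data_size": L0 table #76 + L6 table #74
output_total = 11_914_564  # "total_output_size": new L6 table #77

write_amplify = output_total / l0_input
read_write_amplify = (input_total + output_total) / l0_input

print(f"write-amplify      ~ {write_amplify:.1f}")       # ~11.8
print(f"read-write-amplify ~ {read_write_amplify:.1f}")  # ~27.1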
Feb  2 05:20:43 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:20:43 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:20:43 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:20:43.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:20:43 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:20:43 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:20:43 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:20:43.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:20:43 np0005604790 nova_compute[252672]: 2026-02-02 10:20:43.394 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:20:43 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1243: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 0 B/s wr, 54 op/s
Feb  2 05:20:44 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:20:44] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Feb  2 05:20:44 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:20:44] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Feb  2 05:20:45 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:20:45 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:20:45 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:20:45.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:20:45 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:20:45 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:20:45 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:20:45.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:20:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:20:45.394 165364 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 05:20:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:20:45.395 165364 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 05:20:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:20:45.396 165364 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 05:20:45 np0005604790 podman[287293]: 2026-02-02 10:20:45.412331904 +0000 UTC m=+0.134990942 container health_status e39122d9482da8df802204ab6c35fb7c982874580968140e6f06bdfc8eefae36 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Feb  2 05:20:45 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1244: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:20:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:20:45 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:20:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:20:46 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:20:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:20:46 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:20:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:20:46 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:20:47 np0005604790 nova_compute[252672]: 2026-02-02 10:20:47.074 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:20:47 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:20:47 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:20:47 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:20:47.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:20:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 05:20:47 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 05:20:47 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:20:47.226Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:20:47 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:20:47 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:20:47 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:20:47.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:20:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:20:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:20:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:20:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:20:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:20:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:20:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:20:47 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1245: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:20:48 np0005604790 nova_compute[252672]: 2026-02-02 10:20:48.396 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:20:48 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:20:48.888Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
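The recurring alertmanager dispatcher errors above mean its webhook notifications to http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver and http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver time out before a response arrives ("context deadline exceeded"). For probing the payload path locally, a minimal stand-in receiver might look like the sketch below; the route and port mirror the log, but the handler itself is a hypothetical stub, not the Ceph dashboard's Prometheus receiver, which is served over TLS:

from http.server import BaseHTTPRequestHandler, HTTPServer
import json

class Receiver(BaseHTTPRequestHandler):
    def do_POST(self):
        # Route taken from the failing URLs in the alertmanager log above.
        if self.path != "/api/prometheus_receiver":
            self.send_error(404)
            return
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        payload = json.loads(body or "{}")
        # Alertmanager webhook payloads carry an "alerts" list.
        print("received", len(payload.get("alerts", [])), "alert(s)")
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    # Plain HTTP for local testing only; the real endpoint uses TLS on 8443.
    HTTPServer(("0.0.0.0", 8443), Receiver).serve_forever()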
Feb  2 05:20:49 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:20:49 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:20:49 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:20:49.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:20:49 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:20:49 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:20:49 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:20:49.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:20:49 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 05:20:49 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 05:20:49 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 05:20:49 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb  2 05:20:49 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 05:20:49 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1246: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 775 B/s rd, 0 op/s
Feb  2 05:20:49 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:20:49 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Feb  2 05:20:49 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:20:49 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 05:20:49 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb  2 05:20:49 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 05:20:49 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb  2 05:20:49 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 05:20:49 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 05:20:49 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb  2 05:20:49 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:20:49 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:20:49 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
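The mon audit trail above shows the cephadm mgr module dispatching JSON-formatted mon commands such as "osd blocklist ls" and "config generate-minimal-conf". The same commands can be issued through the librados Python bindings; a minimal sketch, assuming a readable /etc/ceph/ceph.conf and an admin keyring on the host:

import json
import rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    # Same JSON shape as the mon_command({"prefix": ...}) entries logged above.
    cmd = json.dumps({"prefix": "osd blocklist ls", "format": "json"})
    ret, outbuf, outs = cluster.mon_command(cmd, b"")
    print("ret =", ret, "blocklist =", json.loads(outbuf or b"[]"))
finally:
    cluster.shutdown()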
Feb  2 05:20:50 np0005604790 podman[287497]: 2026-02-02 10:20:50.296631201 +0000 UTC m=+0.046175505 container create 62739a92259d2c8e36a5c7ab80c1d79272da26481ea5c26c97b4de2ef0925fea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_maxwell, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 05:20:50 np0005604790 systemd[1]: Started libpod-conmon-62739a92259d2c8e36a5c7ab80c1d79272da26481ea5c26c97b4de2ef0925fea.scope.
Feb  2 05:20:50 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:20:50 np0005604790 podman[287497]: 2026-02-02 10:20:50.270764645 +0000 UTC m=+0.020308999 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:20:50 np0005604790 podman[287497]: 2026-02-02 10:20:50.375155154 +0000 UTC m=+0.124699488 container init 62739a92259d2c8e36a5c7ab80c1d79272da26481ea5c26c97b4de2ef0925fea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_maxwell, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 05:20:50 np0005604790 podman[287497]: 2026-02-02 10:20:50.381676057 +0000 UTC m=+0.131220361 container start 62739a92259d2c8e36a5c7ab80c1d79272da26481ea5c26c97b4de2ef0925fea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_maxwell, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb  2 05:20:50 np0005604790 blissful_maxwell[287514]: 167 167
Feb  2 05:20:50 np0005604790 podman[287497]: 2026-02-02 10:20:50.386002632 +0000 UTC m=+0.135547016 container attach 62739a92259d2c8e36a5c7ab80c1d79272da26481ea5c26c97b4de2ef0925fea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_maxwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 05:20:50 np0005604790 conmon[287514]: conmon 62739a92259d2c8e36a5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-62739a92259d2c8e36a5c7ab80c1d79272da26481ea5c26c97b4de2ef0925fea.scope/container/memory.events
Feb  2 05:20:50 np0005604790 systemd[1]: libpod-62739a92259d2c8e36a5c7ab80c1d79272da26481ea5c26c97b4de2ef0925fea.scope: Deactivated successfully.
Feb  2 05:20:50 np0005604790 podman[287497]: 2026-02-02 10:20:50.387325687 +0000 UTC m=+0.136869991 container died 62739a92259d2c8e36a5c7ab80c1d79272da26481ea5c26c97b4de2ef0925fea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True)
Feb  2 05:20:50 np0005604790 systemd[1]: var-lib-containers-storage-overlay-11b280161fde73e87b6a694ca6b990eba943f2781c9526074f6fc6aff416e3f5-merged.mount: Deactivated successfully.
Feb  2 05:20:50 np0005604790 podman[287497]: 2026-02-02 10:20:50.432328991 +0000 UTC m=+0.181873295 container remove 62739a92259d2c8e36a5c7ab80c1d79272da26481ea5c26c97b4de2ef0925fea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_maxwell, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Feb  2 05:20:50 np0005604790 systemd[1]: libpod-conmon-62739a92259d2c8e36a5c7ab80c1d79272da26481ea5c26c97b4de2ef0925fea.scope: Deactivated successfully.
Feb  2 05:20:50 np0005604790 podman[287537]: 2026-02-02 10:20:50.582768502 +0000 UTC m=+0.050389488 container create c565a0616c1a352097536bc053ab395054ea52a52abd94481a1c329d9792361d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Feb  2 05:20:50 np0005604790 systemd[1]: Started libpod-conmon-c565a0616c1a352097536bc053ab395054ea52a52abd94481a1c329d9792361d.scope.
Feb  2 05:20:50 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:20:50 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2abc357b976aab56a5af1b10947466a9ccf57cf5d6d9a7b12e6ec71b921433ce/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 05:20:50 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2abc357b976aab56a5af1b10947466a9ccf57cf5d6d9a7b12e6ec71b921433ce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 05:20:50 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2abc357b976aab56a5af1b10947466a9ccf57cf5d6d9a7b12e6ec71b921433ce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 05:20:50 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2abc357b976aab56a5af1b10947466a9ccf57cf5d6d9a7b12e6ec71b921433ce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 05:20:50 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2abc357b976aab56a5af1b10947466a9ccf57cf5d6d9a7b12e6ec71b921433ce/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 05:20:50 np0005604790 podman[287537]: 2026-02-02 10:20:50.564723183 +0000 UTC m=+0.032344189 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:20:50 np0005604790 podman[287537]: 2026-02-02 10:20:50.669185054 +0000 UTC m=+0.136806080 container init c565a0616c1a352097536bc053ab395054ea52a52abd94481a1c329d9792361d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_panini, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default)
Feb  2 05:20:50 np0005604790 podman[287537]: 2026-02-02 10:20:50.676260802 +0000 UTC m=+0.143881828 container start c565a0616c1a352097536bc053ab395054ea52a52abd94481a1c329d9792361d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_panini, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Feb  2 05:20:50 np0005604790 podman[287537]: 2026-02-02 10:20:50.680434812 +0000 UTC m=+0.148055818 container attach c565a0616c1a352097536bc053ab395054ea52a52abd94481a1c329d9792361d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_panini, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb  2 05:20:50 np0005604790 objective_panini[287554]: --> passed data devices: 0 physical, 1 LVM
Feb  2 05:20:50 np0005604790 objective_panini[287554]: --> All data devices are unavailable
Feb  2 05:20:51 np0005604790 systemd[1]: libpod-c565a0616c1a352097536bc053ab395054ea52a52abd94481a1c329d9792361d.scope: Deactivated successfully.
Feb  2 05:20:51 np0005604790 podman[287537]: 2026-02-02 10:20:50.999916297 +0000 UTC m=+0.467537303 container died c565a0616c1a352097536bc053ab395054ea52a52abd94481a1c329d9792361d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_panini, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb  2 05:20:51 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:20:50 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:20:51 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:20:51 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:20:51 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:20:51 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:20:51 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:20:51 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:20:51 np0005604790 systemd[1]: var-lib-containers-storage-overlay-2abc357b976aab56a5af1b10947466a9ccf57cf5d6d9a7b12e6ec71b921433ce-merged.mount: Deactivated successfully.
Feb  2 05:20:51 np0005604790 podman[287537]: 2026-02-02 10:20:51.092178054 +0000 UTC m=+0.559799060 container remove c565a0616c1a352097536bc053ab395054ea52a52abd94481a1c329d9792361d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_panini, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1)
Feb  2 05:20:51 np0005604790 systemd[1]: libpod-conmon-c565a0616c1a352097536bc053ab395054ea52a52abd94481a1c329d9792361d.scope: Deactivated successfully.
Feb  2 05:20:51 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:20:51 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:20:51 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:20:51.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:20:51 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:20:51 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:20:51 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:20:51.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:20:51 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1247: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 517 B/s rd, 0 op/s
Feb  2 05:20:51 np0005604790 podman[287678]: 2026-02-02 10:20:51.737005679 +0000 UTC m=+0.101541015 container create 9b18f4ffcc47f5090bd706c050f86b0d07fadbd79dce7a8cf47bc0c2f84f0f65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_curran, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 05:20:51 np0005604790 podman[287678]: 2026-02-02 10:20:51.672838477 +0000 UTC m=+0.037373893 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:20:51 np0005604790 systemd[1]: Started libpod-conmon-9b18f4ffcc47f5090bd706c050f86b0d07fadbd79dce7a8cf47bc0c2f84f0f65.scope.
Feb  2 05:20:51 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:20:51 np0005604790 podman[287678]: 2026-02-02 10:20:51.865270111 +0000 UTC m=+0.229805467 container init 9b18f4ffcc47f5090bd706c050f86b0d07fadbd79dce7a8cf47bc0c2f84f0f65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_curran, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid)
Feb  2 05:20:51 np0005604790 podman[287678]: 2026-02-02 10:20:51.870980923 +0000 UTC m=+0.235516279 container start 9b18f4ffcc47f5090bd706c050f86b0d07fadbd79dce7a8cf47bc0c2f84f0f65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_curran, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Feb  2 05:20:51 np0005604790 laughing_curran[287695]: 167 167
Feb  2 05:20:51 np0005604790 systemd[1]: libpod-9b18f4ffcc47f5090bd706c050f86b0d07fadbd79dce7a8cf47bc0c2f84f0f65.scope: Deactivated successfully.
Feb  2 05:20:51 np0005604790 podman[287678]: 2026-02-02 10:20:51.943591619 +0000 UTC m=+0.308126975 container attach 9b18f4ffcc47f5090bd706c050f86b0d07fadbd79dce7a8cf47bc0c2f84f0f65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_curran, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 05:20:51 np0005604790 podman[287678]: 2026-02-02 10:20:51.944101482 +0000 UTC m=+0.308636818 container died 9b18f4ffcc47f5090bd706c050f86b0d07fadbd79dce7a8cf47bc0c2f84f0f65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_curran, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb  2 05:20:52 np0005604790 systemd[1]: var-lib-containers-storage-overlay-ab6e0b79481e7c03a6f2da9e82fad8928f8525b0a7493469207541a0a75691de-merged.mount: Deactivated successfully.
Feb  2 05:20:52 np0005604790 podman[287678]: 2026-02-02 10:20:52.059265887 +0000 UTC m=+0.423801223 container remove 9b18f4ffcc47f5090bd706c050f86b0d07fadbd79dce7a8cf47bc0c2f84f0f65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_curran, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True)
Feb  2 05:20:52 np0005604790 systemd[1]: libpod-conmon-9b18f4ffcc47f5090bd706c050f86b0d07fadbd79dce7a8cf47bc0c2f84f0f65.scope: Deactivated successfully.
Feb  2 05:20:52 np0005604790 nova_compute[252672]: 2026-02-02 10:20:52.120 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:20:52 np0005604790 podman[287721]: 2026-02-02 10:20:52.228404524 +0000 UTC m=+0.048807706 container create bde873a41cab750428721d06f6891a1c16b32499c22c814871e7f629233aebd6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_solomon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Feb  2 05:20:52 np0005604790 systemd[1]: Started libpod-conmon-bde873a41cab750428721d06f6891a1c16b32499c22c814871e7f629233aebd6.scope.
Feb  2 05:20:52 np0005604790 podman[287721]: 2026-02-02 10:20:52.208792384 +0000 UTC m=+0.029195586 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:20:52 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:20:52 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f89f8d425f42fb0b96a052bb04fb6ddba79f730ce9dd5e81e0dc75189c768393/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 05:20:52 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f89f8d425f42fb0b96a052bb04fb6ddba79f730ce9dd5e81e0dc75189c768393/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 05:20:52 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f89f8d425f42fb0b96a052bb04fb6ddba79f730ce9dd5e81e0dc75189c768393/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 05:20:52 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f89f8d425f42fb0b96a052bb04fb6ddba79f730ce9dd5e81e0dc75189c768393/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 05:20:52 np0005604790 podman[287721]: 2026-02-02 10:20:52.328612402 +0000 UTC m=+0.149015624 container init bde873a41cab750428721d06f6891a1c16b32499c22c814871e7f629233aebd6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_solomon, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Feb  2 05:20:52 np0005604790 podman[287721]: 2026-02-02 10:20:52.337863717 +0000 UTC m=+0.158266929 container start bde873a41cab750428721d06f6891a1c16b32499c22c814871e7f629233aebd6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_solomon, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb  2 05:20:52 np0005604790 podman[287721]: 2026-02-02 10:20:52.342051639 +0000 UTC m=+0.162454861 container attach bde873a41cab750428721d06f6891a1c16b32499c22c814871e7f629233aebd6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_solomon, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325)
Feb  2 05:20:52 np0005604790 inspiring_solomon[287738]: {
Feb  2 05:20:52 np0005604790 inspiring_solomon[287738]:    "1": [
Feb  2 05:20:52 np0005604790 inspiring_solomon[287738]:        {
Feb  2 05:20:52 np0005604790 inspiring_solomon[287738]:            "devices": [
Feb  2 05:20:52 np0005604790 inspiring_solomon[287738]:                "/dev/loop3"
Feb  2 05:20:52 np0005604790 inspiring_solomon[287738]:            ],
Feb  2 05:20:52 np0005604790 inspiring_solomon[287738]:            "lv_name": "ceph_lv0",
Feb  2 05:20:52 np0005604790 inspiring_solomon[287738]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 05:20:52 np0005604790 inspiring_solomon[287738]:            "lv_size": "21470642176",
Feb  2 05:20:52 np0005604790 inspiring_solomon[287738]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d241d473-9fcb-5f74-b163-f1ca4454e7f1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fabfc705-a3af-416c-81a4-3fd4d777fb5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 05:20:52 np0005604790 inspiring_solomon[287738]:            "lv_uuid": "lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a",
Feb  2 05:20:52 np0005604790 inspiring_solomon[287738]:            "name": "ceph_lv0",
Feb  2 05:20:52 np0005604790 inspiring_solomon[287738]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 05:20:52 np0005604790 inspiring_solomon[287738]:            "tags": {
Feb  2 05:20:52 np0005604790 inspiring_solomon[287738]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 05:20:52 np0005604790 inspiring_solomon[287738]:                "ceph.block_uuid": "lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a",
Feb  2 05:20:52 np0005604790 inspiring_solomon[287738]:                "ceph.cephx_lockbox_secret": "",
Feb  2 05:20:52 np0005604790 inspiring_solomon[287738]:                "ceph.cluster_fsid": "d241d473-9fcb-5f74-b163-f1ca4454e7f1",
Feb  2 05:20:52 np0005604790 inspiring_solomon[287738]:                "ceph.cluster_name": "ceph",
Feb  2 05:20:52 np0005604790 inspiring_solomon[287738]:                "ceph.crush_device_class": "",
Feb  2 05:20:52 np0005604790 inspiring_solomon[287738]:                "ceph.encrypted": "0",
Feb  2 05:20:52 np0005604790 inspiring_solomon[287738]:                "ceph.osd_fsid": "fabfc705-a3af-416c-81a4-3fd4d777fb5f",
Feb  2 05:20:52 np0005604790 inspiring_solomon[287738]:                "ceph.osd_id": "1",
Feb  2 05:20:52 np0005604790 inspiring_solomon[287738]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 05:20:52 np0005604790 inspiring_solomon[287738]:                "ceph.type": "block",
Feb  2 05:20:52 np0005604790 inspiring_solomon[287738]:                "ceph.vdo": "0",
Feb  2 05:20:52 np0005604790 inspiring_solomon[287738]:                "ceph.with_tpm": "0"
Feb  2 05:20:52 np0005604790 inspiring_solomon[287738]:            },
Feb  2 05:20:52 np0005604790 inspiring_solomon[287738]:            "type": "block",
Feb  2 05:20:52 np0005604790 inspiring_solomon[287738]:            "vg_name": "ceph_vg0"
Feb  2 05:20:52 np0005604790 inspiring_solomon[287738]:        }
Feb  2 05:20:52 np0005604790 inspiring_solomon[287738]:    ]
Feb  2 05:20:52 np0005604790 inspiring_solomon[287738]: }
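The JSON blob above is the payload cephadm collects from its short-lived `ceph-volume` container: a map of OSD id to the logical volumes backing it, with the authoritative metadata duplicated in `lv_tags` and in the parsed `tags` object. A minimal sketch of consuming it; the input below is abbreviated from the log, but the full document has the same shape:

```python
import json

# Abbreviated copy of the listing logged above; the real document carries the
# same {"<osd_id>": [<lv record>, ...]} shape with more tags per record.
raw = """
{
  "1": [
    {
      "devices": ["/dev/loop3"],
      "lv_path": "/dev/ceph_vg0/ceph_lv0",
      "tags": {
        "ceph.osd_fsid": "fabfc705-a3af-416c-81a4-3fd4d777fb5f",
        "ceph.type": "block"
      }
    }
  ]
}
"""

for osd_id, lvs in json.loads(raw).items():
    for lv in lvs:
        print(f"osd.{osd_id}: {lv['tags']['ceph.type']} on {lv['lv_path']} "
              f"(devices: {', '.join(lv['devices'])})")
```

Run against the full record it prints `osd.1: block on /dev/ceph_vg0/ceph_lv0 (devices: /dev/loop3)`, i.e. OSD 1 is a single BlueStore block LV carved from the loop device.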
Feb  2 05:20:52 np0005604790 systemd[1]: libpod-bde873a41cab750428721d06f6891a1c16b32499c22c814871e7f629233aebd6.scope: Deactivated successfully.
Feb  2 05:20:52 np0005604790 podman[287721]: 2026-02-02 10:20:52.643897415 +0000 UTC m=+0.464300637 container died bde873a41cab750428721d06f6891a1c16b32499c22c814871e7f629233aebd6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_solomon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 05:20:52 np0005604790 systemd[1]: var-lib-containers-storage-overlay-f89f8d425f42fb0b96a052bb04fb6ddba79f730ce9dd5e81e0dc75189c768393-merged.mount: Deactivated successfully.
Feb  2 05:20:52 np0005604790 podman[287721]: 2026-02-02 10:20:52.689926876 +0000 UTC m=+0.510330108 container remove bde873a41cab750428721d06f6891a1c16b32499c22c814871e7f629233aebd6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_solomon, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Feb  2 05:20:52 np0005604790 systemd[1]: libpod-conmon-bde873a41cab750428721d06f6891a1c16b32499c22c814871e7f629233aebd6.scope: Deactivated successfully.
Feb  2 05:20:52 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:20:53 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:20:53 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:20:53 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:20:53.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:20:53 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:20:53 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:20:53 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:20:53.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
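The `beast:` lines are radosgw's frontend access log; the anonymous `HEAD /` probes arriving every two seconds from .100 and .102 look like load-balancer health checks. A small parser for this line shape, with the field layout inferred from the samples above rather than taken from radosgw documentation:

```python
import re

# Pattern reverse-engineered from the beast access-log lines above:
# req pointer, client, user, timestamp, request line, status, bytes, latency.
BEAST = re.compile(
    r'beast: (?P<req>0x[0-9a-f]+): (?P<client>\S+) - (?P<user>\S+) '
    r'\[(?P<time>[^\]]+)\] "(?P<request>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) '
    r'.* latency=(?P<latency>[\d.]+)s'
)

line = ('beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous '
        '[02/Feb/2026:10:20:53.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
        'latency=0.000000000s')
m = BEAST.search(line)
# -> 192.168.122.100 HEAD / HTTP/1.0 200 0.000000000
print(m.group('client'), m.group('request'), m.group('status'), m.group('latency'))
```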
Feb  2 05:20:53 np0005604790 podman[287856]: 2026-02-02 10:20:53.291580065 +0000 UTC m=+0.048794356 container create 10530dd19e157f35fee351a6321e0cab60bc322b2f4b939470adcf8b87c6cc26 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_dirac, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 05:20:53 np0005604790 systemd[1]: Started libpod-conmon-10530dd19e157f35fee351a6321e0cab60bc322b2f4b939470adcf8b87c6cc26.scope.
Feb  2 05:20:53 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:20:53 np0005604790 podman[287856]: 2026-02-02 10:20:53.266342445 +0000 UTC m=+0.023556756 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:20:53 np0005604790 podman[287856]: 2026-02-02 10:20:53.37282551 +0000 UTC m=+0.130039811 container init 10530dd19e157f35fee351a6321e0cab60bc322b2f4b939470adcf8b87c6cc26 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_dirac, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb  2 05:20:53 np0005604790 podman[287856]: 2026-02-02 10:20:53.382137177 +0000 UTC m=+0.139351478 container start 10530dd19e157f35fee351a6321e0cab60bc322b2f4b939470adcf8b87c6cc26 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_dirac, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True)
Feb  2 05:20:53 np0005604790 podman[287856]: 2026-02-02 10:20:53.386575165 +0000 UTC m=+0.143789456 container attach 10530dd19e157f35fee351a6321e0cab60bc322b2f4b939470adcf8b87c6cc26 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 05:20:53 np0005604790 nostalgic_dirac[287872]: 167 167
Feb  2 05:20:53 np0005604790 systemd[1]: libpod-10530dd19e157f35fee351a6321e0cab60bc322b2f4b939470adcf8b87c6cc26.scope: Deactivated successfully.
Feb  2 05:20:53 np0005604790 podman[287856]: 2026-02-02 10:20:53.390000775 +0000 UTC m=+0.147215136 container died 10530dd19e157f35fee351a6321e0cab60bc322b2f4b939470adcf8b87c6cc26 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_dirac, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 05:20:53 np0005604790 nova_compute[252672]: 2026-02-02 10:20:53.399 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:20:53 np0005604790 systemd[1]: var-lib-containers-storage-overlay-d105d182f0c9c2afa6660f9c6d18f972e5a8932c18db508667e8624618aaf6bd-merged.mount: Deactivated successfully.
Feb  2 05:20:53 np0005604790 podman[287856]: 2026-02-02 10:20:53.445844807 +0000 UTC m=+0.203059078 container remove 10530dd19e157f35fee351a6321e0cab60bc322b2f4b939470adcf8b87c6cc26 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_dirac, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb  2 05:20:53 np0005604790 systemd[1]: libpod-conmon-10530dd19e157f35fee351a6321e0cab60bc322b2f4b939470adcf8b87c6cc26.scope: Deactivated successfully.
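`inspiring_solomon` and `nostalgic_dirac` are throwaway containers cephadm spawns for host probes; each runs the full create, init, start, attach, died, remove sequence in under a second. The same lifecycle records can be watched live with `podman events` instead of grepping syslog; a sketch, with the assumption that the JSON field names ("Status", "ID", "Name") match what recent podman releases emit:

```python
import json
import subprocess

# Stream podman lifecycle events, one JSON object per line, filtered to the
# start/died transitions seen above. Field names are an assumption based on
# recent podman versions and may differ on older releases.
proc = subprocess.Popen(
    ["podman", "events", "--format", "json",
     "--filter", "event=start", "--filter", "event=died"],
    stdout=subprocess.PIPE, text=True)

for line in proc.stdout:
    ev = json.loads(line)
    print(ev.get("Status"), str(ev.get("ID", ""))[:12], ev.get("Name"))
```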
Feb  2 05:20:53 np0005604790 podman[287898]: 2026-02-02 10:20:53.600787297 +0000 UTC m=+0.049341720 container create 1fa574596ade31fa79c4c93d48e7c8cb8138bbcb01791ff55310915f9c503b50 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 05:20:53 np0005604790 systemd[1]: Started libpod-conmon-1fa574596ade31fa79c4c93d48e7c8cb8138bbcb01791ff55310915f9c503b50.scope.
Feb  2 05:20:53 np0005604790 podman[287898]: 2026-02-02 10:20:53.578572958 +0000 UTC m=+0.027127551 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:20:53 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:20:53 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf0bdbaab341a5813c16f2a4bf7ec5100f1df4213516687ac5830278b89c0b34/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 05:20:53 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf0bdbaab341a5813c16f2a4bf7ec5100f1df4213516687ac5830278b89c0b34/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 05:20:53 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf0bdbaab341a5813c16f2a4bf7ec5100f1df4213516687ac5830278b89c0b34/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 05:20:53 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf0bdbaab341a5813c16f2a4bf7ec5100f1df4213516687ac5830278b89c0b34/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 05:20:53 np0005604790 podman[287898]: 2026-02-02 10:20:53.710507807 +0000 UTC m=+0.159062230 container init 1fa574596ade31fa79c4c93d48e7c8cb8138bbcb01791ff55310915f9c503b50 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 05:20:53 np0005604790 podman[287898]: 2026-02-02 10:20:53.719773123 +0000 UTC m=+0.168327516 container start 1fa574596ade31fa79c4c93d48e7c8cb8138bbcb01791ff55310915f9c503b50 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_darwin, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 05:20:53 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1248: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 775 B/s rd, 0 op/s
Feb  2 05:20:53 np0005604790 podman[287898]: 2026-02-02 10:20:53.723939424 +0000 UTC m=+0.172493857 container attach 1fa574596ade31fa79c4c93d48e7c8cb8138bbcb01791ff55310915f9c503b50 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_darwin, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True)
Feb  2 05:20:54 np0005604790 lvm[287991]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 05:20:54 np0005604790 lvm[287991]: VG ceph_vg0 finished
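The two `lvm` lines are event-based autoactivation confirming that every PV of `ceph_vg0` (here just `/dev/loop3`) is online. The same completeness check can be made on demand through lvm2's JSON reporting; a sketch:

```python
import json
import subprocess

# Query ceph_vg0 with lvm2's JSON reporter and dump the VG attributes; the
# {"report": [{"vg": [...]}]} nesting is lvm2's standard report layout.
out = subprocess.check_output(
    ["vgs", "--reportformat", "json", "ceph_vg0"], text=True)
for vg in json.loads(out)["report"][0]["vg"]:
    print(vg)
```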
Feb  2 05:20:54 np0005604790 hopeful_darwin[287915]: {}
Feb  2 05:20:54 np0005604790 systemd[1]: libpod-1fa574596ade31fa79c4c93d48e7c8cb8138bbcb01791ff55310915f9c503b50.scope: Deactivated successfully.
Feb  2 05:20:54 np0005604790 systemd[1]: libpod-1fa574596ade31fa79c4c93d48e7c8cb8138bbcb01791ff55310915f9c503b50.scope: Consumed 1.112s CPU time.
Feb  2 05:20:54 np0005604790 podman[287898]: 2026-02-02 10:20:54.46118149 +0000 UTC m=+0.909735923 container died 1fa574596ade31fa79c4c93d48e7c8cb8138bbcb01791ff55310915f9c503b50 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 05:20:54 np0005604790 systemd[1]: var-lib-containers-storage-overlay-bf0bdbaab341a5813c16f2a4bf7ec5100f1df4213516687ac5830278b89c0b34-merged.mount: Deactivated successfully.
Feb  2 05:20:54 np0005604790 podman[287898]: 2026-02-02 10:20:54.515956283 +0000 UTC m=+0.964510676 container remove 1fa574596ade31fa79c4c93d48e7c8cb8138bbcb01791ff55310915f9c503b50 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 05:20:54 np0005604790 podman[287995]: 2026-02-02 10:20:54.521455378 +0000 UTC m=+0.082031617 container health_status 29bea7eb4976451e3dfdb1e9a3aba30b10e64cbce131bf8d09548b4a224f8ee8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
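The `health_status=healthy` event embeds the entire edpm-ansible container definition, including its healthcheck: run `/openstack/healthcheck` from the bind-mounted `/var/lib/openstack/healthchecks/ovn_metadata_agent` directory. podman's healthcheck timer generates these events periodically; the same test can be triggered by hand, where exit code 0 means healthy:

```python
import subprocess

# Run the configured healthcheck for ovn_metadata_agent on demand. podman
# executes the container's defined test command and maps the result to the
# exit code (0 = healthy, non-zero = unhealthy).
result = subprocess.run(
    ["podman", "healthcheck", "run", "ovn_metadata_agent"],
    capture_output=True, text=True)
print("healthy" if result.returncode == 0
      else f"unhealthy: {result.stdout or result.stderr}")
```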
Feb  2 05:20:54 np0005604790 systemd[1]: libpod-conmon-1fa574596ade31fa79c4c93d48e7c8cb8138bbcb01791ff55310915f9c503b50.scope: Deactivated successfully.
Feb  2 05:20:54 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 05:20:54 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:20:54 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 05:20:54 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:20:54 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:20:54 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:20:54 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:20:54] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Feb  2 05:20:54 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:20:54] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
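The paired mgr lines are Prometheus scraping the ceph-mgr `prometheus` module every ten seconds, about 48 kB of metrics per scrape. A sketch of the same pull; the port is an assumption (9283 is the module default) since it never appears in the log:

```python
import urllib.request

# Scrape the mgr prometheus module directly. Port 9283 is the module's
# default and is assumed here; the log only shows the request path.
url = "http://192.168.122.100:9283/metrics"
with urllib.request.urlopen(url, timeout=5) as resp:
    body = resp.read().decode()

# ceph_health_status is one of the exported gauges (0 = HEALTH_OK).
print([line for line in body.splitlines()
       if line.startswith("ceph_health_status")])
```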
Feb  2 05:20:55 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:20:55 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:20:55 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:20:55.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:20:55 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:20:55 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:20:55 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:20:55.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:20:55 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1249: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 517 B/s rd, 0 op/s
Feb  2 05:20:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:20:55 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:20:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:20:55 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:20:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:20:55 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:20:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:20:56 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
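ganesha re-enters its 90-second NFS grace period every five seconds in this log; `reclaim complete(0) clid count(0)` shows no clients hold reclaimable state, so each entry is immediately liftable. A restart normally enters grace once, so the re-arming is worth quantifying and correlating with the cephadm activity elsewhere in this log. A sketch that counts the events; the syslog path is an assumption:

```python
import re
from collections import Counter

# Count ganesha grace-period entries per duration. /var/log/messages is an
# assumed location for the journal lines shown above.
GRACE = re.compile(r"nfs_start_grace .*NFS Server Now IN GRACE, duration (\d+)")

counts = Counter()
with open("/var/log/messages", errors="replace") as log:
    for line in log:
        m = GRACE.search(line)
        if m:
            counts[int(m.group(1))] += 1

print(dict(counts))  # e.g. {90: 4} -> four 90-second grace entries
```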
Feb  2 05:20:57 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:20:57 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:20:57 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:20:57.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:20:57 np0005604790 nova_compute[252672]: 2026-02-02 10:20:57.176 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:20:57 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:20:57.227Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:20:57 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:20:57 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:20:57 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:20:57.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:20:57 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:20:57 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1250: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 517 B/s rd, 0 op/s
Feb  2 05:20:58 np0005604790 nova_compute[252672]: 2026-02-02 10:20:58.402 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:20:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:20:58.888Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
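Alertmanager's dispatcher cannot deliver to the ceph-dashboard webhook receivers on compute-1 and compute-2; the errors alternate between `context deadline exceeded` and `dial tcp ... i/o timeout`, both pointing at port 8443 being unreachable rather than actively refusing. A direct probe of one receiver, sketched with an illustrative payload that is not Alertmanager's real notification body:

```python
import json
import urllib.request

# Reproduce the webhook POST that Alertmanager keeps retrying, to test
# reachability of the dashboard receiver directly. The empty-alert payload
# is illustrative only.
url = "http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver"
payload = json.dumps({"alerts": []}).encode()
req = urllib.request.Request(url, data=payload,
                             headers={"Content-Type": "application/json"})
try:
    with urllib.request.urlopen(req, timeout=5) as resp:
        print("receiver reachable:", resp.status)
except OSError as exc:  # timeout / connection refused, as in the log
    print("receiver unreachable:", exc)
```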
Feb  2 05:20:59 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:20:59 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:20:59 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:20:59.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:20:59 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:20:59 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:20:59 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:20:59.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:20:59 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1251: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 775 B/s rd, 0 op/s
Feb  2 05:21:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:21:00 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:21:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:21:00 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:21:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:21:00 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:21:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:21:01 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:21:01 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:21:01 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:21:01 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:21:01.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:21:01 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:21:01 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:21:01 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:21:01.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:21:01 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1252: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:21:02 np0005604790 nova_compute[252672]: 2026-02-02 10:21:02.180 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:21:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 05:21:02 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 05:21:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:21:03 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:21:03 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:21:03 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:21:03.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:21:03 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:21:03 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:21:03 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:21:03.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:21:03 np0005604790 nova_compute[252672]: 2026-02-02 10:21:03.405 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:21:03 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1253: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:21:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:21:04] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Feb  2 05:21:04 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:21:04] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Feb  2 05:21:05 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:21:05 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:21:05 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:21:05.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:21:05 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:21:05 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:21:05 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:21:05.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:21:05 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1254: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:21:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:21:05 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:21:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:21:05 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:21:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:21:05 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:21:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:21:06 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:21:07 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:21:07 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:21:07 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:21:07.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:21:07 np0005604790 nova_compute[252672]: 2026-02-02 10:21:07.183 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:21:07 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:21:07.228Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:21:07 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:21:07 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:21:07 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:21:07.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:21:07 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:21:07 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1255: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:21:08 np0005604790 nova_compute[252672]: 2026-02-02 10:21:08.405 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:21:08 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:21:08.889Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:21:09 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:21:09 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:21:09 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:21:09.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:21:09 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:21:09 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:21:09 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:21:09.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:21:09 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1256: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:21:11 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:21:10 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:21:11 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:21:10 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:21:11 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:21:10 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:21:11 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:21:11 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:21:11 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:21:11 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:21:11 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:21:11.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:21:11 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:21:11 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:21:11 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:21:11.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:21:11 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1257: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:21:12 np0005604790 nova_compute[252672]: 2026-02-02 10:21:12.186 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:21:12 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:21:13 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:21:13 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:21:13 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:21:13.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:21:13 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:21:13 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:21:13 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:21:13.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:21:13 np0005604790 nova_compute[252672]: 2026-02-02 10:21:13.409 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:21:13 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1258: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:21:14 np0005604790 nova_compute[252672]: 2026-02-02 10:21:14.281 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:21:14 np0005604790 nova_compute[252672]: 2026-02-02 10:21:14.309 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 05:21:14 np0005604790 nova_compute[252672]: 2026-02-02 10:21:14.310 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 05:21:14 np0005604790 nova_compute[252672]: 2026-02-02 10:21:14.310 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 05:21:14 np0005604790 nova_compute[252672]: 2026-02-02 10:21:14.310 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb  2 05:21:14 np0005604790 nova_compute[252672]: 2026-02-02 10:21:14.310 252676 DEBUG oslo_concurrency.processutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb  2 05:21:14 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 05:21:14 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1499987093' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb  2 05:21:14 np0005604790 nova_compute[252672]: 2026-02-02 10:21:14.776 252676 DEBUG oslo_concurrency.processutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
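This is nova's resource-tracker audit sizing its RBD-backed storage: it shells out to `ceph df` with the `openstack` client id and conf path taken straight from the command line in the log. Reproduced standalone, with pool stats field names as in current Ceph releases:

```python
import json
import subprocess

# The exact command nova runs above, executed standalone, summarising pool
# usage from the JSON report.
out = subprocess.check_output(
    ["ceph", "df", "--format=json", "--id", "openstack",
     "--conf", "/etc/ceph/ceph.conf"], text=True)

for pool in json.loads(out)["pools"]:
    stats = pool["stats"]
    print(f"{pool['name']}: used={stats['bytes_used']} "
          f"max_avail={stats['max_avail']}")
```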
Feb  2 05:21:14 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:21:14] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Feb  2 05:21:14 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:21:14] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Feb  2 05:21:14 np0005604790 nova_compute[252672]: 2026-02-02 10:21:14.991 252676 WARNING nova.virt.libvirt.driver [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb  2 05:21:14 np0005604790 nova_compute[252672]: 2026-02-02 10:21:14.993 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4446MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb  2 05:21:14 np0005604790 nova_compute[252672]: 2026-02-02 10:21:14.993 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 05:21:14 np0005604790 nova_compute[252672]: 2026-02-02 10:21:14.993 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 05:21:15 np0005604790 nova_compute[252672]: 2026-02-02 10:21:15.065 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb  2 05:21:15 np0005604790 nova_compute[252672]: 2026-02-02 10:21:15.066 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb  2 05:21:15 np0005604790 nova_compute[252672]: 2026-02-02 10:21:15.084 252676 DEBUG oslo_concurrency.processutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb  2 05:21:15 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:21:15 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:21:15 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:21:15.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:21:15 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:21:15 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:21:15 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:21:15.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:21:15 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 05:21:15 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/462763208' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb  2 05:21:15 np0005604790 nova_compute[252672]: 2026-02-02 10:21:15.552 252676 DEBUG oslo_concurrency.processutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 05:21:15 np0005604790 nova_compute[252672]: 2026-02-02 10:21:15.559 252676 DEBUG nova.compute.provider_tree [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Inventory has not changed in ProviderTree for provider: 9e3db6fc-2145-4f13-bc7c-f7ae57d4e004 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb  2 05:21:15 np0005604790 nova_compute[252672]: 2026-02-02 10:21:15.574 252676 DEBUG nova.scheduler.client.report [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Inventory has not changed for provider 9e3db6fc-2145-4f13-bc7c-f7ae57d4e004 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
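The inventory dict nova reports to placement carries the knobs that turn raw capacity into schedulable capacity; to a first approximation, placement enforces `(total - reserved) * allocation_ratio` per resource class. Worked through with the values logged above:

```python
# Effective capacity per resource class from the inventory logged above,
# using placement's (total - reserved) * allocation_ratio rule of thumb.
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    effective = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {effective:g} allocatable")
# -> VCPU: 32, MEMORY_MB: 7167, DISK_GB: 52.2
```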
Feb  2 05:21:15 np0005604790 nova_compute[252672]: 2026-02-02 10:21:15.575 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb  2 05:21:15 np0005604790 nova_compute[252672]: 2026-02-02 10:21:15.575 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.582s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 05:21:15 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1259: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:21:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:21:15 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:21:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:21:15 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:21:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:21:15 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:21:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:21:16 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:21:16 np0005604790 podman[288140]: 2026-02-02 10:21:16.405325457 +0000 UTC m=+0.107681898 container health_status e39122d9482da8df802204ab6c35fb7c982874580968140e6f06bdfc8eefae36 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
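[Annotation] The podman health_status events in this capture (this one and the ovn_metadata_agent one below) embed the entire container spec as a Python-literal config_data dict, so details such as bind mounts can be recovered from a captured line with ast.literal_eval. A sketch, using brace matching because field order varies between events (the sample line is abbreviated, not the full record above):

import ast

def extract_config_data(line):
    # take the balanced {...} literal that follows "config_data="
    i = line.index("config_data=") + len("config_data=")
    depth, j = 0, i
    while True:
        if line[j] == "{":
            depth += 1
        elif line[j] == "}":
            depth -= 1
            if depth == 0:
                break
        j += 1
    return ast.literal_eval(line[i:j + 1])

event = ("container health_status ... config_data={'net': 'host', "
         "'privileged': True, 'volumes': ['/run:/run']}, "
         "config_id=ovn_controller)")
print(extract_config_data(event)['volumes'])   # -> ['/run:/run']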
Feb  2 05:21:17 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:21:17 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:21:17 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:21:17.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
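[Annotation] This start/done/access-line triple is the radosgw request lifecycle. Anonymous "HEAD / HTTP/1.0" probes arrive from 192.168.122.100 and 192.168.122.102 every two seconds for the rest of the capture and always return 200 within about a millisecond, the signature of load-balancer health checks rather than user traffic (an inference from the pattern; the log does not say who sends them). Reproducing one probe by hand, assuming the RGW frontend listens on port 8080 (the port is not shown in this excerpt):

import http.client

conn = http.client.HTTPConnection("192.168.122.100", 8080, timeout=2)
conn.request("HEAD", "/")       # same anonymous probe as in the log
resp = conn.getresponse()
print(resp.status)              # a healthy RGW answers 200
conn.close()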
Feb  2 05:21:17 np0005604790 nova_compute[252672]: 2026-02-02 10:21:17.188 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:21:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] Optimize plan auto_2026-02-02_10:21:17
Feb  2 05:21:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 05:21:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] do_upmap
Feb  2 05:21:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] pools ['volumes', 'vms', 'default.rgw.control', 'default.rgw.meta', '.nfs', 'backups', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.mgr', 'images']
Feb  2 05:21:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] prepared 0/10 upmap changes
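[Annotation] This balancer pass runs in upmap mode under a 5% misplaced-PG budget and prepares 0 of a possible 10 upmap changes, meaning the PG distribution across OSDs is already within tolerance for the listed pools. The budget gate is simple arithmetic; a sketch (the function is mine, not the mgr module's):

def balancer_may_proceed(misplaced_pgs, total_pgs, max_misplaced=0.05):
    # the balancer holds off while the misplaced fraction is at or
    # over budget ("max misplaced 0.050000" in the log)
    return misplaced_pgs / total_pgs < max_misplaced

print(balancer_may_proceed(0, 353))    # -> True, free to optimize
print(balancer_may_proceed(20, 353))   # ~5.7% misplaced -> False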
Feb  2 05:21:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 05:21:17 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 05:21:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:21:17.229Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:21:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:21:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:21:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:21:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:21:17 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:21:17 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:21:17 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:21:17.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:21:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:21:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:21:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 05:21:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 05:21:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 05:21:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 05:21:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: images, start_after=
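[Annotation] Here the mgr's rbd_support module reloads mirror-snapshot schedules for each RBD pool; the empty start_after means it rescans from the beginning, and no schedules are listed. The same information is available from the rbd CLI; a sketch, assuming the CLI and suitable credentials exist on this node (the log does not show them):

import subprocess

for pool in ("vms", "volumes", "backups", "images"):
    out = subprocess.run(
        ["rbd", "mirror", "snapshot", "schedule", "ls", "--pool", pool],
        capture_output=True, text=True)
    print(pool, "->", out.stdout.strip() or "(no schedules)")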
Feb  2 05:21:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
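[Annotation] The mon re-evaluates its cache budget every few seconds, and the three allocations in this line account for the budget almost exactly. Worked out from the logged values (the key/value share feeds the mon's backing store cache):

MiB = 1024 * 1024
cache_size = 1020054731   # total budget, ~972.8 MiB
inc_alloc  = 348127232    # 332 MiB
full_alloc = 348127232    # 332 MiB
kv_alloc   = 318767104    # 304 MiB

print((inc_alloc + full_alloc + kv_alloc) / MiB)   # 968.0 MiB allocated
print(cache_size / MiB)                            # ~972.8 MiB budgeted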
Feb  2 05:21:17 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1260: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:21:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 05:21:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:21:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 05:21:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:21:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb  2 05:21:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:21:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:21:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:21:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:21:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:21:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Feb  2 05:21:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:21:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Feb  2 05:21:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:21:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:21:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:21:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 05:21:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:21:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Feb  2 05:21:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:21:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:21:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:21:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 05:21:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:21:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
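[Annotation] Every pg target in the autoscaler block above reproduces as usage_ratio * bias * 300, consistent with a budget of 300 PGs, e.g. the default mon_target_pg_per_osd=100 across 3 OSDs (the OSD count is inferred from the arithmetic, not logged here); the raw target is then subject to per-pool floors and power-of-two quantization, and only applied when it diverges far enough from the current pg_num, which is why 'cephfs.cephfs.meta' shows 16 against an unchanged current 32. Checking two logged lines:

def pg_target(usage_ratio, bias, pg_budget=300):
    return usage_ratio * bias * pg_budget

# Pool '.mgr': bias 1.0
print(pg_target(7.185749983720779e-06, 1.0))  # 0.0021557249951162337
# Pool 'cephfs.cephfs.meta': bias 4.0
print(pg_target(5.087256625643029e-07, 4.0))  # 0.0006104707950771635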
Feb  2 05:21:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 05:21:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 05:21:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 05:21:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 05:21:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 05:21:18 np0005604790 nova_compute[252672]: 2026-02-02 10:21:18.411 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:21:18 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:21:18.891Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:21:19 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:21:19 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:21:19 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:21:19.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:21:19 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:21:19 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:21:19 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:21:19.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:21:19 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1261: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:21:20 np0005604790 nova_compute[252672]: 2026-02-02 10:21:20.576 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:21:21 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:21:21 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:21:21 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:21:21 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:21:21 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:21:21 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:21:21 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:21:21 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:21:21 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:21:21 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:21:21 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:21:21.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:21:21 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:21:21 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:21:21 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:21:21.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:21:21 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1262: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:21:22 np0005604790 nova_compute[252672]: 2026-02-02 10:21:22.192 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:21:22 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:21:23 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:21:23 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:21:23 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:21:23.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:21:23 np0005604790 nova_compute[252672]: 2026-02-02 10:21:23.282 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:21:23 np0005604790 nova_compute[252672]: 2026-02-02 10:21:23.283 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb  2 05:21:23 np0005604790 nova_compute[252672]: 2026-02-02 10:21:23.284 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb  2 05:21:23 np0005604790 nova_compute[252672]: 2026-02-02 10:21:23.303 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb  2 05:21:23 np0005604790 nova_compute[252672]: 2026-02-02 10:21:23.303 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:21:23 np0005604790 nova_compute[252672]: 2026-02-02 10:21:23.304 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
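[Annotation] These DEBUG lines are nova-compute's periodic task loop: oslo.service iterates the registered tasks (_heal_instance_info_cache, _reclaim_queued_deletes, and the pollers below) and each decides for itself whether there is work; _reclaim_queued_deletes bails out because reclaim_instance_interval is not set to a positive value. The registration pattern, reduced to a toy manager (task bodies are mine):

from oslo_config import cfg
from oslo_service import periodic_task

class ToyManager(periodic_task.PeriodicTasks):
    def __init__(self):
        super().__init__(cfg.CONF)

    @periodic_task.periodic_task
    def _heal_instance_info_cache(self, context):
        print("rebuilding instance network-info cache")

    @periodic_task.periodic_task(spacing=60)
    def _reclaim_queued_deletes(self, context):
        # nova's version returns early when
        # CONF.reclaim_instance_interval <= 0, as logged above
        print("reclaiming soft-deleted instances")

ToyManager().run_periodic_tasks(context=None)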
Feb  2 05:21:23 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:21:23 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:21:23 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:21:23.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:21:23 np0005604790 nova_compute[252672]: 2026-02-02 10:21:23.415 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:21:23 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1263: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:21:24 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:21:24] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Feb  2 05:21:24 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:21:24] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Feb  2 05:21:25 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:21:25 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:21:25 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:21:25.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:21:25 np0005604790 nova_compute[252672]: 2026-02-02 10:21:25.282 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:21:25 np0005604790 nova_compute[252672]: 2026-02-02 10:21:25.283 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:21:25 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:21:25 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:21:25 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:21:25.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:21:25 np0005604790 podman[288200]: 2026-02-02 10:21:25.357855891 +0000 UTC m=+0.065487458 container health_status 29bea7eb4976451e3dfdb1e9a3aba30b10e64cbce131bf8d09548b4a224f8ee8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Feb  2 05:21:25 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1264: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:21:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:21:25 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:21:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:21:25 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:21:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:21:25 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:21:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:21:26 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:21:26 np0005604790 nova_compute[252672]: 2026-02-02 10:21:26.282 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:21:27 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:21:27 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:21:27 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:21:27.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:21:27 np0005604790 nova_compute[252672]: 2026-02-02 10:21:27.195 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:21:27 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:21:27.230Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb  2 05:21:27 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:21:27.230Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb  2 05:21:27 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:21:27.230Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
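[Annotation] This warn/warn/error cluster shows why the Alertmanager dispatch errors recur throughout the capture: the Ceph dashboard's prometheus_receiver on compute-1 and compute-2 never answers (dial timeouts, then context deadline exceeded), so every notification exhausts its retry budget. Health probes from 192.168.122.102 keep arriving at radosgw here, so the peer hosts themselves are up; the 8443 dashboard endpoint being down or filtered on those two nodes is the likely culprit (an inference, not stated by the log). A quick reachability check against the same targets:

import socket

for host in ("compute-1.ctlplane.example.com",
             "compute-2.ctlplane.example.com"):
    try:
        socket.create_connection((host, 8443), timeout=5).close()
        print(host, "port 8443 reachable")
    except OSError as exc:
        print(host, "port 8443 unreachable:", exc)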
Feb  2 05:21:27 np0005604790 nova_compute[252672]: 2026-02-02 10:21:27.277 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:21:27 np0005604790 nova_compute[252672]: 2026-02-02 10:21:27.280 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:21:27 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:21:27 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:21:27 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:21:27.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:21:27 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:21:27 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1265: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:21:28 np0005604790 nova_compute[252672]: 2026-02-02 10:21:28.419 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:21:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:21:28.892Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:21:29 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:21:29 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:21:29 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:21:29.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:21:29 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:21:29 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:21:29 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:21:29.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:21:29 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1266: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:21:31 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:21:30 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:21:31 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:21:30 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:21:31 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:21:30 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:21:31 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:21:31 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:21:31 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:21:31 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:21:31 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:21:31.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:21:31 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:21:31 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:21:31 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:21:31.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:21:31 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1267: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:21:32 np0005604790 nova_compute[252672]: 2026-02-02 10:21:32.198 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:21:32 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 05:21:32 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 05:21:32 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:21:33 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:21:33 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:21:33 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:21:33.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:21:33 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:21:33 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:21:33 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:21:33.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:21:33 np0005604790 nova_compute[252672]: 2026-02-02 10:21:33.420 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:21:33 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1268: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:21:34 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:21:34] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Feb  2 05:21:34 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:21:34] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Feb  2 05:21:35 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:21:35 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:21:35 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:21:35.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:21:35 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:21:35 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:21:35 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:21:35.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:21:35 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1269: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:21:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:21:35 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:21:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:21:35 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:21:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:21:35 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:21:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:21:36 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:21:37 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:21:37 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:21:37 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:21:37.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:21:37 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:21:37.246Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:21:37 np0005604790 nova_compute[252672]: 2026-02-02 10:21:37.250 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:21:37 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:21:37 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:21:37 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:21:37.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:21:37 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:21:37 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1270: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:21:38 np0005604790 nova_compute[252672]: 2026-02-02 10:21:38.424 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:21:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:21:38.893Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb  2 05:21:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:21:38.894Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb  2 05:21:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:21:38.894Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb  2 05:21:39 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:21:39 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:21:39 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:21:39.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:21:39 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:21:39 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:21:39 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:21:39.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:21:39 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1271: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:21:41 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:21:40 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:21:41 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:21:40 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:21:41 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:21:40 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:21:41 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:21:41 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:21:41 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:21:41 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:21:41 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:21:41.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:21:41 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:21:41 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:21:41 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:21:41.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:21:41 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1272: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:21:42 np0005604790 nova_compute[252672]: 2026-02-02 10:21:42.253 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:21:42 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:21:43 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:21:43 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:21:43 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:21:43.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:21:43 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:21:43 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:21:43 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:21:43.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:21:43 np0005604790 nova_compute[252672]: 2026-02-02 10:21:43.462 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:21:43 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1273: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:21:44 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:21:44] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Feb  2 05:21:44 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:21:44] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Feb  2 05:21:45 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:21:45 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:21:45 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:21:45.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:21:45 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:21:45 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:21:45 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:21:45.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:21:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:21:45.396 165364 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 05:21:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:21:45.396 165364 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 05:21:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:21:45.397 165364 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
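[Annotation] The metadata agent serializes its child-process health sweep with an oslo.concurrency named lock, held for under a millisecond above. The same pattern in miniature (lock name kept from the log, body mine):

from oslo_concurrency import lockutils

@lockutils.synchronized("_check_child_processes")
def check_child_processes():
    # concurrent callers queue on the named in-process lock, producing
    # the acquire/release pairs seen in the DEBUG lines above
    print("checking respawn state of child processes")

check_child_processes()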
Feb  2 05:21:45 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1274: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:21:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:21:45 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:21:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:21:45 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:21:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:21:45 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:21:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:21:46 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:21:47 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:21:47 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:21:47 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:21:47.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:21:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 05:21:47 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 05:21:47 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:21:47.248Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:21:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:21:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:21:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:21:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:21:47 np0005604790 nova_compute[252672]: 2026-02-02 10:21:47.311 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:21:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:21:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:21:47 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:21:47 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:21:47 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:21:47.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:21:47 np0005604790 podman[288266]: 2026-02-02 10:21:47.424372794 +0000 UTC m=+0.089630809 container health_status e39122d9482da8df802204ab6c35fb7c982874580968140e6f06bdfc8eefae36 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Feb  2 05:21:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:21:47 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1275: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:21:48 np0005604790 nova_compute[252672]: 2026-02-02 10:21:48.465 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:21:48 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:21:48.895Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:21:49 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:21:49 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:21:49 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:21:49.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:21:49 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:21:49 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:21:49 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:21:49.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:21:49 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1276: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:21:51 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:21:50 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:21:51 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:21:50 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:21:51 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:21:50 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:21:51 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:21:51 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:21:51 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:21:51 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:21:51 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:21:51.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:21:51 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:21:51 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:21:51 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:21:51.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:21:51 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1277: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:21:52 np0005604790 nova_compute[252672]: 2026-02-02 10:21:52.315 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:21:52 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:21:53 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:21:53 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:21:53 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:21:53.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:21:53 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:21:53 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:21:53 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:21:53.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:21:53 np0005604790 nova_compute[252672]: 2026-02-02 10:21:53.502 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:21:53 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1278: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:21:54 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:21:54] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Feb  2 05:21:54 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:21:54] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Feb  2 05:21:55 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:21:55 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:21:55 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:21:55.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:21:55 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:21:55 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:21:55 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:21:55.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:21:55 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 05:21:55 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 05:21:55 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 05:21:55 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb  2 05:21:55 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1279: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 518 B/s rd, 0 op/s
Feb  2 05:21:55 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 05:21:55 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:21:55 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Feb  2 05:21:55 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:21:55 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 05:21:55 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb  2 05:21:55 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 05:21:55 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb  2 05:21:55 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 05:21:55 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 05:21:55 np0005604790 podman[288407]: 2026-02-02 10:21:55.778211224 +0000 UTC m=+0.066438023 container health_status 29bea7eb4976451e3dfdb1e9a3aba30b10e64cbce131bf8d09548b4a224f8ee8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 05:21:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:21:55 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:21:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:21:55 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:21:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:21:55 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:21:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:21:56 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:21:56 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb  2 05:21:56 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:21:56 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:21:56 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb  2 05:21:56 np0005604790 podman[288494]: 2026-02-02 10:21:56.170452018 +0000 UTC m=+0.058125982 container create de4a55da2b4404e5db3151de4c9e0f139f12cb9d0949da210cc442e4a9c7d4b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_jackson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325)
Feb  2 05:21:56 np0005604790 systemd[1]: Started libpod-conmon-de4a55da2b4404e5db3151de4c9e0f139f12cb9d0949da210cc442e4a9c7d4b7.scope.
Feb  2 05:21:56 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:21:56 np0005604790 podman[288494]: 2026-02-02 10:21:56.142115327 +0000 UTC m=+0.029789291 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:21:56 np0005604790 podman[288494]: 2026-02-02 10:21:56.25006167 +0000 UTC m=+0.137735614 container init de4a55da2b4404e5db3151de4c9e0f139f12cb9d0949da210cc442e4a9c7d4b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_jackson, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Feb  2 05:21:56 np0005604790 podman[288494]: 2026-02-02 10:21:56.259779098 +0000 UTC m=+0.147453022 container start de4a55da2b4404e5db3151de4c9e0f139f12cb9d0949da210cc442e4a9c7d4b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_jackson, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 05:21:56 np0005604790 podman[288494]: 2026-02-02 10:21:56.263854076 +0000 UTC m=+0.151528030 container attach de4a55da2b4404e5db3151de4c9e0f139f12cb9d0949da210cc442e4a9c7d4b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Feb  2 05:21:56 np0005604790 goofy_jackson[288511]: 167 167
Feb  2 05:21:56 np0005604790 systemd[1]: libpod-de4a55da2b4404e5db3151de4c9e0f139f12cb9d0949da210cc442e4a9c7d4b7.scope: Deactivated successfully.
Feb  2 05:21:56 np0005604790 podman[288494]: 2026-02-02 10:21:56.267130783 +0000 UTC m=+0.154804727 container died de4a55da2b4404e5db3151de4c9e0f139f12cb9d0949da210cc442e4a9c7d4b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_jackson, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb  2 05:21:56 np0005604790 systemd[1]: var-lib-containers-storage-overlay-058d9b72d6e006e449296ade903d0cd8aa7eba35e3a503895f5b3d097d42dfcd-merged.mount: Deactivated successfully.
Feb  2 05:21:56 np0005604790 podman[288494]: 2026-02-02 10:21:56.316263126 +0000 UTC m=+0.203937050 container remove de4a55da2b4404e5db3151de4c9e0f139f12cb9d0949da210cc442e4a9c7d4b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_jackson, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb  2 05:21:56 np0005604790 systemd[1]: libpod-conmon-de4a55da2b4404e5db3151de4c9e0f139f12cb9d0949da210cc442e4a9c7d4b7.scope: Deactivated successfully.
Feb  2 05:21:56 np0005604790 podman[288536]: 2026-02-02 10:21:56.489603384 +0000 UTC m=+0.064232475 container create 6ec6bea79a50c29c864b732cf021270bd3ff525ce170bdd80cc99b8ae4968857 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_shtern, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 05:21:56 np0005604790 systemd[1]: Started libpod-conmon-6ec6bea79a50c29c864b732cf021270bd3ff525ce170bdd80cc99b8ae4968857.scope.
Feb  2 05:21:56 np0005604790 podman[288536]: 2026-02-02 10:21:56.452080839 +0000 UTC m=+0.026709970 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:21:56 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:21:56 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85c5781e0729e817b86d0f698eb8a6989975b497cf30e65d87a31626edb46874/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 05:21:56 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85c5781e0729e817b86d0f698eb8a6989975b497cf30e65d87a31626edb46874/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 05:21:56 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85c5781e0729e817b86d0f698eb8a6989975b497cf30e65d87a31626edb46874/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 05:21:56 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85c5781e0729e817b86d0f698eb8a6989975b497cf30e65d87a31626edb46874/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 05:21:56 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85c5781e0729e817b86d0f698eb8a6989975b497cf30e65d87a31626edb46874/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 05:21:56 np0005604790 podman[288536]: 2026-02-02 10:21:56.619301425 +0000 UTC m=+0.193930546 container init 6ec6bea79a50c29c864b732cf021270bd3ff525ce170bdd80cc99b8ae4968857 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_shtern, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid)
Feb  2 05:21:56 np0005604790 podman[288536]: 2026-02-02 10:21:56.625622112 +0000 UTC m=+0.200251203 container start 6ec6bea79a50c29c864b732cf021270bd3ff525ce170bdd80cc99b8ae4968857 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Feb  2 05:21:56 np0005604790 podman[288536]: 2026-02-02 10:21:56.634539209 +0000 UTC m=+0.209168400 container attach 6ec6bea79a50c29c864b732cf021270bd3ff525ce170bdd80cc99b8ae4968857 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_shtern, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 05:21:56 np0005604790 nice_shtern[288553]: --> passed data devices: 0 physical, 1 LVM
Feb  2 05:21:56 np0005604790 nice_shtern[288553]: --> All data devices are unavailable
Feb  2 05:21:57 np0005604790 systemd[1]: libpod-6ec6bea79a50c29c864b732cf021270bd3ff525ce170bdd80cc99b8ae4968857.scope: Deactivated successfully.
Feb  2 05:21:57 np0005604790 podman[288536]: 2026-02-02 10:21:57.028367305 +0000 UTC m=+0.602996416 container died 6ec6bea79a50c29c864b732cf021270bd3ff525ce170bdd80cc99b8ae4968857 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_shtern, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Feb  2 05:21:57 np0005604790 systemd[1]: var-lib-containers-storage-overlay-85c5781e0729e817b86d0f698eb8a6989975b497cf30e65d87a31626edb46874-merged.mount: Deactivated successfully.
Feb  2 05:21:57 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:21:57 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:21:57 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:21:57.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:21:57 np0005604790 podman[288536]: 2026-02-02 10:21:57.221046377 +0000 UTC m=+0.795675468 container remove 6ec6bea79a50c29c864b732cf021270bd3ff525ce170bdd80cc99b8ae4968857 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_shtern, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 05:21:57 np0005604790 systemd[1]: libpod-conmon-6ec6bea79a50c29c864b732cf021270bd3ff525ce170bdd80cc99b8ae4968857.scope: Deactivated successfully.
Feb  2 05:21:57 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:21:57.249Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb  2 05:21:57 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:21:57.251Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb  2 05:21:57 np0005604790 nova_compute[252672]: 2026-02-02 10:21:57.345 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:21:57 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:21:57 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:21:57 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:21:57.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:21:57 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1280: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 518 B/s rd, 0 op/s
Feb  2 05:21:57 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:21:57 np0005604790 podman[288671]: 2026-02-02 10:21:57.88509775 +0000 UTC m=+0.045210850 container create eb5bea885c6229457a81d306d97bc2f86a110d2bd8f6e73c832622d9e7f40b34 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_hopper, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid)
Feb  2 05:21:57 np0005604790 systemd[1]: Started libpod-conmon-eb5bea885c6229457a81d306d97bc2f86a110d2bd8f6e73c832622d9e7f40b34.scope.
Feb  2 05:21:57 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:21:57 np0005604790 podman[288671]: 2026-02-02 10:21:57.865733606 +0000 UTC m=+0.025846686 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:21:57 np0005604790 podman[288671]: 2026-02-02 10:21:57.967908237 +0000 UTC m=+0.128021307 container init eb5bea885c6229457a81d306d97bc2f86a110d2bd8f6e73c832622d9e7f40b34 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_hopper, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 05:21:57 np0005604790 podman[288671]: 2026-02-02 10:21:57.972974541 +0000 UTC m=+0.133087601 container start eb5bea885c6229457a81d306d97bc2f86a110d2bd8f6e73c832622d9e7f40b34 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_hopper, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Feb  2 05:21:57 np0005604790 great_hopper[288687]: 167 167
Feb  2 05:21:57 np0005604790 podman[288671]: 2026-02-02 10:21:57.977897702 +0000 UTC m=+0.138010762 container attach eb5bea885c6229457a81d306d97bc2f86a110d2bd8f6e73c832622d9e7f40b34 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_hopper, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Feb  2 05:21:57 np0005604790 systemd[1]: libpod-eb5bea885c6229457a81d306d97bc2f86a110d2bd8f6e73c832622d9e7f40b34.scope: Deactivated successfully.
Feb  2 05:21:57 np0005604790 podman[288671]: 2026-02-02 10:21:57.978690393 +0000 UTC m=+0.138803443 container died eb5bea885c6229457a81d306d97bc2f86a110d2bd8f6e73c832622d9e7f40b34 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_hopper, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Feb  2 05:21:58 np0005604790 systemd[1]: var-lib-containers-storage-overlay-ad48c0833ebe6d08c586daca7d003349ab7e02e5b7f9002a8a8bda9707f64ae1-merged.mount: Deactivated successfully.
Feb  2 05:21:58 np0005604790 podman[288671]: 2026-02-02 10:21:58.022263069 +0000 UTC m=+0.182376149 container remove eb5bea885c6229457a81d306d97bc2f86a110d2bd8f6e73c832622d9e7f40b34 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_hopper, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb  2 05:21:58 np0005604790 systemd[1]: libpod-conmon-eb5bea885c6229457a81d306d97bc2f86a110d2bd8f6e73c832622d9e7f40b34.scope: Deactivated successfully.
Feb  2 05:21:58 np0005604790 podman[288710]: 2026-02-02 10:21:58.183762662 +0000 UTC m=+0.068687773 container create 37313a286c71a3263e52ea95c3b37c2758455b0c32c58d24c285e8af8b81409a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_brattain, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 05:21:58 np0005604790 podman[288710]: 2026-02-02 10:21:58.145740674 +0000 UTC m=+0.030665795 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:21:58 np0005604790 systemd[1]: Started libpod-conmon-37313a286c71a3263e52ea95c3b37c2758455b0c32c58d24c285e8af8b81409a.scope.
Feb  2 05:21:58 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:21:58 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7601e9b7b56d56e5036d9786547dfc1181c180883ec12ece32fa7139149e6995/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 05:21:58 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7601e9b7b56d56e5036d9786547dfc1181c180883ec12ece32fa7139149e6995/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 05:21:58 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7601e9b7b56d56e5036d9786547dfc1181c180883ec12ece32fa7139149e6995/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 05:21:58 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7601e9b7b56d56e5036d9786547dfc1181c180883ec12ece32fa7139149e6995/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 05:21:58 np0005604790 podman[288710]: 2026-02-02 10:21:58.401703964 +0000 UTC m=+0.286629095 container init 37313a286c71a3263e52ea95c3b37c2758455b0c32c58d24c285e8af8b81409a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 05:21:58 np0005604790 podman[288710]: 2026-02-02 10:21:58.411468913 +0000 UTC m=+0.296394004 container start 37313a286c71a3263e52ea95c3b37c2758455b0c32c58d24c285e8af8b81409a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 05:21:58 np0005604790 podman[288710]: 2026-02-02 10:21:58.415620623 +0000 UTC m=+0.300545754 container attach 37313a286c71a3263e52ea95c3b37c2758455b0c32c58d24c285e8af8b81409a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Feb  2 05:21:58 np0005604790 nova_compute[252672]: 2026-02-02 10:21:58.505 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:21:58 np0005604790 pensive_brattain[288728]: {
Feb  2 05:21:58 np0005604790 pensive_brattain[288728]:    "1": [
Feb  2 05:21:58 np0005604790 pensive_brattain[288728]:        {
Feb  2 05:21:58 np0005604790 pensive_brattain[288728]:            "devices": [
Feb  2 05:21:58 np0005604790 pensive_brattain[288728]:                "/dev/loop3"
Feb  2 05:21:58 np0005604790 pensive_brattain[288728]:            ],
Feb  2 05:21:58 np0005604790 pensive_brattain[288728]:            "lv_name": "ceph_lv0",
Feb  2 05:21:58 np0005604790 pensive_brattain[288728]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 05:21:58 np0005604790 pensive_brattain[288728]:            "lv_size": "21470642176",
Feb  2 05:21:58 np0005604790 pensive_brattain[288728]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d241d473-9fcb-5f74-b163-f1ca4454e7f1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fabfc705-a3af-416c-81a4-3fd4d777fb5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 05:21:58 np0005604790 pensive_brattain[288728]:            "lv_uuid": "lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a",
Feb  2 05:21:58 np0005604790 pensive_brattain[288728]:            "name": "ceph_lv0",
Feb  2 05:21:58 np0005604790 pensive_brattain[288728]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 05:21:58 np0005604790 pensive_brattain[288728]:            "tags": {
Feb  2 05:21:58 np0005604790 pensive_brattain[288728]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 05:21:58 np0005604790 pensive_brattain[288728]:                "ceph.block_uuid": "lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a",
Feb  2 05:21:58 np0005604790 pensive_brattain[288728]:                "ceph.cephx_lockbox_secret": "",
Feb  2 05:21:58 np0005604790 pensive_brattain[288728]:                "ceph.cluster_fsid": "d241d473-9fcb-5f74-b163-f1ca4454e7f1",
Feb  2 05:21:58 np0005604790 pensive_brattain[288728]:                "ceph.cluster_name": "ceph",
Feb  2 05:21:58 np0005604790 pensive_brattain[288728]:                "ceph.crush_device_class": "",
Feb  2 05:21:58 np0005604790 pensive_brattain[288728]:                "ceph.encrypted": "0",
Feb  2 05:21:58 np0005604790 pensive_brattain[288728]:                "ceph.osd_fsid": "fabfc705-a3af-416c-81a4-3fd4d777fb5f",
Feb  2 05:21:58 np0005604790 pensive_brattain[288728]:                "ceph.osd_id": "1",
Feb  2 05:21:58 np0005604790 pensive_brattain[288728]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 05:21:58 np0005604790 pensive_brattain[288728]:                "ceph.type": "block",
Feb  2 05:21:58 np0005604790 pensive_brattain[288728]:                "ceph.vdo": "0",
Feb  2 05:21:58 np0005604790 pensive_brattain[288728]:                "ceph.with_tpm": "0"
Feb  2 05:21:58 np0005604790 pensive_brattain[288728]:            },
Feb  2 05:21:58 np0005604790 pensive_brattain[288728]:            "type": "block",
Feb  2 05:21:58 np0005604790 pensive_brattain[288728]:            "vg_name": "ceph_vg0"
Feb  2 05:21:58 np0005604790 pensive_brattain[288728]:        }
Feb  2 05:21:58 np0005604790 pensive_brattain[288728]:    ]
Feb  2 05:21:58 np0005604790 pensive_brattain[288728]: }
Feb  2 05:21:58 np0005604790 systemd[1]: libpod-37313a286c71a3263e52ea95c3b37c2758455b0c32c58d24c285e8af8b81409a.scope: Deactivated successfully.
Feb  2 05:21:58 np0005604790 podman[288737]: 2026-02-02 10:21:58.767272151 +0000 UTC m=+0.026524835 container died 37313a286c71a3263e52ea95c3b37c2758455b0c32c58d24c285e8af8b81409a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_brattain, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb  2 05:21:58 np0005604790 systemd[1]: var-lib-containers-storage-overlay-7601e9b7b56d56e5036d9786547dfc1181c180883ec12ece32fa7139149e6995-merged.mount: Deactivated successfully.
Feb  2 05:21:58 np0005604790 podman[288737]: 2026-02-02 10:21:58.810351993 +0000 UTC m=+0.069604677 container remove 37313a286c71a3263e52ea95c3b37c2758455b0c32c58d24c285e8af8b81409a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 05:21:58 np0005604790 systemd[1]: libpod-conmon-37313a286c71a3263e52ea95c3b37c2758455b0c32c58d24c285e8af8b81409a.scope: Deactivated successfully.
Feb  2 05:21:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:21:58.896Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:21:59 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:21:59 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:21:59 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:21:59.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:21:59 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:21:59 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:21:59 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:21:59.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:21:59 np0005604790 podman[288841]: 2026-02-02 10:21:59.450080393 +0000 UTC m=+0.050788268 container create bed738d593b039eb56e3469ee521cfd09a6ae01d599ec10e0f682e0f7c2d77c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_murdock, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 05:21:59 np0005604790 systemd[1]: Started libpod-conmon-bed738d593b039eb56e3469ee521cfd09a6ae01d599ec10e0f682e0f7c2d77c6.scope.
Feb  2 05:21:59 np0005604790 podman[288841]: 2026-02-02 10:21:59.42774315 +0000 UTC m=+0.028451015 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:21:59 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:21:59 np0005604790 podman[288841]: 2026-02-02 10:21:59.546678055 +0000 UTC m=+0.147385930 container init bed738d593b039eb56e3469ee521cfd09a6ae01d599ec10e0f682e0f7c2d77c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_murdock, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Feb  2 05:21:59 np0005604790 podman[288841]: 2026-02-02 10:21:59.553450965 +0000 UTC m=+0.154158810 container start bed738d593b039eb56e3469ee521cfd09a6ae01d599ec10e0f682e0f7c2d77c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_murdock, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 05:21:59 np0005604790 optimistic_murdock[288858]: 167 167
Feb  2 05:21:59 np0005604790 systemd[1]: libpod-bed738d593b039eb56e3469ee521cfd09a6ae01d599ec10e0f682e0f7c2d77c6.scope: Deactivated successfully.
Feb  2 05:21:59 np0005604790 podman[288841]: 2026-02-02 10:21:59.561286673 +0000 UTC m=+0.161994538 container attach bed738d593b039eb56e3469ee521cfd09a6ae01d599ec10e0f682e0f7c2d77c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_murdock, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 05:21:59 np0005604790 podman[288841]: 2026-02-02 10:21:59.561979091 +0000 UTC m=+0.162686946 container died bed738d593b039eb56e3469ee521cfd09a6ae01d599ec10e0f682e0f7c2d77c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_murdock, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Feb  2 05:21:59 np0005604790 systemd[1]: var-lib-containers-storage-overlay-6255dcc19604f8da11e7ac02ba4a2173576246b9fb093a9c0131f2eb7ae86bd0-merged.mount: Deactivated successfully.
Feb  2 05:21:59 np0005604790 podman[288841]: 2026-02-02 10:21:59.601216752 +0000 UTC m=+0.201924587 container remove bed738d593b039eb56e3469ee521cfd09a6ae01d599ec10e0f682e0f7c2d77c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_murdock, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb  2 05:21:59 np0005604790 systemd[1]: libpod-conmon-bed738d593b039eb56e3469ee521cfd09a6ae01d599ec10e0f682e0f7c2d77c6.scope: Deactivated successfully.
Feb  2 05:21:59 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1281: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 777 B/s rd, 0 op/s
Feb  2 05:21:59 np0005604790 podman[288884]: 2026-02-02 10:21:59.759352947 +0000 UTC m=+0.043726561 container create 40b15d59287dcbc1318666a03c1b510fba87b27f59e5f62e539516645098de71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_keldysh, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb  2 05:21:59 np0005604790 systemd[1]: Started libpod-conmon-40b15d59287dcbc1318666a03c1b510fba87b27f59e5f62e539516645098de71.scope.
Feb  2 05:21:59 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:21:59 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f14eb467cf58f25cf2ad0b4e9cf838acedd98b6868eb919e607c3fa7938385be/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 05:21:59 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f14eb467cf58f25cf2ad0b4e9cf838acedd98b6868eb919e607c3fa7938385be/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 05:21:59 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f14eb467cf58f25cf2ad0b4e9cf838acedd98b6868eb919e607c3fa7938385be/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 05:21:59 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f14eb467cf58f25cf2ad0b4e9cf838acedd98b6868eb919e607c3fa7938385be/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 05:21:59 np0005604790 podman[288884]: 2026-02-02 10:21:59.74026251 +0000 UTC m=+0.024636154 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:21:59 np0005604790 podman[288884]: 2026-02-02 10:21:59.854665675 +0000 UTC m=+0.139039329 container init 40b15d59287dcbc1318666a03c1b510fba87b27f59e5f62e539516645098de71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Feb  2 05:21:59 np0005604790 podman[288884]: 2026-02-02 10:21:59.862656407 +0000 UTC m=+0.147030021 container start 40b15d59287dcbc1318666a03c1b510fba87b27f59e5f62e539516645098de71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 05:21:59 np0005604790 podman[288884]: 2026-02-02 10:21:59.869724754 +0000 UTC m=+0.154098378 container attach 40b15d59287dcbc1318666a03c1b510fba87b27f59e5f62e539516645098de71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_keldysh, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 05:22:00 np0005604790 lvm[288976]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 05:22:00 np0005604790 lvm[288976]: VG ceph_vg0 finished
Feb  2 05:22:00 np0005604790 blissful_keldysh[288901]: {}
Feb  2 05:22:00 np0005604790 systemd[1]: libpod-40b15d59287dcbc1318666a03c1b510fba87b27f59e5f62e539516645098de71.scope: Deactivated successfully.
Feb  2 05:22:00 np0005604790 systemd[1]: libpod-40b15d59287dcbc1318666a03c1b510fba87b27f59e5f62e539516645098de71.scope: Consumed 1.177s CPU time.
Feb  2 05:22:00 np0005604790 podman[288980]: 2026-02-02 10:22:00.694433061 +0000 UTC m=+0.031827916 container died 40b15d59287dcbc1318666a03c1b510fba87b27f59e5f62e539516645098de71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 05:22:00 np0005604790 systemd[1]: var-lib-containers-storage-overlay-f14eb467cf58f25cf2ad0b4e9cf838acedd98b6868eb919e607c3fa7938385be-merged.mount: Deactivated successfully.
Feb  2 05:22:00 np0005604790 podman[288980]: 2026-02-02 10:22:00.760097062 +0000 UTC m=+0.097491917 container remove 40b15d59287dcbc1318666a03c1b510fba87b27f59e5f62e539516645098de71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_keldysh, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 05:22:00 np0005604790 systemd[1]: libpod-conmon-40b15d59287dcbc1318666a03c1b510fba87b27f59e5f62e539516645098de71.scope: Deactivated successfully.
Feb  2 05:22:00 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 05:22:00 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:22:00 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 05:22:00 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:22:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:22:01 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:22:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:22:01 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:22:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:22:01 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:22:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:22:01 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:22:01 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:22:01 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:22:01 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:22:01.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:22:01 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:22:01 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:22:01 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:22:01.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:22:01 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1282: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 518 B/s rd, 0 op/s
Feb  2 05:22:01 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:22:01 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:22:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 05:22:02 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 05:22:02 np0005604790 nova_compute[252672]: 2026-02-02 10:22:02.380 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:22:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:22:03 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:22:03 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:22:03 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:22:03.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:22:03 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:22:03 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:22:03 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:22:03.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:22:03 np0005604790 nova_compute[252672]: 2026-02-02 10:22:03.542 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:22:03 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1283: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 777 B/s rd, 0 op/s
Feb  2 05:22:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:22:04] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Feb  2 05:22:04 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:22:04] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Feb  2 05:22:05 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:22:05 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:22:05 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:22:05.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:22:05 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:22:05 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:22:05 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:22:05.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:22:05 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1284: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 518 B/s rd, 0 op/s
Feb  2 05:22:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:22:05 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:22:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:22:06 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:22:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:22:06 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:22:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:22:06 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:22:07 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:22:07 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:22:07 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:22:07.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:22:07 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:22:07.252Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:22:07 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:22:07 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:22:07 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:22:07.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:22:07 np0005604790 nova_compute[252672]: 2026-02-02 10:22:07.416 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:22:07 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1285: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:22:07 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:22:08 np0005604790 nova_compute[252672]: 2026-02-02 10:22:08.547 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:22:08 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:22:08.898Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:22:09 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:22:09 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:22:09 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:22:09.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:22:09 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:22:09 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:22:09 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:22:09.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:22:09 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1286: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:22:11 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:22:10 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:22:11 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:22:10 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:22:11 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:22:10 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:22:11 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:22:11 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:22:11 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:22:11 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:22:11 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:22:11.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:22:11 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:22:11 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:22:11 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:22:11.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:22:11 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1287: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:22:12 np0005604790 nova_compute[252672]: 2026-02-02 10:22:12.420 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:22:12 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:22:13 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:22:13 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:22:13 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:22:13.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:22:13 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:22:13 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:22:13 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:22:13.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:22:13 np0005604790 nova_compute[252672]: 2026-02-02 10:22:13.549 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:22:13 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1288: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:22:14 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:22:14] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Feb  2 05:22:14 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:22:14] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Feb  2 05:22:15 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:22:15 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:22:15 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:22:15.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:22:15 np0005604790 nova_compute[252672]: 2026-02-02 10:22:15.281 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:22:15 np0005604790 nova_compute[252672]: 2026-02-02 10:22:15.309 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 05:22:15 np0005604790 nova_compute[252672]: 2026-02-02 10:22:15.310 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 05:22:15 np0005604790 nova_compute[252672]: 2026-02-02 10:22:15.310 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 05:22:15 np0005604790 nova_compute[252672]: 2026-02-02 10:22:15.310 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb  2 05:22:15 np0005604790 nova_compute[252672]: 2026-02-02 10:22:15.311 252676 DEBUG oslo_concurrency.processutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb  2 05:22:15 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:22:15 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:22:15 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:22:15.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:22:15 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1289: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:22:15 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 05:22:15 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3106572320' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb  2 05:22:15 np0005604790 nova_compute[252672]: 2026-02-02 10:22:15.752 252676 DEBUG oslo_concurrency.processutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 05:22:15 np0005604790 nova_compute[252672]: 2026-02-02 10:22:15.927 252676 WARNING nova.virt.libvirt.driver [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb  2 05:22:15 np0005604790 nova_compute[252672]: 2026-02-02 10:22:15.929 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4494MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb  2 05:22:15 np0005604790 nova_compute[252672]: 2026-02-02 10:22:15.929 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 05:22:15 np0005604790 nova_compute[252672]: 2026-02-02 10:22:15.930 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 05:22:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:22:15 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:22:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:22:15 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:22:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:22:15 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:22:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:22:16 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:22:16 np0005604790 nova_compute[252672]: 2026-02-02 10:22:16.099 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb  2 05:22:16 np0005604790 nova_compute[252672]: 2026-02-02 10:22:16.100 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb  2 05:22:16 np0005604790 nova_compute[252672]: 2026-02-02 10:22:16.116 252676 DEBUG oslo_concurrency.processutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb  2 05:22:16 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 05:22:16 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2890586090' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb  2 05:22:16 np0005604790 nova_compute[252672]: 2026-02-02 10:22:16.583 252676 DEBUG oslo_concurrency.processutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 05:22:16 np0005604790 nova_compute[252672]: 2026-02-02 10:22:16.589 252676 DEBUG nova.compute.provider_tree [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Inventory has not changed in ProviderTree for provider: 9e3db6fc-2145-4f13-bc7c-f7ae57d4e004 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb  2 05:22:16 np0005604790 nova_compute[252672]: 2026-02-02 10:22:16.614 252676 DEBUG nova.scheduler.client.report [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Inventory has not changed for provider 9e3db6fc-2145-4f13-bc7c-f7ae57d4e004 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb  2 05:22:16 np0005604790 nova_compute[252672]: 2026-02-02 10:22:16.616 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb  2 05:22:16 np0005604790 nova_compute[252672]: 2026-02-02 10:22:16.617 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.687s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 05:22:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] Optimize plan auto_2026-02-02_10:22:17
Feb  2 05:22:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 05:22:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] do_upmap
Feb  2 05:22:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] pools ['cephfs.cephfs.data', '.rgw.root', 'volumes', '.mgr', 'vms', 'default.rgw.control', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.meta', '.nfs', 'backups', 'images']
Feb  2 05:22:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 05:22:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 05:22:17 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 05:22:17 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:22:17 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:22:17 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:22:17.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:22:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:22:17.253Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:22:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:22:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:22:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:22:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:22:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:22:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:22:17 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:22:17 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:22:17 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:22:17.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:22:17 np0005604790 nova_compute[252672]: 2026-02-02 10:22:17.459 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:22:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 05:22:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 05:22:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 05:22:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 05:22:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 05:22:17 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1290: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:22:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:22:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 05:22:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:22:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 05:22:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:22:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb  2 05:22:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:22:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:22:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:22:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:22:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:22:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Feb  2 05:22:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:22:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Feb  2 05:22:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:22:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:22:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:22:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 05:22:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:22:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Feb  2 05:22:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:22:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:22:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:22:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 05:22:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:22:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb  2 05:22:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 05:22:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 05:22:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 05:22:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 05:22:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 05:22:18 np0005604790 podman[289105]: 2026-02-02 10:22:18.422480823 +0000 UTC m=+0.135732511 container health_status e39122d9482da8df802204ab6c35fb7c982874580968140e6f06bdfc8eefae36 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Feb  2 05:22:18 np0005604790 nova_compute[252672]: 2026-02-02 10:22:18.552 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:22:18 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:22:18.899Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:22:19 np0005604790 ceph-mgr[74785]: [devicehealth INFO root] Check health
Feb  2 05:22:19 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:22:19 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:22:19 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:22:19.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:22:19 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:22:19 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:22:19 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:22:19.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:22:19 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1291: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:22:21 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:22:20 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:22:21 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:22:20 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:22:21 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:22:20 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:22:21 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:22:21 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:22:21 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:22:21 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:22:21 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:22:21.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:22:21 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:22:21 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:22:21 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:22:21.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:22:21 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1292: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:22:22 np0005604790 nova_compute[252672]: 2026-02-02 10:22:22.508 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:22:22 np0005604790 nova_compute[252672]: 2026-02-02 10:22:22.617 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:22:22 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:22:23 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:22:23 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:22:23 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:22:23.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:22:23 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:22:23 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:22:23 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:22:23.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:22:23 np0005604790 nova_compute[252672]: 2026-02-02 10:22:23.554 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:22:23 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1293: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:22:24 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:22:24] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Feb  2 05:22:24 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:22:24] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Feb  2 05:22:25 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:22:25 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:22:25 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:22:25.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:22:25 np0005604790 nova_compute[252672]: 2026-02-02 10:22:25.282 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:22:25 np0005604790 nova_compute[252672]: 2026-02-02 10:22:25.282 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb  2 05:22:25 np0005604790 nova_compute[252672]: 2026-02-02 10:22:25.282 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb  2 05:22:25 np0005604790 nova_compute[252672]: 2026-02-02 10:22:25.296 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb  2 05:22:25 np0005604790 nova_compute[252672]: 2026-02-02 10:22:25.297 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:22:25 np0005604790 nova_compute[252672]: 2026-02-02 10:22:25.297 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb  2 05:22:25 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:22:25 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:22:25 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:22:25.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:22:25 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1294: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:22:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:22:25 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:22:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:22:25 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:22:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:22:25 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:22:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:22:26 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:22:26 np0005604790 nova_compute[252672]: 2026-02-02 10:22:26.283 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:22:26 np0005604790 nova_compute[252672]: 2026-02-02 10:22:26.284 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:22:26 np0005604790 podman[289167]: 2026-02-02 10:22:26.360380521 +0000 UTC m=+0.072137234 container health_status 29bea7eb4976451e3dfdb1e9a3aba30b10e64cbce131bf8d09548b4a224f8ee8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb  2 05:22:27 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:22:27 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:22:27 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:22:27.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:22:27 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:22:27.254Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:22:27 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:22:27 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:22:27 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:22:27.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:22:27 np0005604790 nova_compute[252672]: 2026-02-02 10:22:27.552 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:22:27 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1295: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:22:27 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:22:28 np0005604790 nova_compute[252672]: 2026-02-02 10:22:28.279 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 05:22:28 np0005604790 nova_compute[252672]: 2026-02-02 10:22:28.281 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 05:22:28 np0005604790 nova_compute[252672]: 2026-02-02 10:22:28.593 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:22:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:22:28.900Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:22:29 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:22:29 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:22:29 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:22:29.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:22:29 np0005604790 nova_compute[252672]: 2026-02-02 10:22:29.281 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 05:22:29 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:22:29 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:22:29 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:22:29.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:22:29 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1296: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:22:31 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:22:30 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:22:31 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:22:30 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:22:31 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:22:30 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:22:31 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:22:31 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:22:31 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:22:31 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:22:31 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:22:31.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:22:31 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:22:31 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:22:31 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:22:31.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:22:31 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1297: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:22:32 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 05:22:32 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 05:22:32 np0005604790 nova_compute[252672]: 2026-02-02 10:22:32.557 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:22:32 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:22:33 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:22:33 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:22:33 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:22:33.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:22:33 np0005604790 nova_compute[252672]: 2026-02-02 10:22:33.278 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 05:22:33 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:22:33 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:22:33 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:22:33.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:22:33 np0005604790 nova_compute[252672]: 2026-02-02 10:22:33.595 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:22:33 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1298: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:22:34 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:22:34] "GET /metrics HTTP/1.1" 200 48448 "" "Prometheus/2.51.0"
Feb  2 05:22:34 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:22:34] "GET /metrics HTTP/1.1" 200 48448 "" "Prometheus/2.51.0"
Feb  2 05:22:35 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:22:35 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:22:35 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:22:35.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:22:35 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:22:35 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:22:35 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:22:35.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:22:35 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1299: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:22:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:22:35 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:22:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:22:35 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:22:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:22:35 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:22:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:22:36 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:22:37 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:22:37 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:22:37 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:22:37.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:22:37 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:22:37.255Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:22:37 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:22:37 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:22:37 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:22:37.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:22:37 np0005604790 nova_compute[252672]: 2026-02-02 10:22:37.595 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:22:37 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1300: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:22:37 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:22:38 np0005604790 nova_compute[252672]: 2026-02-02 10:22:38.597 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:22:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:22:38.901Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:22:39 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:22:39 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:22:39 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:22:39.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:22:39 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:22:39 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:22:39 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:22:39.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:22:39 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[104673]: logger=cleanup t=2026-02-02T10:22:39.480840575Z level=info msg="Completed cleanup jobs" duration=12.581824ms
Feb  2 05:22:39 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[104673]: logger=grafana.update.checker t=2026-02-02T10:22:39.595426813Z level=info msg="Update check succeeded" duration=53.270202ms
Feb  2 05:22:39 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-grafana-compute-0[104673]: logger=plugins.update.checker t=2026-02-02T10:22:39.598359081Z level=info msg="Update check succeeded" duration=53.326674ms
Feb  2 05:22:39 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1301: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:22:41 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:22:40 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:22:41 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:22:40 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:22:41 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:22:40 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:22:41 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:22:41 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:22:41 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:22:41 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:22:41 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:22:41.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:22:41 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:22:41 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:22:41 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:22:41.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:22:41 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1302: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:22:42 np0005604790 nova_compute[252672]: 2026-02-02 10:22:42.635 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:22:42 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:22:43 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:22:43 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:22:43 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:22:43.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:22:43 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:22:43 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:22:43 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:22:43.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:22:43 np0005604790 nova_compute[252672]: 2026-02-02 10:22:43.599 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:22:43 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1303: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:22:44 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:22:44] "GET /metrics HTTP/1.1" 200 48448 "" "Prometheus/2.51.0"
Feb  2 05:22:44 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:22:44] "GET /metrics HTTP/1.1" 200 48448 "" "Prometheus/2.51.0"
Feb  2 05:22:45 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:22:45 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:22:45 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:22:45.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:22:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:22:45.397 165364 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 05:22:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:22:45.398 165364 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 05:22:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:22:45.398 165364 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 05:22:45 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:22:45 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:22:45 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:22:45.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:22:45 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1304: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:22:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:22:45 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:22:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:22:45 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:22:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:22:45 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:22:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:22:46 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:22:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 05:22:47 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 05:22:47 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:22:47.256Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:22:47 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:22:47 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:22:47 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:22:47.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:22:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:22:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:22:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:22:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:22:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:22:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:22:47 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:22:47 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:22:47 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:22:47.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:22:47 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1305: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:22:47 np0005604790 nova_compute[252672]: 2026-02-02 10:22:47.638 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:22:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:22:48 np0005604790 nova_compute[252672]: 2026-02-02 10:22:48.633 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:22:48 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:22:48.902Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb  2 05:22:48 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:22:48.902Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb  2 05:22:49 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:22:49 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:22:49 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:22:49.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:22:49 np0005604790 podman[289234]: 2026-02-02 10:22:49.385218627 +0000 UTC m=+0.093921761 container health_status e39122d9482da8df802204ab6c35fb7c982874580968140e6f06bdfc8eefae36 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Feb  2 05:22:49 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:22:49 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:22:49 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:22:49.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:22:49 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1306: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:22:51 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:22:50 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:22:51 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:22:50 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:22:51 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:22:50 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:22:51 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:22:51 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:22:51 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:22:51 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:22:51 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:22:51.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:22:51 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:22:51 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:22:51 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:22:51.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:22:51 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1307: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:22:52 np0005604790 nova_compute[252672]: 2026-02-02 10:22:52.640 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:22:52 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:22:53 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:22:53 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.002000053s ======
Feb  2 05:22:53 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:22:53.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Feb  2 05:22:53 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:22:53 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:22:53 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:22:53.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:22:53 np0005604790 nova_compute[252672]: 2026-02-02 10:22:53.636 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:22:53 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1308: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:22:54 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:22:54] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Feb  2 05:22:54 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:22:54] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Feb  2 05:22:55 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:22:55 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:22:55 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:22:55.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:22:55 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:22:55 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:22:55 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:22:55.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:22:55 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1309: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:22:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:22:55 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:22:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:22:55 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:22:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:22:55 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:22:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:22:56 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:22:57 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:22:57.257Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:22:57 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:22:57 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:22:57 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:22:57.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:22:57 np0005604790 podman[289269]: 2026-02-02 10:22:57.378766455 +0000 UTC m=+0.090533311 container health_status 29bea7eb4976451e3dfdb1e9a3aba30b10e64cbce131bf8d09548b4a224f8ee8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb  2 05:22:57 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:22:57 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:22:57 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:22:57.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:22:57 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1310: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:22:57 np0005604790 nova_compute[252672]: 2026-02-02 10:22:57.645 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:22:57 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:22:58 np0005604790 nova_compute[252672]: 2026-02-02 10:22:58.638 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:22:58 np0005604790 ceph-mon[74489]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #78. Immutable memtables: 0.
Feb  2 05:22:58 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:22:58.824285) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb  2 05:22:58 np0005604790 ceph-mon[74489]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 78
Feb  2 05:22:58 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770027778824336, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 1422, "num_deletes": 251, "total_data_size": 2577530, "memory_usage": 2611488, "flush_reason": "Manual Compaction"}
Feb  2 05:22:58 np0005604790 ceph-mon[74489]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #79: started
Feb  2 05:22:58 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770027778849951, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 79, "file_size": 2515296, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 35544, "largest_seqno": 36965, "table_properties": {"data_size": 2508757, "index_size": 3674, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 13899, "raw_average_key_size": 19, "raw_value_size": 2495610, "raw_average_value_size": 3590, "num_data_blocks": 160, "num_entries": 695, "num_filter_entries": 695, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770027643, "oldest_key_time": 1770027643, "file_creation_time": 1770027778, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "07840aea-639a-4cd3-a598-1774a042b57b", "db_session_id": "W2PO4QU95YGVZQBG6TZ2", "orig_file_number": 79, "seqno_to_time_mapping": "N/A"}}
Feb  2 05:22:58 np0005604790 ceph-mon[74489]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 25755 microseconds, and 7192 cpu microseconds.
Feb  2 05:22:58 np0005604790 ceph-mon[74489]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 05:22:58 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:22:58.850033) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #79: 2515296 bytes OK
Feb  2 05:22:58 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:22:58.850064) [db/memtable_list.cc:519] [default] Level-0 commit table #79 started
Feb  2 05:22:58 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:22:58.853138) [db/memtable_list.cc:722] [default] Level-0 commit table #79: memtable #1 done
Feb  2 05:22:58 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:22:58.853165) EVENT_LOG_v1 {"time_micros": 1770027778853157, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb  2 05:22:58 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:22:58.853190) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb  2 05:22:58 np0005604790 ceph-mon[74489]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 2571465, prev total WAL file size 2571465, number of live WAL files 2.
Feb  2 05:22:58 np0005604790 ceph-mon[74489]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000075.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 05:22:58 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:22:58.854275) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033303132' seq:72057594037927935, type:22 .. '7061786F730033323634' seq:0, type:0; will stop at (end)
Feb  2 05:22:58 np0005604790 ceph-mon[74489]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb  2 05:22:58 np0005604790 ceph-mon[74489]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [79(2456KB)], [77(11MB)]
Feb  2 05:22:58 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770027778854356, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [79], "files_L6": [77], "score": -1, "input_data_size": 14429860, "oldest_snapshot_seqno": -1}
Feb  2 05:22:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:22:58.903Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:22:58 np0005604790 ceph-mon[74489]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #80: 6720 keys, 12200888 bytes, temperature: kUnknown
Feb  2 05:22:58 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770027778957807, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 80, "file_size": 12200888, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12159648, "index_size": 23336, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16837, "raw_key_size": 176561, "raw_average_key_size": 26, "raw_value_size": 12042089, "raw_average_value_size": 1791, "num_data_blocks": 910, "num_entries": 6720, "num_filter_entries": 6720, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770025063, "oldest_key_time": 0, "file_creation_time": 1770027778, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "07840aea-639a-4cd3-a598-1774a042b57b", "db_session_id": "W2PO4QU95YGVZQBG6TZ2", "orig_file_number": 80, "seqno_to_time_mapping": "N/A"}}
Feb  2 05:22:58 np0005604790 ceph-mon[74489]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 05:22:58 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:22:58.958097) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 12200888 bytes
Feb  2 05:22:58 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:22:58.959487) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 139.4 rd, 117.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.4, 11.4 +0.0 blob) out(11.6 +0.0 blob), read-write-amplify(10.6) write-amplify(4.9) OK, records in: 7236, records dropped: 516 output_compression: NoCompression
Feb  2 05:22:58 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:22:58.959542) EVENT_LOG_v1 {"time_micros": 1770027778959499, "job": 44, "event": "compaction_finished", "compaction_time_micros": 103539, "compaction_time_cpu_micros": 22939, "output_level": 6, "num_output_files": 1, "total_output_size": 12200888, "num_input_records": 7236, "num_output_records": 6720, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb  2 05:22:58 np0005604790 ceph-mon[74489]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000079.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 05:22:58 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770027778959970, "job": 44, "event": "table_file_deletion", "file_number": 79}
Feb  2 05:22:58 np0005604790 ceph-mon[74489]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 05:22:58 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770027778961659, "job": 44, "event": "table_file_deletion", "file_number": 77}
Feb  2 05:22:58 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:22:58.854138) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 05:22:58 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:22:58.961692) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 05:22:58 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:22:58.961697) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 05:22:58 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:22:58.961699) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 05:22:58 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:22:58.961701) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 05:22:58 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:22:58.961704) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 05:22:59 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:22:59 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:22:59 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:22:59.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:22:59 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:22:59 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:22:59 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:22:59.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:22:59 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1311: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:23:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:23:00 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:23:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:23:00 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:23:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:23:00 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:23:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:23:01 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:23:01 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:23:01 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:23:01 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:23:01.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:23:01 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:23:01 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:23:01 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:23:01.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:23:01 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1312: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:23:01 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 05:23:01 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 05:23:01 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 05:23:01 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb  2 05:23:01 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1313: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 605 B/s rd, 0 op/s
Feb  2 05:23:01 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 05:23:01 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:23:01 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Feb  2 05:23:01 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb  2 05:23:02 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:23:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 05:23:02 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb  2 05:23:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 05:23:02 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb  2 05:23:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 05:23:02 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 05:23:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 05:23:02 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 05:23:02 np0005604790 podman[289471]: 2026-02-02 10:23:02.558365194 +0000 UTC m=+0.060150516 container create db996a50a4815e4365ddb8e8046e31798c9440a5188b10092d619ceadf509ac1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_fermi, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 05:23:02 np0005604790 podman[289471]: 2026-02-02 10:23:02.526463128 +0000 UTC m=+0.028248480 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:23:02 np0005604790 nova_compute[252672]: 2026-02-02 10:23:02.648 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:23:02 np0005604790 systemd[1]: Started libpod-conmon-db996a50a4815e4365ddb8e8046e31798c9440a5188b10092d619ceadf509ac1.scope.
Feb  2 05:23:02 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:23:02 np0005604790 podman[289471]: 2026-02-02 10:23:02.703514722 +0000 UTC m=+0.205300084 container init db996a50a4815e4365ddb8e8046e31798c9440a5188b10092d619ceadf509ac1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_fermi, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 05:23:02 np0005604790 podman[289471]: 2026-02-02 10:23:02.712893511 +0000 UTC m=+0.214678863 container start db996a50a4815e4365ddb8e8046e31798c9440a5188b10092d619ceadf509ac1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 05:23:02 np0005604790 priceless_fermi[289488]: 167 167
Feb  2 05:23:02 np0005604790 systemd[1]: libpod-db996a50a4815e4365ddb8e8046e31798c9440a5188b10092d619ceadf509ac1.scope: Deactivated successfully.
Feb  2 05:23:02 np0005604790 podman[289471]: 2026-02-02 10:23:02.722049604 +0000 UTC m=+0.223834956 container attach db996a50a4815e4365ddb8e8046e31798c9440a5188b10092d619ceadf509ac1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_fermi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 05:23:02 np0005604790 podman[289471]: 2026-02-02 10:23:02.722599198 +0000 UTC m=+0.224384580 container died db996a50a4815e4365ddb8e8046e31798c9440a5188b10092d619ceadf509ac1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_fermi, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 05:23:02 np0005604790 systemd[1]: var-lib-containers-storage-overlay-641228a930b34755f78a62115627f044d6a35d707c7964d88f7fde55873fb3e5-merged.mount: Deactivated successfully.
Feb  2 05:23:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:23:02 np0005604790 podman[289471]: 2026-02-02 10:23:02.818893951 +0000 UTC m=+0.320679273 container remove db996a50a4815e4365ddb8e8046e31798c9440a5188b10092d619ceadf509ac1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_fermi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Feb  2 05:23:02 np0005604790 systemd[1]: libpod-conmon-db996a50a4815e4365ddb8e8046e31798c9440a5188b10092d619ceadf509ac1.scope: Deactivated successfully.
Feb  2 05:23:02 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:23:02 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:23:02 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb  2 05:23:02 np0005604790 podman[289514]: 2026-02-02 10:23:02.988095678 +0000 UTC m=+0.065047666 container create af93b7d8d0d6631590ad135d8394c16f886d147e78671cedb200d58bc174aa0a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_clarke, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Feb  2 05:23:03 np0005604790 systemd[1]: Started libpod-conmon-af93b7d8d0d6631590ad135d8394c16f886d147e78671cedb200d58bc174aa0a.scope.
Feb  2 05:23:03 np0005604790 podman[289514]: 2026-02-02 10:23:02.94668277 +0000 UTC m=+0.023634848 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:23:03 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:23:03 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5540d44ad474953c8e448ca5f26462e24bec951c55580a9a1d8be1d1daf4b4e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 05:23:03 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5540d44ad474953c8e448ca5f26462e24bec951c55580a9a1d8be1d1daf4b4e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 05:23:03 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5540d44ad474953c8e448ca5f26462e24bec951c55580a9a1d8be1d1daf4b4e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 05:23:03 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5540d44ad474953c8e448ca5f26462e24bec951c55580a9a1d8be1d1daf4b4e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 05:23:03 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5540d44ad474953c8e448ca5f26462e24bec951c55580a9a1d8be1d1daf4b4e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 05:23:03 np0005604790 podman[289514]: 2026-02-02 10:23:03.091461088 +0000 UTC m=+0.168413126 container init af93b7d8d0d6631590ad135d8394c16f886d147e78671cedb200d58bc174aa0a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_clarke, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb  2 05:23:03 np0005604790 podman[289514]: 2026-02-02 10:23:03.099397768 +0000 UTC m=+0.176349756 container start af93b7d8d0d6631590ad135d8394c16f886d147e78671cedb200d58bc174aa0a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_clarke, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 05:23:03 np0005604790 podman[289514]: 2026-02-02 10:23:03.1122955 +0000 UTC m=+0.189247538 container attach af93b7d8d0d6631590ad135d8394c16f886d147e78671cedb200d58bc174aa0a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_clarke, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb  2 05:23:03 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:23:03 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:23:03 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:23:03.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:23:03 np0005604790 lucid_clarke[289530]: --> passed data devices: 0 physical, 1 LVM
Feb  2 05:23:03 np0005604790 lucid_clarke[289530]: --> All data devices are unavailable
Feb  2 05:23:03 np0005604790 systemd[1]: libpod-af93b7d8d0d6631590ad135d8394c16f886d147e78671cedb200d58bc174aa0a.scope: Deactivated successfully.
Feb  2 05:23:03 np0005604790 podman[289514]: 2026-02-02 10:23:03.434168725 +0000 UTC m=+0.511120713 container died af93b7d8d0d6631590ad135d8394c16f886d147e78671cedb200d58bc174aa0a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_clarke, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid)
Feb  2 05:23:03 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:23:03 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:23:03 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:23:03.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:23:03 np0005604790 systemd[1]: var-lib-containers-storage-overlay-a5540d44ad474953c8e448ca5f26462e24bec951c55580a9a1d8be1d1daf4b4e-merged.mount: Deactivated successfully.
Feb  2 05:23:03 np0005604790 podman[289514]: 2026-02-02 10:23:03.538145302 +0000 UTC m=+0.615097300 container remove af93b7d8d0d6631590ad135d8394c16f886d147e78671cedb200d58bc174aa0a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_clarke, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 05:23:03 np0005604790 systemd[1]: libpod-conmon-af93b7d8d0d6631590ad135d8394c16f886d147e78671cedb200d58bc174aa0a.scope: Deactivated successfully.
Feb  2 05:23:03 np0005604790 nova_compute[252672]: 2026-02-02 10:23:03.640 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:23:03 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1314: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 605 B/s rd, 0 op/s
Feb  2 05:23:04 np0005604790 podman[289675]: 2026-02-02 10:23:04.104393525 +0000 UTC m=+0.045984460 container create eced6cc8ce2eeb5630b5351d63f169bbe54b4e0f0cdc39666b0a277cc7632090 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_poincare, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 05:23:04 np0005604790 systemd[1]: Started libpod-conmon-eced6cc8ce2eeb5630b5351d63f169bbe54b4e0f0cdc39666b0a277cc7632090.scope.
Feb  2 05:23:04 np0005604790 podman[289675]: 2026-02-02 10:23:04.080405319 +0000 UTC m=+0.021996244 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:23:04 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:23:04 np0005604790 podman[289675]: 2026-02-02 10:23:04.368745074 +0000 UTC m=+0.310336019 container init eced6cc8ce2eeb5630b5351d63f169bbe54b4e0f0cdc39666b0a277cc7632090 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_poincare, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Feb  2 05:23:04 np0005604790 podman[289675]: 2026-02-02 10:23:04.375623856 +0000 UTC m=+0.317214771 container start eced6cc8ce2eeb5630b5351d63f169bbe54b4e0f0cdc39666b0a277cc7632090 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_poincare, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Feb  2 05:23:04 np0005604790 lucid_poincare[289691]: 167 167
Feb  2 05:23:04 np0005604790 systemd[1]: libpod-eced6cc8ce2eeb5630b5351d63f169bbe54b4e0f0cdc39666b0a277cc7632090.scope: Deactivated successfully.
Feb  2 05:23:04 np0005604790 podman[289675]: 2026-02-02 10:23:04.457338573 +0000 UTC m=+0.398929508 container attach eced6cc8ce2eeb5630b5351d63f169bbe54b4e0f0cdc39666b0a277cc7632090 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_poincare, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 05:23:04 np0005604790 podman[289675]: 2026-02-02 10:23:04.45838394 +0000 UTC m=+0.399974865 container died eced6cc8ce2eeb5630b5351d63f169bbe54b4e0f0cdc39666b0a277cc7632090 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_poincare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Feb  2 05:23:04 np0005604790 systemd[1]: var-lib-containers-storage-overlay-0fe1d2861b6ce3e9e82b01e89a73727830ed20664bb41ee1812759565ced18ec-merged.mount: Deactivated successfully.
Feb  2 05:23:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:23:04] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Feb  2 05:23:04 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:23:04] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Feb  2 05:23:04 np0005604790 podman[289675]: 2026-02-02 10:23:04.92678967 +0000 UTC m=+0.868380585 container remove eced6cc8ce2eeb5630b5351d63f169bbe54b4e0f0cdc39666b0a277cc7632090 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_poincare, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Feb  2 05:23:04 np0005604790 systemd[1]: libpod-conmon-eced6cc8ce2eeb5630b5351d63f169bbe54b4e0f0cdc39666b0a277cc7632090.scope: Deactivated successfully.
Feb  2 05:23:05 np0005604790 podman[289717]: 2026-02-02 10:23:05.08783594 +0000 UTC m=+0.060655499 container create 299c642f45df03e3d1bc117f0724f40858149b4368a78b55308dca741cab2d8d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_sutherland, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Feb  2 05:23:05 np0005604790 systemd[1]: Started libpod-conmon-299c642f45df03e3d1bc117f0724f40858149b4368a78b55308dca741cab2d8d.scope.
Feb  2 05:23:05 np0005604790 podman[289717]: 2026-02-02 10:23:05.053196082 +0000 UTC m=+0.026015651 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:23:05 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:23:05 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68d65b06004c2ea1adab63a03787f0c3c7dbc084ba978f516789f5de315ab2c5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 05:23:05 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68d65b06004c2ea1adab63a03787f0c3c7dbc084ba978f516789f5de315ab2c5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 05:23:05 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68d65b06004c2ea1adab63a03787f0c3c7dbc084ba978f516789f5de315ab2c5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 05:23:05 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68d65b06004c2ea1adab63a03787f0c3c7dbc084ba978f516789f5de315ab2c5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 05:23:05 np0005604790 podman[289717]: 2026-02-02 10:23:05.196968204 +0000 UTC m=+0.169787773 container init 299c642f45df03e3d1bc117f0724f40858149b4368a78b55308dca741cab2d8d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_sutherland, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 05:23:05 np0005604790 podman[289717]: 2026-02-02 10:23:05.203671091 +0000 UTC m=+0.176490640 container start 299c642f45df03e3d1bc117f0724f40858149b4368a78b55308dca741cab2d8d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_sutherland, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb  2 05:23:05 np0005604790 podman[289717]: 2026-02-02 10:23:05.21568941 +0000 UTC m=+0.188508969 container attach 299c642f45df03e3d1bc117f0724f40858149b4368a78b55308dca741cab2d8d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_sutherland, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 05:23:05 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:23:05 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:23:05 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:23:05.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:23:05 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:23:05 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:23:05 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:23:05.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:23:05 np0005604790 sleepy_sutherland[289734]: {
Feb  2 05:23:05 np0005604790 sleepy_sutherland[289734]:    "1": [
Feb  2 05:23:05 np0005604790 sleepy_sutherland[289734]:        {
Feb  2 05:23:05 np0005604790 sleepy_sutherland[289734]:            "devices": [
Feb  2 05:23:05 np0005604790 sleepy_sutherland[289734]:                "/dev/loop3"
Feb  2 05:23:05 np0005604790 sleepy_sutherland[289734]:            ],
Feb  2 05:23:05 np0005604790 sleepy_sutherland[289734]:            "lv_name": "ceph_lv0",
Feb  2 05:23:05 np0005604790 sleepy_sutherland[289734]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 05:23:05 np0005604790 sleepy_sutherland[289734]:            "lv_size": "21470642176",
Feb  2 05:23:05 np0005604790 sleepy_sutherland[289734]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d241d473-9fcb-5f74-b163-f1ca4454e7f1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fabfc705-a3af-416c-81a4-3fd4d777fb5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 05:23:05 np0005604790 sleepy_sutherland[289734]:            "lv_uuid": "lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a",
Feb  2 05:23:05 np0005604790 sleepy_sutherland[289734]:            "name": "ceph_lv0",
Feb  2 05:23:05 np0005604790 sleepy_sutherland[289734]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 05:23:05 np0005604790 sleepy_sutherland[289734]:            "tags": {
Feb  2 05:23:05 np0005604790 sleepy_sutherland[289734]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 05:23:05 np0005604790 sleepy_sutherland[289734]:                "ceph.block_uuid": "lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a",
Feb  2 05:23:05 np0005604790 sleepy_sutherland[289734]:                "ceph.cephx_lockbox_secret": "",
Feb  2 05:23:05 np0005604790 sleepy_sutherland[289734]:                "ceph.cluster_fsid": "d241d473-9fcb-5f74-b163-f1ca4454e7f1",
Feb  2 05:23:05 np0005604790 sleepy_sutherland[289734]:                "ceph.cluster_name": "ceph",
Feb  2 05:23:05 np0005604790 sleepy_sutherland[289734]:                "ceph.crush_device_class": "",
Feb  2 05:23:05 np0005604790 sleepy_sutherland[289734]:                "ceph.encrypted": "0",
Feb  2 05:23:05 np0005604790 sleepy_sutherland[289734]:                "ceph.osd_fsid": "fabfc705-a3af-416c-81a4-3fd4d777fb5f",
Feb  2 05:23:05 np0005604790 sleepy_sutherland[289734]:                "ceph.osd_id": "1",
Feb  2 05:23:05 np0005604790 sleepy_sutherland[289734]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 05:23:05 np0005604790 sleepy_sutherland[289734]:                "ceph.type": "block",
Feb  2 05:23:05 np0005604790 sleepy_sutherland[289734]:                "ceph.vdo": "0",
Feb  2 05:23:05 np0005604790 sleepy_sutherland[289734]:                "ceph.with_tpm": "0"
Feb  2 05:23:05 np0005604790 sleepy_sutherland[289734]:            },
Feb  2 05:23:05 np0005604790 sleepy_sutherland[289734]:            "type": "block",
Feb  2 05:23:05 np0005604790 sleepy_sutherland[289734]:            "vg_name": "ceph_vg0"
Feb  2 05:23:05 np0005604790 sleepy_sutherland[289734]:        }
Feb  2 05:23:05 np0005604790 sleepy_sutherland[289734]:    ]
Feb  2 05:23:05 np0005604790 sleepy_sutherland[289734]: }
Feb  2 05:23:05 np0005604790 systemd[1]: libpod-299c642f45df03e3d1bc117f0724f40858149b4368a78b55308dca741cab2d8d.scope: Deactivated successfully.
Feb  2 05:23:05 np0005604790 podman[289717]: 2026-02-02 10:23:05.510087735 +0000 UTC m=+0.482907324 container died 299c642f45df03e3d1bc117f0724f40858149b4368a78b55308dca741cab2d8d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_sutherland, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb  2 05:23:05 np0005604790 systemd[1]: var-lib-containers-storage-overlay-68d65b06004c2ea1adab63a03787f0c3c7dbc084ba978f516789f5de315ab2c5-merged.mount: Deactivated successfully.
Feb  2 05:23:05 np0005604790 podman[289717]: 2026-02-02 10:23:05.616791204 +0000 UTC m=+0.589610753 container remove 299c642f45df03e3d1bc117f0724f40858149b4368a78b55308dca741cab2d8d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_sutherland, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Feb  2 05:23:05 np0005604790 systemd[1]: libpod-conmon-299c642f45df03e3d1bc117f0724f40858149b4368a78b55308dca741cab2d8d.scope: Deactivated successfully.
Feb  2 05:23:05 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1315: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 605 B/s rd, 0 op/s
Feb  2 05:23:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:23:05 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:23:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:23:06 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:23:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:23:06 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:23:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:23:06 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:23:06 np0005604790 podman[289848]: 2026-02-02 10:23:06.178276671 +0000 UTC m=+0.042645282 container create 52b337204f1bc42efe4af524ee29ef677a056d0a74acddcb8750ff9b9ea73a0f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_poitras, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Feb  2 05:23:06 np0005604790 systemd[1]: Started libpod-conmon-52b337204f1bc42efe4af524ee29ef677a056d0a74acddcb8750ff9b9ea73a0f.scope.
Feb  2 05:23:06 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:23:06 np0005604790 podman[289848]: 2026-02-02 10:23:06.158324482 +0000 UTC m=+0.022693113 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:23:06 np0005604790 podman[289848]: 2026-02-02 10:23:06.26310971 +0000 UTC m=+0.127478401 container init 52b337204f1bc42efe4af524ee29ef677a056d0a74acddcb8750ff9b9ea73a0f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_poitras, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Feb  2 05:23:06 np0005604790 podman[289848]: 2026-02-02 10:23:06.271373389 +0000 UTC m=+0.135742040 container start 52b337204f1bc42efe4af524ee29ef677a056d0a74acddcb8750ff9b9ea73a0f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_poitras, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 05:23:06 np0005604790 sad_poitras[289863]: 167 167
Feb  2 05:23:06 np0005604790 systemd[1]: libpod-52b337204f1bc42efe4af524ee29ef677a056d0a74acddcb8750ff9b9ea73a0f.scope: Deactivated successfully.
Feb  2 05:23:06 np0005604790 conmon[289863]: conmon 52b337204f1bc42efe4a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-52b337204f1bc42efe4af524ee29ef677a056d0a74acddcb8750ff9b9ea73a0f.scope/container/memory.events
Feb  2 05:23:06 np0005604790 podman[289848]: 2026-02-02 10:23:06.282257258 +0000 UTC m=+0.146625949 container attach 52b337204f1bc42efe4af524ee29ef677a056d0a74acddcb8750ff9b9ea73a0f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Feb  2 05:23:06 np0005604790 podman[289848]: 2026-02-02 10:23:06.283336767 +0000 UTC m=+0.147705378 container died 52b337204f1bc42efe4af524ee29ef677a056d0a74acddcb8750ff9b9ea73a0f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_poitras, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb  2 05:23:06 np0005604790 systemd[1]: var-lib-containers-storage-overlay-5629aafb22d24cb54524fe3a07586ab0025c10337db01973198e3374fcd2215a-merged.mount: Deactivated successfully.
Feb  2 05:23:06 np0005604790 podman[289848]: 2026-02-02 10:23:06.375713106 +0000 UTC m=+0.240081727 container remove 52b337204f1bc42efe4af524ee29ef677a056d0a74acddcb8750ff9b9ea73a0f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_poitras, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 05:23:06 np0005604790 systemd[1]: libpod-conmon-52b337204f1bc42efe4af524ee29ef677a056d0a74acddcb8750ff9b9ea73a0f.scope: Deactivated successfully.
Feb  2 05:23:06 np0005604790 podman[289891]: 2026-02-02 10:23:06.518840131 +0000 UTC m=+0.053311385 container create d601d0fa7da27fb54b9275d8377211fd3933a4ced625911bae4f9ac50d905b49 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_tesla, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb  2 05:23:06 np0005604790 systemd[1]: Started libpod-conmon-d601d0fa7da27fb54b9275d8377211fd3933a4ced625911bae4f9ac50d905b49.scope.
Feb  2 05:23:06 np0005604790 podman[289891]: 2026-02-02 10:23:06.490267553 +0000 UTC m=+0.024738807 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:23:06 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:23:06 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/221737aadeeac79f8b3df5df05a9469e095efc149b64ebb2c398944a5e537086/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 05:23:06 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/221737aadeeac79f8b3df5df05a9469e095efc149b64ebb2c398944a5e537086/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 05:23:06 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/221737aadeeac79f8b3df5df05a9469e095efc149b64ebb2c398944a5e537086/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 05:23:06 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/221737aadeeac79f8b3df5df05a9469e095efc149b64ebb2c398944a5e537086/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 05:23:06 np0005604790 podman[289891]: 2026-02-02 10:23:06.65533553 +0000 UTC m=+0.189806794 container init d601d0fa7da27fb54b9275d8377211fd3933a4ced625911bae4f9ac50d905b49 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 05:23:06 np0005604790 podman[289891]: 2026-02-02 10:23:06.663606969 +0000 UTC m=+0.198078203 container start d601d0fa7da27fb54b9275d8377211fd3933a4ced625911bae4f9ac50d905b49 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 05:23:06 np0005604790 podman[289891]: 2026-02-02 10:23:06.670324557 +0000 UTC m=+0.204795821 container attach d601d0fa7da27fb54b9275d8377211fd3933a4ced625911bae4f9ac50d905b49 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 05:23:07 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:23:07.257Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:23:07 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:23:07 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:23:07 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:23:07.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:23:07 np0005604790 lvm[289982]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 05:23:07 np0005604790 lvm[289982]: VG ceph_vg0 finished
Feb  2 05:23:07 np0005604790 reverent_tesla[289907]: {}
Feb  2 05:23:07 np0005604790 systemd[1]: libpod-d601d0fa7da27fb54b9275d8377211fd3933a4ced625911bae4f9ac50d905b49.scope: Deactivated successfully.
Feb  2 05:23:07 np0005604790 systemd[1]: libpod-d601d0fa7da27fb54b9275d8377211fd3933a4ced625911bae4f9ac50d905b49.scope: Consumed 1.135s CPU time.
Feb  2 05:23:07 np0005604790 podman[289891]: 2026-02-02 10:23:07.410552204 +0000 UTC m=+0.945023468 container died d601d0fa7da27fb54b9275d8377211fd3933a4ced625911bae4f9ac50d905b49 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_tesla, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 05:23:07 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:23:07 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:23:07 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:23:07.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:23:07 np0005604790 systemd[1]: var-lib-containers-storage-overlay-221737aadeeac79f8b3df5df05a9469e095efc149b64ebb2c398944a5e537086-merged.mount: Deactivated successfully.
Feb  2 05:23:07 np0005604790 podman[289891]: 2026-02-02 10:23:07.605515973 +0000 UTC m=+1.139987207 container remove d601d0fa7da27fb54b9275d8377211fd3933a4ced625911bae4f9ac50d905b49 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 05:23:07 np0005604790 systemd[1]: libpod-conmon-d601d0fa7da27fb54b9275d8377211fd3933a4ced625911bae4f9ac50d905b49.scope: Deactivated successfully.
Feb  2 05:23:07 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 05:23:07 np0005604790 nova_compute[252672]: 2026-02-02 10:23:07.672 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:23:07 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:23:07 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 05:23:07 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:23:07 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:23:07 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1316: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 605 B/s rd, 0 op/s
Feb  2 05:23:08 np0005604790 nova_compute[252672]: 2026-02-02 10:23:08.642 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:23:08 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:23:08 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:23:08 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:23:08.904Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:23:09 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:23:09 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:23:09 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:23:09.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:23:09 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:23:09 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:23:09 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:23:09.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:23:09 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1317: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 605 B/s rd, 0 op/s
Feb  2 05:23:11 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:23:10 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:23:11 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:23:10 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:23:11 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:23:10 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:23:11 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:23:11 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:23:11 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:23:11 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:23:11 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:23:11.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:23:11 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:23:11 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:23:11 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:23:11.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:23:11 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1318: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 605 B/s rd, 0 op/s
Feb  2 05:23:12 np0005604790 nova_compute[252672]: 2026-02-02 10:23:12.738 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:23:12 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:23:13 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:23:13 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:23:13 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:23:13.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:23:13 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:23:13 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:23:13 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:23:13.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:23:13 np0005604790 nova_compute[252672]: 2026-02-02 10:23:13.644 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:23:13 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1319: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:23:14 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:23:14] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Feb  2 05:23:14 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:23:14] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Feb  2 05:23:15 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:23:15 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:23:15 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:23:15.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:23:15 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:23:15 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:23:15 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:23:15.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:23:15 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1320: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:23:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:23:15 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:23:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:23:15 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:23:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:23:15 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:23:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:23:16 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:23:16 np0005604790 nova_compute[252672]: 2026-02-02 10:23:16.282 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:23:16 np0005604790 nova_compute[252672]: 2026-02-02 10:23:16.316 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 05:23:16 np0005604790 nova_compute[252672]: 2026-02-02 10:23:16.317 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 05:23:16 np0005604790 nova_compute[252672]: 2026-02-02 10:23:16.317 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 05:23:16 np0005604790 nova_compute[252672]: 2026-02-02 10:23:16.317 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb  2 05:23:16 np0005604790 nova_compute[252672]: 2026-02-02 10:23:16.317 252676 DEBUG oslo_concurrency.processutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb  2 05:23:16 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 05:23:16 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1037213434' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb  2 05:23:16 np0005604790 nova_compute[252672]: 2026-02-02 10:23:16.812 252676 DEBUG oslo_concurrency.processutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 05:23:17 np0005604790 nova_compute[252672]: 2026-02-02 10:23:17.006 252676 WARNING nova.virt.libvirt.driver [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb  2 05:23:17 np0005604790 nova_compute[252672]: 2026-02-02 10:23:17.008 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4469MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb  2 05:23:17 np0005604790 nova_compute[252672]: 2026-02-02 10:23:17.008 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 05:23:17 np0005604790 nova_compute[252672]: 2026-02-02 10:23:17.008 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 05:23:17 np0005604790 nova_compute[252672]: 2026-02-02 10:23:17.167 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb  2 05:23:17 np0005604790 nova_compute[252672]: 2026-02-02 10:23:17.168 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb  2 05:23:17 np0005604790 nova_compute[252672]: 2026-02-02 10:23:17.187 252676 DEBUG nova.scheduler.client.report [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Refreshing inventories for resource provider 9e3db6fc-2145-4f13-bc7c-f7ae57d4e004 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb  2 05:23:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] Optimize plan auto_2026-02-02_10:23:17
Feb  2 05:23:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 05:23:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] do_upmap
Feb  2 05:23:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] pools ['volumes', 'default.rgw.meta', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.meta', 'images', 'cephfs.cephfs.data', '.nfs', '.rgw.root', 'backups', '.mgr', 'vms']
Feb  2 05:23:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 05:23:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 05:23:17 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 05:23:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:23:17.259Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:23:17 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:23:17 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:23:17 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:23:17.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:23:17 np0005604790 nova_compute[252672]: 2026-02-02 10:23:17.292 252676 DEBUG nova.scheduler.client.report [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Updating ProviderTree inventory for provider 9e3db6fc-2145-4f13-bc7c-f7ae57d4e004 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb  2 05:23:17 np0005604790 nova_compute[252672]: 2026-02-02 10:23:17.293 252676 DEBUG nova.compute.provider_tree [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Updating inventory in ProviderTree for provider 9e3db6fc-2145-4f13-bc7c-f7ae57d4e004 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb  2 05:23:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:23:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:23:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:23:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:23:17 np0005604790 nova_compute[252672]: 2026-02-02 10:23:17.313 252676 DEBUG nova.scheduler.client.report [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Refreshing aggregate associations for resource provider 9e3db6fc-2145-4f13-bc7c-f7ae57d4e004, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb  2 05:23:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:23:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:23:17 np0005604790 nova_compute[252672]: 2026-02-02 10:23:17.340 252676 DEBUG nova.scheduler.client.report [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Refreshing trait associations for resource provider 9e3db6fc-2145-4f13-bc7c-f7ae57d4e004, traits: COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_AMD_SVM,COMPUTE_RESCUE_BFV,HW_CPU_X86_SSSE3,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NODE,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_ABM,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE,COMPUTE_VOLUME_EXTEND,COMPUTE_ACCELERATORS,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SHA,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_F16C,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SVM,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_IDE,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_CLMUL _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb  2 05:23:17 np0005604790 nova_compute[252672]: 2026-02-02 10:23:17.367 252676 DEBUG oslo_concurrency.processutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb  2 05:23:17 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:23:17 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:23:17 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:23:17.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:23:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 05:23:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 05:23:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 05:23:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 05:23:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 05:23:17 np0005604790 nova_compute[252672]: 2026-02-02 10:23:17.742 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:23:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:23:17 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1321: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:23:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 05:23:17 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3544513142' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb  2 05:23:17 np0005604790 nova_compute[252672]: 2026-02-02 10:23:17.855 252676 DEBUG oslo_concurrency.processutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 05:23:17 np0005604790 nova_compute[252672]: 2026-02-02 10:23:17.863 252676 DEBUG nova.compute.provider_tree [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Inventory has not changed in ProviderTree for provider: 9e3db6fc-2145-4f13-bc7c-f7ae57d4e004 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb  2 05:23:17 np0005604790 nova_compute[252672]: 2026-02-02 10:23:17.897 252676 DEBUG nova.scheduler.client.report [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Inventory has not changed for provider 9e3db6fc-2145-4f13-bc7c-f7ae57d4e004 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb  2 05:23:17 np0005604790 nova_compute[252672]: 2026-02-02 10:23:17.899 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb  2 05:23:17 np0005604790 nova_compute[252672]: 2026-02-02 10:23:17.899 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.891s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 05:23:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 05:23:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:23:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 05:23:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:23:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb  2 05:23:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:23:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:23:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:23:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:23:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:23:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Feb  2 05:23:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:23:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Feb  2 05:23:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:23:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:23:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:23:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 05:23:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:23:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Feb  2 05:23:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:23:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:23:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:23:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 05:23:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:23:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb  2 05:23:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 05:23:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 05:23:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 05:23:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 05:23:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 05:23:18 np0005604790 nova_compute[252672]: 2026-02-02 10:23:18.646 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:23:18 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:23:18.905Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb  2 05:23:18 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:23:18.905Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb  2 05:23:18 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:23:18.906Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb  2 05:23:19 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:23:19 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:23:19 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:23:19.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:23:19 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:23:19 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:23:19 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:23:19.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:23:19 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1322: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:23:20 np0005604790 podman[290083]: 2026-02-02 10:23:20.393836458 +0000 UTC m=+0.106884215 container health_status e39122d9482da8df802204ab6c35fb7c982874580968140e6f06bdfc8eefae36 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb  2 05:23:21 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:23:20 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:23:21 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:23:20 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:23:21 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:23:20 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:23:21 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:23:21 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:23:21 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:23:21 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:23:21 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:23:21.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:23:21 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:23:21 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:23:21 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:23:21.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:23:21 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1323: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:23:22 np0005604790 nova_compute[252672]: 2026-02-02 10:23:22.746 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:23:22 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:23:23 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:23:23 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:23:23 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:23:23.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:23:23 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:23:23 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:23:23 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:23:23.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:23:23 np0005604790 nova_compute[252672]: 2026-02-02 10:23:23.649 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:23:23 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1324: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:23:24 np0005604790 nova_compute[252672]: 2026-02-02 10:23:24.282 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:23:24 np0005604790 nova_compute[252672]: 2026-02-02 10:23:24.282 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:23:24 np0005604790 nova_compute[252672]: 2026-02-02 10:23:24.283 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb  2 05:23:24 np0005604790 nova_compute[252672]: 2026-02-02 10:23:24.299 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Feb  2 05:23:24 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:23:24] "GET /metrics HTTP/1.1" 200 48451 "" "Prometheus/2.51.0"
Feb  2 05:23:24 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:23:24] "GET /metrics HTTP/1.1" 200 48451 "" "Prometheus/2.51.0"
Feb  2 05:23:25 np0005604790 nova_compute[252672]: 2026-02-02 10:23:25.282 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:23:25 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:23:25 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:23:25 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:23:25.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:23:25 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:23:25 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:23:25 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:23:25.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:23:25 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1325: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:23:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:23:25 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:23:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:23:25 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:23:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:23:25 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:23:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:23:26 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:23:26 np0005604790 nova_compute[252672]: 2026-02-02 10:23:26.299 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:23:26 np0005604790 nova_compute[252672]: 2026-02-02 10:23:26.300 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:23:27 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:23:27.260Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:23:27 np0005604790 nova_compute[252672]: 2026-02-02 10:23:27.281 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:23:27 np0005604790 nova_compute[252672]: 2026-02-02 10:23:27.281 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb  2 05:23:27 np0005604790 nova_compute[252672]: 2026-02-02 10:23:27.282 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb  2 05:23:27 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:23:27 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:23:27 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:23:27.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:23:27 np0005604790 nova_compute[252672]: 2026-02-02 10:23:27.301 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb  2 05:23:27 np0005604790 nova_compute[252672]: 2026-02-02 10:23:27.302 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:23:27 np0005604790 nova_compute[252672]: 2026-02-02 10:23:27.302 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb  2 05:23:27 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:23:27 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:23:27 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:23:27.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:23:27 np0005604790 nova_compute[252672]: 2026-02-02 10:23:27.749 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:23:27 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:23:27 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1326: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:23:28 np0005604790 nova_compute[252672]: 2026-02-02 10:23:28.281 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:23:28 np0005604790 nova_compute[252672]: 2026-02-02 10:23:28.282 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:23:28 np0005604790 podman[290143]: 2026-02-02 10:23:28.369358108 +0000 UTC m=+0.077148946 container health_status 29bea7eb4976451e3dfdb1e9a3aba30b10e64cbce131bf8d09548b4a224f8ee8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible)
Feb  2 05:23:28 np0005604790 nova_compute[252672]: 2026-02-02 10:23:28.651 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:23:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:23:28.906Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb  2 05:23:29 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:23:29 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:23:29 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:23:29.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:23:29 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:23:29 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:23:29 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:23:29.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:23:29 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1327: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:23:30 np0005604790 nova_compute[252672]: 2026-02-02 10:23:30.282 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:23:31 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:23:30 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:23:31 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:23:31 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:23:31 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:23:31 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:23:31 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:23:31 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:23:31 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:23:31 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:23:31 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:23:31.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:23:31 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:23:31 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:23:31 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:23:31.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:23:31 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1328: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:23:32 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 05:23:32 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 05:23:32 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:23:32 np0005604790 nova_compute[252672]: 2026-02-02 10:23:32.806 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:23:33 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:23:33 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:23:33 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:23:33.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:23:33 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:23:33 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:23:33 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:23:33.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:23:33 np0005604790 nova_compute[252672]: 2026-02-02 10:23:33.655 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:23:33 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1329: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:23:34 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:23:34] "GET /metrics HTTP/1.1" 200 48451 "" "Prometheus/2.51.0"
Feb  2 05:23:34 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:23:34] "GET /metrics HTTP/1.1" 200 48451 "" "Prometheus/2.51.0"
Feb  2 05:23:35 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:23:35 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:23:35 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:23:35.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:23:35 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:23:35 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:23:35 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:23:35.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:23:35 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1330: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:23:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:23:35 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:23:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:23:35 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:23:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:23:35 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:23:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:23:36 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:23:37 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:23:37.261Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:23:37 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:23:37 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:23:37 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:23:37.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:23:37 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:23:37 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:23:37 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:23:37.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:23:37 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:23:37 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1331: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:23:37 np0005604790 nova_compute[252672]: 2026-02-02 10:23:37.810 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:23:38 np0005604790 nova_compute[252672]: 2026-02-02 10:23:38.656 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:23:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:23:38.907Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:23:39 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:23:39 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:23:39 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:23:39.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:23:39 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:23:39 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:23:39 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:23:39.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:23:39 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1332: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:23:41 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:23:40 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:23:41 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:23:40 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:23:41 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:23:40 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:23:41 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:23:41 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:23:41 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:23:41 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:23:41 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:23:41.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:23:41 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:23:41 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:23:41 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:23:41.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:23:41 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1333: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:23:42 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:23:42 np0005604790 nova_compute[252672]: 2026-02-02 10:23:42.855 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:23:43 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:23:43 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:23:43 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:23:43.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:23:43 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:23:43 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:23:43 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:23:43.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:23:43 np0005604790 nova_compute[252672]: 2026-02-02 10:23:43.658 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:23:43 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1334: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:23:44 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:23:44] "GET /metrics HTTP/1.1" 200 48451 "" "Prometheus/2.51.0"
Feb  2 05:23:44 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:23:44] "GET /metrics HTTP/1.1" 200 48451 "" "Prometheus/2.51.0"
Feb  2 05:23:45 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:23:45 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:23:45 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:23:45.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:23:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:23:45.399 165364 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 05:23:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:23:45.399 165364 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 05:23:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:23:45.399 165364 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 05:23:45 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:23:45 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:23:45 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:23:45.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:23:45 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1335: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:23:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:23:45 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:23:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:23:45 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:23:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:23:45 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:23:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:23:46 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:23:46 np0005604790 nova_compute[252672]: 2026-02-02 10:23:46.281 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:23:46 np0005604790 nova_compute[252672]: 2026-02-02 10:23:46.282 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Feb  2 05:23:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 05:23:47 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 05:23:47 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:23:47.263Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:23:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:23:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:23:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:23:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:23:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:23:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:23:47 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:23:47 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:23:47 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:23:47.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:23:47 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:23:47 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:23:47 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:23:47.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:23:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:23:47 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1336: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:23:47 np0005604790 nova_compute[252672]: 2026-02-02 10:23:47.859 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:23:48 np0005604790 nova_compute[252672]: 2026-02-02 10:23:48.661 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:23:48 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:23:48.909Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:23:49 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:23:49 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:23:49 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:23:49.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:23:49 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:23:49 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:23:49 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:23:49.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:23:49 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1337: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:23:51 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:23:51 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:23:51 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:23:51 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:23:51 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:23:51 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:23:51 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:23:51 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:23:51 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:23:51 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:23:51 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:23:51.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:23:51 np0005604790 podman[290209]: 2026-02-02 10:23:51.365343062 +0000 UTC m=+0.084797969 container health_status e39122d9482da8df802204ab6c35fb7c982874580968140e6f06bdfc8eefae36 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Feb  2 05:23:51 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:23:51 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:23:51 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:23:51.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:23:51 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1338: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:23:52 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:23:52 np0005604790 nova_compute[252672]: 2026-02-02 10:23:52.861 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:23:53 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:23:53 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:23:53 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:23:53.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:23:53 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:23:53 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:23:53 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:23:53.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:23:53 np0005604790 nova_compute[252672]: 2026-02-02 10:23:53.663 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:23:53 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1339: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:23:54 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:23:54] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Feb  2 05:23:54 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:23:54] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Feb  2 05:23:55 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:23:55 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:23:55 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:23:55.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:23:55 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:23:55 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:23:55 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:23:55.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:23:55 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1340: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:23:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:23:55 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:23:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:23:55 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:23:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:23:55 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:23:56 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:23:56 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:23:57 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:23:57.264Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:23:57 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:23:57 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:23:57 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:23:57.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:23:57 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:23:57 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:23:57 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:23:57.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:23:57 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:23:57 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1341: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:23:57 np0005604790 nova_compute[252672]: 2026-02-02 10:23:57.865 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:23:58 np0005604790 nova_compute[252672]: 2026-02-02 10:23:58.665 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:23:58 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:23:58.910Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:23:59 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:23:59 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:23:59 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:23:59.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:23:59 np0005604790 podman[290244]: 2026-02-02 10:23:59.336956158 +0000 UTC m=+0.053233262 container health_status 29bea7eb4976451e3dfdb1e9a3aba30b10e64cbce131bf8d09548b4a224f8ee8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Feb  2 05:23:59 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:23:59 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:23:59 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:23:59.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:23:59 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1342: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:24:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:24:00 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:24:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:24:00 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:24:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:24:00 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:24:01 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:24:01 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:24:01 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:24:01 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:24:01 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:24:01.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:24:01 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:24:01 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:24:01 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:24:01.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:24:01 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1343: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:24:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 05:24:02 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 05:24:02 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:24:02 np0005604790 nova_compute[252672]: 2026-02-02 10:24:02.906 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:24:03 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:24:03 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:24:03 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:24:03.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:24:03 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:24:03 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:24:03 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:24:03.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:24:03 np0005604790 nova_compute[252672]: 2026-02-02 10:24:03.666 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:24:03 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1344: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:24:04 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:24:04] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Feb  2 05:24:04 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:24:04] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Feb  2 05:24:05 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:24:05 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:24:05 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:24:05.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:24:05 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:24:05 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:24:05 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:24:05.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:24:05 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1345: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:24:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:24:05 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:24:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:24:05 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:24:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:24:05 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:24:06 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:24:06 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:24:07 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:24:07.264Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb  2 05:24:07 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:24:07.265Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:24:07 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:24:07 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:24:07 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:24:07.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:24:07 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:24:07 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:24:07 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:24:07.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:24:07 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:24:07 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1346: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:24:07 np0005604790 nova_compute[252672]: 2026-02-02 10:24:07.908 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:24:08 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Feb  2 05:24:08 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Feb  2 05:24:08 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 05:24:08 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 05:24:08 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb  2 05:24:08 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb  2 05:24:08 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1347: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 851 B/s rd, 0 op/s
Feb  2 05:24:08 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb  2 05:24:08 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:24:08 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Feb  2 05:24:08 np0005604790 nova_compute[252672]: 2026-02-02 10:24:08.669 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:24:08 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:24:08 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb  2 05:24:08 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Feb  2 05:24:08 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb  2 05:24:08 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb  2 05:24:08 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 05:24:08 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 05:24:08 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:24:08.911Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:24:09 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Feb  2 05:24:09 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Feb  2 05:24:09 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:24:09 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:24:09 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Feb  2 05:24:09 np0005604790 podman[290472]: 2026-02-02 10:24:09.208722155 +0000 UTC m=+0.053654653 container create b8dd981577f922e993cf9f8106f575c4c9b2043ac9bdafd10cb08a2d6b3b454f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_keller, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 05:24:09 np0005604790 systemd[1]: Started libpod-conmon-b8dd981577f922e993cf9f8106f575c4c9b2043ac9bdafd10cb08a2d6b3b454f.scope.
Feb  2 05:24:09 np0005604790 podman[290472]: 2026-02-02 10:24:09.18097682 +0000 UTC m=+0.025909328 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:24:09 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:24:09 np0005604790 podman[290472]: 2026-02-02 10:24:09.295505026 +0000 UTC m=+0.140437534 container init b8dd981577f922e993cf9f8106f575c4c9b2043ac9bdafd10cb08a2d6b3b454f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_keller, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb  2 05:24:09 np0005604790 podman[290472]: 2026-02-02 10:24:09.300214001 +0000 UTC m=+0.145146489 container start b8dd981577f922e993cf9f8106f575c4c9b2043ac9bdafd10cb08a2d6b3b454f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_keller, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb  2 05:24:09 np0005604790 podman[290472]: 2026-02-02 10:24:09.303479758 +0000 UTC m=+0.148412246 container attach b8dd981577f922e993cf9f8106f575c4c9b2043ac9bdafd10cb08a2d6b3b454f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_keller, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Feb  2 05:24:09 np0005604790 hungry_keller[290488]: 167 167
Feb  2 05:24:09 np0005604790 systemd[1]: libpod-b8dd981577f922e993cf9f8106f575c4c9b2043ac9bdafd10cb08a2d6b3b454f.scope: Deactivated successfully.
Feb  2 05:24:09 np0005604790 podman[290472]: 2026-02-02 10:24:09.307082183 +0000 UTC m=+0.152014671 container died b8dd981577f922e993cf9f8106f575c4c9b2043ac9bdafd10cb08a2d6b3b454f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_keller, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 05:24:09 np0005604790 systemd[1]: var-lib-containers-storage-overlay-933d86af02b631e1fce82da4e64113030d5b98da68d907a569fb45b763ae28ad-merged.mount: Deactivated successfully.
Feb  2 05:24:09 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:24:09 np0005604790 podman[290472]: 2026-02-02 10:24:09.344798683 +0000 UTC m=+0.189731171 container remove b8dd981577f922e993cf9f8106f575c4c9b2043ac9bdafd10cb08a2d6b3b454f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_keller, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Feb  2 05:24:09 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:24:09 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:24:09.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:24:09 np0005604790 systemd[1]: libpod-conmon-b8dd981577f922e993cf9f8106f575c4c9b2043ac9bdafd10cb08a2d6b3b454f.scope: Deactivated successfully.
Feb  2 05:24:09 np0005604790 podman[290515]: 2026-02-02 10:24:09.495824408 +0000 UTC m=+0.048225980 container create 218e7394e0561f9c118e74a2795547994ed5201d5fafba068c4fdea7a715ec50 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 05:24:09 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:24:09 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:24:09 np0005604790 systemd[1]: Started libpod-conmon-218e7394e0561f9c118e74a2795547994ed5201d5fafba068c4fdea7a715ec50.scope.
Feb  2 05:24:09 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:24:09.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:24:09 np0005604790 podman[290515]: 2026-02-02 10:24:09.475620662 +0000 UTC m=+0.028022284 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:24:09 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:24:09 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f79fc7c2f9145bdf58630fe06a46af4412e3730fad2af9510dc4ae3ef8503535/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 05:24:09 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f79fc7c2f9145bdf58630fe06a46af4412e3730fad2af9510dc4ae3ef8503535/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 05:24:09 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f79fc7c2f9145bdf58630fe06a46af4412e3730fad2af9510dc4ae3ef8503535/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 05:24:09 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f79fc7c2f9145bdf58630fe06a46af4412e3730fad2af9510dc4ae3ef8503535/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 05:24:09 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f79fc7c2f9145bdf58630fe06a46af4412e3730fad2af9510dc4ae3ef8503535/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb  2 05:24:09 np0005604790 podman[290515]: 2026-02-02 10:24:09.60866637 +0000 UTC m=+0.161067962 container init 218e7394e0561f9c118e74a2795547994ed5201d5fafba068c4fdea7a715ec50 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Feb  2 05:24:09 np0005604790 podman[290515]: 2026-02-02 10:24:09.618580122 +0000 UTC m=+0.170981704 container start 218e7394e0561f9c118e74a2795547994ed5201d5fafba068c4fdea7a715ec50 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_shirley, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb  2 05:24:09 np0005604790 podman[290515]: 2026-02-02 10:24:09.627564101 +0000 UTC m=+0.179965713 container attach 218e7394e0561f9c118e74a2795547994ed5201d5fafba068c4fdea7a715ec50 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_shirley, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 05:24:09 np0005604790 heuristic_shirley[290531]: --> passed data devices: 0 physical, 1 LVM
Feb  2 05:24:09 np0005604790 heuristic_shirley[290531]: --> All data devices are unavailable
Feb  2 05:24:09 np0005604790 systemd[1]: libpod-218e7394e0561f9c118e74a2795547994ed5201d5fafba068c4fdea7a715ec50.scope: Deactivated successfully.
Feb  2 05:24:09 np0005604790 podman[290515]: 2026-02-02 10:24:09.936018828 +0000 UTC m=+0.488420440 container died 218e7394e0561f9c118e74a2795547994ed5201d5fafba068c4fdea7a715ec50 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_shirley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Feb  2 05:24:09 np0005604790 systemd[1]: var-lib-containers-storage-overlay-f79fc7c2f9145bdf58630fe06a46af4412e3730fad2af9510dc4ae3ef8503535-merged.mount: Deactivated successfully.
Feb  2 05:24:09 np0005604790 podman[290515]: 2026-02-02 10:24:09.985759637 +0000 UTC m=+0.538161209 container remove 218e7394e0561f9c118e74a2795547994ed5201d5fafba068c4fdea7a715ec50 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_shirley, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Feb  2 05:24:09 np0005604790 systemd[1]: libpod-conmon-218e7394e0561f9c118e74a2795547994ed5201d5fafba068c4fdea7a715ec50.scope: Deactivated successfully.
Feb  2 05:24:10 np0005604790 podman[290654]: 2026-02-02 10:24:10.598676187 +0000 UTC m=+0.045851656 container create 7520b832697c24b9082585503c102a61f03fa4c71197cdb32d73151fe2d189e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_galois, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Feb  2 05:24:10 np0005604790 systemd[1]: Started libpod-conmon-7520b832697c24b9082585503c102a61f03fa4c71197cdb32d73151fe2d189e1.scope.
Feb  2 05:24:10 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1348: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 567 B/s rd, 0 op/s
Feb  2 05:24:10 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:24:10 np0005604790 podman[290654]: 2026-02-02 10:24:10.578210745 +0000 UTC m=+0.025386214 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:24:10 np0005604790 podman[290654]: 2026-02-02 10:24:10.68512729 +0000 UTC m=+0.132302819 container init 7520b832697c24b9082585503c102a61f03fa4c71197cdb32d73151fe2d189e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_galois, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb  2 05:24:10 np0005604790 podman[290654]: 2026-02-02 10:24:10.693230724 +0000 UTC m=+0.140406163 container start 7520b832697c24b9082585503c102a61f03fa4c71197cdb32d73151fe2d189e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_galois, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb  2 05:24:10 np0005604790 podman[290654]: 2026-02-02 10:24:10.698049562 +0000 UTC m=+0.145225041 container attach 7520b832697c24b9082585503c102a61f03fa4c71197cdb32d73151fe2d189e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_galois, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Feb  2 05:24:10 np0005604790 tender_galois[290671]: 167 167
Feb  2 05:24:10 np0005604790 systemd[1]: libpod-7520b832697c24b9082585503c102a61f03fa4c71197cdb32d73151fe2d189e1.scope: Deactivated successfully.
Feb  2 05:24:10 np0005604790 conmon[290671]: conmon 7520b832697c24b90825 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7520b832697c24b9082585503c102a61f03fa4c71197cdb32d73151fe2d189e1.scope/container/memory.events
Feb  2 05:24:10 np0005604790 podman[290654]: 2026-02-02 10:24:10.702086989 +0000 UTC m=+0.149262468 container died 7520b832697c24b9082585503c102a61f03fa4c71197cdb32d73151fe2d189e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Feb  2 05:24:10 np0005604790 systemd[1]: var-lib-containers-storage-overlay-8e8ad969926d1039821fd1bc6fefbd8cf0745f4df2d1c27667d92158e6ff36b3-merged.mount: Deactivated successfully.
Feb  2 05:24:10 np0005604790 podman[290654]: 2026-02-02 10:24:10.748746856 +0000 UTC m=+0.195922295 container remove 7520b832697c24b9082585503c102a61f03fa4c71197cdb32d73151fe2d189e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_galois, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 05:24:10 np0005604790 systemd[1]: libpod-conmon-7520b832697c24b9082585503c102a61f03fa4c71197cdb32d73151fe2d189e1.scope: Deactivated successfully.
Feb  2 05:24:10 np0005604790 podman[290694]: 2026-02-02 10:24:10.922791291 +0000 UTC m=+0.045096157 container create 009f6b5509f53e901363d6ca6d2168d2cf02faedc683f9d5c9d28975dc5b3087 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_lehmann, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 05:24:10 np0005604790 systemd[1]: Started libpod-conmon-009f6b5509f53e901363d6ca6d2168d2cf02faedc683f9d5c9d28975dc5b3087.scope.
Feb  2 05:24:10 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:24:11 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce90f61c482ef90f484dce5bf014ee3dabf89afd57c93a202ce722da45f6eebc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 05:24:11 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce90f61c482ef90f484dce5bf014ee3dabf89afd57c93a202ce722da45f6eebc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 05:24:11 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:24:10 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:24:11 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce90f61c482ef90f484dce5bf014ee3dabf89afd57c93a202ce722da45f6eebc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 05:24:11 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:24:11 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:24:11 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:24:11 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:24:11 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce90f61c482ef90f484dce5bf014ee3dabf89afd57c93a202ce722da45f6eebc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 05:24:11 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:24:11 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:24:11 np0005604790 podman[290694]: 2026-02-02 10:24:10.905003969 +0000 UTC m=+0.027308935 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:24:11 np0005604790 podman[290694]: 2026-02-02 10:24:11.036857625 +0000 UTC m=+0.159162551 container init 009f6b5509f53e901363d6ca6d2168d2cf02faedc683f9d5c9d28975dc5b3087 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb  2 05:24:11 np0005604790 podman[290694]: 2026-02-02 10:24:11.047724634 +0000 UTC m=+0.170029520 container start 009f6b5509f53e901363d6ca6d2168d2cf02faedc683f9d5c9d28975dc5b3087 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 05:24:11 np0005604790 podman[290694]: 2026-02-02 10:24:11.05738628 +0000 UTC m=+0.179691166 container attach 009f6b5509f53e901363d6ca6d2168d2cf02faedc683f9d5c9d28975dc5b3087 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_lehmann, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb  2 05:24:11 np0005604790 flamboyant_lehmann[290710]: {
Feb  2 05:24:11 np0005604790 flamboyant_lehmann[290710]:    "1": [
Feb  2 05:24:11 np0005604790 flamboyant_lehmann[290710]:        {
Feb  2 05:24:11 np0005604790 flamboyant_lehmann[290710]:            "devices": [
Feb  2 05:24:11 np0005604790 flamboyant_lehmann[290710]:                "/dev/loop3"
Feb  2 05:24:11 np0005604790 flamboyant_lehmann[290710]:            ],
Feb  2 05:24:11 np0005604790 flamboyant_lehmann[290710]:            "lv_name": "ceph_lv0",
Feb  2 05:24:11 np0005604790 flamboyant_lehmann[290710]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 05:24:11 np0005604790 flamboyant_lehmann[290710]:            "lv_size": "21470642176",
Feb  2 05:24:11 np0005604790 flamboyant_lehmann[290710]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d241d473-9fcb-5f74-b163-f1ca4454e7f1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=fabfc705-a3af-416c-81a4-3fd4d777fb5f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb  2 05:24:11 np0005604790 flamboyant_lehmann[290710]:            "lv_uuid": "lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a",
Feb  2 05:24:11 np0005604790 flamboyant_lehmann[290710]:            "name": "ceph_lv0",
Feb  2 05:24:11 np0005604790 flamboyant_lehmann[290710]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb  2 05:24:11 np0005604790 flamboyant_lehmann[290710]:            "tags": {
Feb  2 05:24:11 np0005604790 flamboyant_lehmann[290710]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb  2 05:24:11 np0005604790 flamboyant_lehmann[290710]:                "ceph.block_uuid": "lj33Zf-B0ba-TfOd-9onW-Kq61-RI0X-y2nN5a",
Feb  2 05:24:11 np0005604790 flamboyant_lehmann[290710]:                "ceph.cephx_lockbox_secret": "",
Feb  2 05:24:11 np0005604790 flamboyant_lehmann[290710]:                "ceph.cluster_fsid": "d241d473-9fcb-5f74-b163-f1ca4454e7f1",
Feb  2 05:24:11 np0005604790 flamboyant_lehmann[290710]:                "ceph.cluster_name": "ceph",
Feb  2 05:24:11 np0005604790 flamboyant_lehmann[290710]:                "ceph.crush_device_class": "",
Feb  2 05:24:11 np0005604790 flamboyant_lehmann[290710]:                "ceph.encrypted": "0",
Feb  2 05:24:11 np0005604790 flamboyant_lehmann[290710]:                "ceph.osd_fsid": "fabfc705-a3af-416c-81a4-3fd4d777fb5f",
Feb  2 05:24:11 np0005604790 flamboyant_lehmann[290710]:                "ceph.osd_id": "1",
Feb  2 05:24:11 np0005604790 flamboyant_lehmann[290710]:                "ceph.osdspec_affinity": "default_drive_group",
Feb  2 05:24:11 np0005604790 flamboyant_lehmann[290710]:                "ceph.type": "block",
Feb  2 05:24:11 np0005604790 flamboyant_lehmann[290710]:                "ceph.vdo": "0",
Feb  2 05:24:11 np0005604790 flamboyant_lehmann[290710]:                "ceph.with_tpm": "0"
Feb  2 05:24:11 np0005604790 flamboyant_lehmann[290710]:            },
Feb  2 05:24:11 np0005604790 flamboyant_lehmann[290710]:            "type": "block",
Feb  2 05:24:11 np0005604790 flamboyant_lehmann[290710]:            "vg_name": "ceph_vg0"
Feb  2 05:24:11 np0005604790 flamboyant_lehmann[290710]:        }
Feb  2 05:24:11 np0005604790 flamboyant_lehmann[290710]:    ]
Feb  2 05:24:11 np0005604790 flamboyant_lehmann[290710]: }
Feb  2 05:24:11 np0005604790 systemd[1]: libpod-009f6b5509f53e901363d6ca6d2168d2cf02faedc683f9d5c9d28975dc5b3087.scope: Deactivated successfully.
Feb  2 05:24:11 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:24:11 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:24:11 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:24:11.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:24:11 np0005604790 podman[290719]: 2026-02-02 10:24:11.366853305 +0000 UTC m=+0.036680254 container died 009f6b5509f53e901363d6ca6d2168d2cf02faedc683f9d5c9d28975dc5b3087 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_lehmann, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Feb  2 05:24:11 np0005604790 systemd[1]: var-lib-containers-storage-overlay-ce90f61c482ef90f484dce5bf014ee3dabf89afd57c93a202ce722da45f6eebc-merged.mount: Deactivated successfully.
Feb  2 05:24:11 np0005604790 podman[290719]: 2026-02-02 10:24:11.475121046 +0000 UTC m=+0.144947905 container remove 009f6b5509f53e901363d6ca6d2168d2cf02faedc683f9d5c9d28975dc5b3087 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_lehmann, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb  2 05:24:11 np0005604790 systemd[1]: libpod-conmon-009f6b5509f53e901363d6ca6d2168d2cf02faedc683f9d5c9d28975dc5b3087.scope: Deactivated successfully.
Feb  2 05:24:11 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:24:11 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:24:11 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:24:11.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:24:12 np0005604790 podman[290828]: 2026-02-02 10:24:12.03611602 +0000 UTC m=+0.041575313 container create 848dc93cda9557f239cd4c71f3a2a0105c0319e4ba7e14dc1ba3d399ae689142 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_lamport, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 05:24:12 np0005604790 systemd[1]: Started libpod-conmon-848dc93cda9557f239cd4c71f3a2a0105c0319e4ba7e14dc1ba3d399ae689142.scope.
Feb  2 05:24:12 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:24:12 np0005604790 podman[290828]: 2026-02-02 10:24:12.099384087 +0000 UTC m=+0.104843430 container init 848dc93cda9557f239cd4c71f3a2a0105c0319e4ba7e14dc1ba3d399ae689142 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_lamport, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb  2 05:24:12 np0005604790 podman[290828]: 2026-02-02 10:24:12.103335492 +0000 UTC m=+0.108794795 container start 848dc93cda9557f239cd4c71f3a2a0105c0319e4ba7e14dc1ba3d399ae689142 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_lamport, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 05:24:12 np0005604790 podman[290828]: 2026-02-02 10:24:12.107196514 +0000 UTC m=+0.112655827 container attach 848dc93cda9557f239cd4c71f3a2a0105c0319e4ba7e14dc1ba3d399ae689142 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_lamport, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 05:24:12 np0005604790 exciting_lamport[290844]: 167 167
Feb  2 05:24:12 np0005604790 systemd[1]: libpod-848dc93cda9557f239cd4c71f3a2a0105c0319e4ba7e14dc1ba3d399ae689142.scope: Deactivated successfully.
Feb  2 05:24:12 np0005604790 podman[290828]: 2026-02-02 10:24:12.109631629 +0000 UTC m=+0.115090922 container died 848dc93cda9557f239cd4c71f3a2a0105c0319e4ba7e14dc1ba3d399ae689142 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_lamport, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Feb  2 05:24:12 np0005604790 podman[290828]: 2026-02-02 10:24:12.023548237 +0000 UTC m=+0.029007550 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:24:12 np0005604790 systemd[1]: var-lib-containers-storage-overlay-b1eb89273723cdec86a9d87b9b7050dd819facae16ff3b42c1c56ff6b40e8fe4-merged.mount: Deactivated successfully.
Feb  2 05:24:12 np0005604790 podman[290828]: 2026-02-02 10:24:12.156656676 +0000 UTC m=+0.162116009 container remove 848dc93cda9557f239cd4c71f3a2a0105c0319e4ba7e14dc1ba3d399ae689142 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_lamport, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb  2 05:24:12 np0005604790 systemd[1]: libpod-conmon-848dc93cda9557f239cd4c71f3a2a0105c0319e4ba7e14dc1ba3d399ae689142.scope: Deactivated successfully.
Feb  2 05:24:12 np0005604790 podman[290871]: 2026-02-02 10:24:12.347769783 +0000 UTC m=+0.087599604 container create 2a81d223795487067ed66f2d6b051fab157fee0d4360f4f93536cc39825dbbeb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_lewin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb  2 05:24:12 np0005604790 podman[290871]: 2026-02-02 10:24:12.293321099 +0000 UTC m=+0.033150970 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Feb  2 05:24:12 np0005604790 systemd[1]: Started libpod-conmon-2a81d223795487067ed66f2d6b051fab157fee0d4360f4f93536cc39825dbbeb.scope.
Feb  2 05:24:12 np0005604790 systemd[1]: Started libcrun container.
Feb  2 05:24:12 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96fc210fd1a7d459dcdfa35fe58c22b8659262809c7e28b8d8a865e13479f766/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb  2 05:24:12 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96fc210fd1a7d459dcdfa35fe58c22b8659262809c7e28b8d8a865e13479f766/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb  2 05:24:12 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96fc210fd1a7d459dcdfa35fe58c22b8659262809c7e28b8d8a865e13479f766/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb  2 05:24:12 np0005604790 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96fc210fd1a7d459dcdfa35fe58c22b8659262809c7e28b8d8a865e13479f766/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb  2 05:24:12 np0005604790 podman[290871]: 2026-02-02 10:24:12.532966493 +0000 UTC m=+0.272796394 container init 2a81d223795487067ed66f2d6b051fab157fee0d4360f4f93536cc39825dbbeb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_lewin, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Feb  2 05:24:12 np0005604790 podman[290871]: 2026-02-02 10:24:12.540712819 +0000 UTC m=+0.280542680 container start 2a81d223795487067ed66f2d6b051fab157fee0d4360f4f93536cc39825dbbeb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_lewin, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb  2 05:24:12 np0005604790 podman[290871]: 2026-02-02 10:24:12.587990692 +0000 UTC m=+0.327820523 container attach 2a81d223795487067ed66f2d6b051fab157fee0d4360f4f93536cc39825dbbeb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_lewin, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Feb  2 05:24:12 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1349: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 567 B/s rd, 0 op/s
Feb  2 05:24:12 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:24:12 np0005604790 nova_compute[252672]: 2026-02-02 10:24:12.960 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:24:13 np0005604790 lvm[290961]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 05:24:13 np0005604790 lvm[290961]: VG ceph_vg0 finished
Feb  2 05:24:13 np0005604790 peaceful_lewin[290887]: {}
Feb  2 05:24:13 np0005604790 systemd[1]: libpod-2a81d223795487067ed66f2d6b051fab157fee0d4360f4f93536cc39825dbbeb.scope: Deactivated successfully.
Feb  2 05:24:13 np0005604790 podman[290871]: 2026-02-02 10:24:13.337372701 +0000 UTC m=+1.077202552 container died 2a81d223795487067ed66f2d6b051fab157fee0d4360f4f93536cc39825dbbeb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb  2 05:24:13 np0005604790 systemd[1]: libpod-2a81d223795487067ed66f2d6b051fab157fee0d4360f4f93536cc39825dbbeb.scope: Consumed 1.138s CPU time.
Feb  2 05:24:13 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:24:13 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:24:13 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:24:13.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:24:13 np0005604790 systemd[1]: var-lib-containers-storage-overlay-96fc210fd1a7d459dcdfa35fe58c22b8659262809c7e28b8d8a865e13479f766-merged.mount: Deactivated successfully.
Feb  2 05:24:13 np0005604790 podman[290871]: 2026-02-02 10:24:13.385844587 +0000 UTC m=+1.125674408 container remove 2a81d223795487067ed66f2d6b051fab157fee0d4360f4f93536cc39825dbbeb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_lewin, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb  2 05:24:13 np0005604790 systemd[1]: libpod-conmon-2a81d223795487067ed66f2d6b051fab157fee0d4360f4f93536cc39825dbbeb.scope: Deactivated successfully.
Feb  2 05:24:13 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb  2 05:24:13 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:24:13 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb  2 05:24:13 np0005604790 ceph-mon[74489]: log_channel(audit) log [INF] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:24:13 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:24:13 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:24:13 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:24:13.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:24:13 np0005604790 nova_compute[252672]: 2026-02-02 10:24:13.670 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:24:13 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:24:13 np0005604790 ceph-mon[74489]: from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' 
Feb  2 05:24:14 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1350: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 567 B/s rd, 0 op/s
Feb  2 05:24:14 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:24:14] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Feb  2 05:24:14 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:24:14] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Feb  2 05:24:15 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:24:15 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:24:15 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:24:15.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:24:15 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:24:15 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:24:15 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:24:15.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:24:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:24:15 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:24:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:24:15 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:24:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:24:15 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:24:16 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:24:16 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:24:16 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1351: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 567 B/s rd, 0 op/s
Feb  2 05:24:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] Optimize plan auto_2026-02-02_10:24:17
Feb  2 05:24:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb  2 05:24:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] do_upmap
Feb  2 05:24:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'vms', 'backups', '.nfs', 'default.rgw.meta', 'images', 'volumes', '.rgw.root', 'default.rgw.log', 'default.rgw.control']
Feb  2 05:24:17 np0005604790 ceph-mgr[74785]: [balancer INFO root] prepared 0/10 upmap changes
Feb  2 05:24:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 05:24:17 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 05:24:17 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:24:17.266Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:24:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:24:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:24:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:24:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:24:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:24:17 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:24:17 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:24:17 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:24:17 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:24:17.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:24:17 np0005604790 nova_compute[252672]: 2026-02-02 10:24:17.366 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb  2 05:24:17 np0005604790 nova_compute[252672]: 2026-02-02 10:24:17.388 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb  2 05:24:17 np0005604790 nova_compute[252672]: 2026-02-02 10:24:17.389 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb  2 05:24:17 np0005604790 nova_compute[252672]: 2026-02-02 10:24:17.389 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb  2 05:24:17 np0005604790 nova_compute[252672]: 2026-02-02 10:24:17.389 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb  2 05:24:17 np0005604790 nova_compute[252672]: 2026-02-02 10:24:17.390 252676 DEBUG oslo_concurrency.processutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb  2 05:24:17 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:24:17 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:24:17 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:24:17.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:24:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb  2 05:24:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 05:24:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 05:24:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 05:24:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 05:24:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:24:17 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 05:24:17 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/566599006' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb  2 05:24:17 np0005604790 nova_compute[252672]: 2026-02-02 10:24:17.873 252676 DEBUG oslo_concurrency.processutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb  2 05:24:17 np0005604790 nova_compute[252672]: 2026-02-02 10:24:17.964 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb  2 05:24:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] _maybe_adjust
Feb  2 05:24:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:24:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb  2 05:24:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:24:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb  2 05:24:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:24:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:24:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:24:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:24:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:24:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Feb  2 05:24:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:24:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Feb  2 05:24:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:24:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:24:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:24:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb  2 05:24:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:24:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Feb  2 05:24:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:24:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb  2 05:24:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:24:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb  2 05:24:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb  2 05:24:17 np0005604790 ceph-mgr[74785]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Feb  2 05:24:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb  2 05:24:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb  2 05:24:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb  2 05:24:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb  2 05:24:17 np0005604790 ceph-mgr[74785]: [rbd_support INFO root] load_schedules: images, start_after=
Feb  2 05:24:18 np0005604790 nova_compute[252672]: 2026-02-02 10:24:18.017 252676 WARNING nova.virt.libvirt.driver [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb  2 05:24:18 np0005604790 nova_compute[252672]: 2026-02-02 10:24:18.018 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4485MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb  2 05:24:18 np0005604790 nova_compute[252672]: 2026-02-02 10:24:18.018 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 05:24:18 np0005604790 nova_compute[252672]: 2026-02-02 10:24:18.019 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 05:24:18 np0005604790 nova_compute[252672]: 2026-02-02 10:24:18.071 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb  2 05:24:18 np0005604790 nova_compute[252672]: 2026-02-02 10:24:18.072 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb  2 05:24:18 np0005604790 nova_compute[252672]: 2026-02-02 10:24:18.090 252676 DEBUG oslo_concurrency.processutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb  2 05:24:18 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb  2 05:24:18 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3818587934' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Feb  2 05:24:18 np0005604790 nova_compute[252672]: 2026-02-02 10:24:18.515 252676 DEBUG oslo_concurrency.processutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb  2 05:24:18 np0005604790 nova_compute[252672]: 2026-02-02 10:24:18.522 252676 DEBUG nova.compute.provider_tree [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Inventory has not changed in ProviderTree for provider: 9e3db6fc-2145-4f13-bc7c-f7ae57d4e004 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb  2 05:24:18 np0005604790 nova_compute[252672]: 2026-02-02 10:24:18.540 252676 DEBUG nova.scheduler.client.report [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Inventory has not changed for provider 9e3db6fc-2145-4f13-bc7c-f7ae57d4e004 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb  2 05:24:18 np0005604790 nova_compute[252672]: 2026-02-02 10:24:18.544 252676 DEBUG nova.compute.resource_tracker [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb  2 05:24:18 np0005604790 nova_compute[252672]: 2026-02-02 10:24:18.545 252676 DEBUG oslo_concurrency.lockutils [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.526s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 05:24:18 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1352: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 851 B/s rd, 0 op/s
Feb  2 05:24:18 np0005604790 nova_compute[252672]: 2026-02-02 10:24:18.708 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:24:18 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:24:18.912Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:24:19 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:24:19 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:24:19 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:24:19.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:24:19 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:24:19 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:24:19 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:24:19.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:24:20 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1353: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:24:21 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:24:20 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:24:21 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:24:20 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:24:21 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:24:20 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:24:21 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:24:21 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:24:21 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:24:21 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:24:21 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:24:21.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:24:21 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:24:21 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:24:21 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:24:21.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:24:22 np0005604790 podman[291056]: 2026-02-02 10:24:22.418430512 +0000 UTC m=+0.122422946 container health_status e39122d9482da8df802204ab6c35fb7c982874580968140e6f06bdfc8eefae36 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Feb  2 05:24:22 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1354: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:24:22 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:24:22 np0005604790 nova_compute[252672]: 2026-02-02 10:24:22.967 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:24:23 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:24:23 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:24:23 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:24:23.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:24:23 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:24:23 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:24:23 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:24:23.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:24:23 np0005604790 nova_compute[252672]: 2026-02-02 10:24:23.710 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:24:24 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1355: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:24:24 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:24:24] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Feb  2 05:24:24 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:24:24] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Feb  2 05:24:25 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:24:25 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:24:25 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:24:25.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:24:25 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:24:25 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:24:25 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:24:25.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:24:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:24:25 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:24:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:24:25 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:24:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:24:25 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:24:26 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:24:26 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:24:26 np0005604790 nova_compute[252672]: 2026-02-02 10:24:26.463 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:24:26 np0005604790 nova_compute[252672]: 2026-02-02 10:24:26.464 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:24:26 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1356: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:24:27 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:24:27.268Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:24:27 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:24:27 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:24:27 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:24:27.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:24:27 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:24:27 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:24:27 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:24:27.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:24:27 np0005604790 systemd-logind[793]: New session 59 of user zuul.
Feb  2 05:24:27 np0005604790 systemd[1]: Started Session 59 of User zuul.
Feb  2 05:24:27 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:24:27 np0005604790 nova_compute[252672]: 2026-02-02 10:24:27.970 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:24:28 np0005604790 nova_compute[252672]: 2026-02-02 10:24:28.278 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:24:28 np0005604790 nova_compute[252672]: 2026-02-02 10:24:28.281 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:24:28 np0005604790 nova_compute[252672]: 2026-02-02 10:24:28.281 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb  2 05:24:28 np0005604790 nova_compute[252672]: 2026-02-02 10:24:28.282 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb  2 05:24:28 np0005604790 nova_compute[252672]: 2026-02-02 10:24:28.316 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb  2 05:24:28 np0005604790 nova_compute[252672]: 2026-02-02 10:24:28.317 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:24:28 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1357: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:24:28 np0005604790 nova_compute[252672]: 2026-02-02 10:24:28.711 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:24:28 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:24:28.913Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:24:29 np0005604790 nova_compute[252672]: 2026-02-02 10:24:29.281 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:24:29 np0005604790 nova_compute[252672]: 2026-02-02 10:24:29.281 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:24:29 np0005604790 nova_compute[252672]: 2026-02-02 10:24:29.282 252676 DEBUG nova.compute.manager [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb  2 05:24:29 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:24:29 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:24:29 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:24:29.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:24:29 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:24:29 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:24:29 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:24:29.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:24:29 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.28321 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:24:29 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.18723 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:24:30 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.28409 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:24:30 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.28327 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:24:30 np0005604790 nova_compute[252672]: 2026-02-02 10:24:30.281 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:24:30 np0005604790 podman[291304]: 2026-02-02 10:24:30.370757499 +0000 UTC m=+0.084323356 container health_status 29bea7eb4976451e3dfdb1e9a3aba30b10e64cbce131bf8d09548b4a224f8ee8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Feb  2 05:24:30 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.18735 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:24:30 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1358: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:24:30 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.28421 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:24:31 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:24:30 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:24:31 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:24:31 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:24:31 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:24:31 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:24:31 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:24:31 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:24:31 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status"} v 0)
Feb  2 05:24:31 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2120963382' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Feb  2 05:24:31 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:24:31 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:24:31 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:24:31.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:24:31 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:24:31 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:24:31 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:24:31.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:24:32 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 05:24:32 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 05:24:32 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1359: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:24:32 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:24:33 np0005604790 nova_compute[252672]: 2026-02-02 10:24:33.027 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:24:33 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:24:33 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:24:33 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:24:33.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:24:33 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:24:33 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:24:33 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:24:33.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:24:33 np0005604790 nova_compute[252672]: 2026-02-02 10:24:33.716 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:24:34 np0005604790 ovs-vsctl[291467]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Feb  2 05:24:34 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1360: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:24:34 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.28351 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:24:34 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:24:34] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Feb  2 05:24:34 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:24:34] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Feb  2 05:24:34 np0005604790 virtqemud[252362]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Feb  2 05:24:35 np0005604790 virtqemud[252362]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Feb  2 05:24:35 np0005604790 virtqemud[252362]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Feb  2 05:24:35 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.28363 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:24:35 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Feb  2 05:24:35 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Feb  2 05:24:35 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:24:35 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:24:35 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:24:35.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:24:35 np0005604790 ceph-mds[96761]: mds.cephfs.compute-0.clmmzw asok_command: cache status {prefix=cache status} (starting...)
Feb  2 05:24:35 np0005604790 ceph-mds[96761]: mds.cephfs.compute-0.clmmzw Can't run that command on an inactive MDS!
Feb  2 05:24:35 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:24:35 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:24:35 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:24:35.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:24:35 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.28375 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:24:35 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.28445 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:24:35 np0005604790 lvm[291802]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb  2 05:24:35 np0005604790 ceph-mds[96761]: mds.cephfs.compute-0.clmmzw asok_command: client ls {prefix=client ls} (starting...)
Feb  2 05:24:35 np0005604790 lvm[291802]: VG ceph_vg0 finished
Feb  2 05:24:35 np0005604790 ceph-mds[96761]: mds.cephfs.compute-0.clmmzw Can't run that command on an inactive MDS!
Feb  2 05:24:35 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Feb  2 05:24:35 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Feb  2 05:24:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:24:35 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:24:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:24:36 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:24:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:24:36 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:24:36 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:24:36 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:24:36 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.28469 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:24:36 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.28390 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:24:36 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.18783 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:24:36 np0005604790 ceph-mds[96761]: mds.cephfs.compute-0.clmmzw asok_command: damage ls {prefix=damage ls} (starting...)
Feb  2 05:24:36 np0005604790 ceph-mds[96761]: mds.cephfs.compute-0.clmmzw Can't run that command on an inactive MDS!
Feb  2 05:24:36 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.28481 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:24:36 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Feb  2 05:24:36 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2226803775' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Feb  2 05:24:36 np0005604790 ceph-mds[96761]: mds.cephfs.compute-0.clmmzw asok_command: dump loads {prefix=dump loads} (starting...)
Feb  2 05:24:36 np0005604790 ceph-mds[96761]: mds.cephfs.compute-0.clmmzw Can't run that command on an inactive MDS!
Feb  2 05:24:36 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1361: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:24:36 np0005604790 ceph-mds[96761]: mds.cephfs.compute-0.clmmzw asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Feb  2 05:24:36 np0005604790 ceph-mds[96761]: mds.cephfs.compute-0.clmmzw Can't run that command on an inactive MDS!
Feb  2 05:24:36 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.18804 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:24:36 np0005604790 ceph-mds[96761]: mds.cephfs.compute-0.clmmzw asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Feb  2 05:24:36 np0005604790 ceph-mds[96761]: mds.cephfs.compute-0.clmmzw Can't run that command on an inactive MDS!
Feb  2 05:24:36 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb  2 05:24:36 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1850324762' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Feb  2 05:24:36 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.28502 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:24:36 np0005604790 ceph-mds[96761]: mds.cephfs.compute-0.clmmzw asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Feb  2 05:24:36 np0005604790 ceph-mds[96761]: mds.cephfs.compute-0.clmmzw Can't run that command on an inactive MDS!
Feb  2 05:24:36 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.28438 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:24:37 np0005604790 ceph-mds[96761]: mds.cephfs.compute-0.clmmzw asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Feb  2 05:24:37 np0005604790 ceph-mds[96761]: mds.cephfs.compute-0.clmmzw Can't run that command on an inactive MDS!
Feb  2 05:24:37 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.18825 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:24:37 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config log"} v 0)
Feb  2 05:24:37 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3622334637' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Feb  2 05:24:37 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:24:37.270Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:24:37 np0005604790 ceph-mds[96761]: mds.cephfs.compute-0.clmmzw asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Feb  2 05:24:37 np0005604790 ceph-mds[96761]: mds.cephfs.compute-0.clmmzw Can't run that command on an inactive MDS!
Feb  2 05:24:37 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:24:37 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:24:37 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:24:37.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:24:37 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.28444 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:24:37 np0005604790 ceph-mds[96761]: mds.cephfs.compute-0.clmmzw asok_command: get subtrees {prefix=get subtrees} (starting...)
Feb  2 05:24:37 np0005604790 ceph-mds[96761]: mds.cephfs.compute-0.clmmzw Can't run that command on an inactive MDS!
Feb  2 05:24:37 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Feb  2 05:24:37 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3860226350' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Feb  2 05:24:37 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.18843 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:24:37 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:24:37 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:24:37 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:24:37.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:24:37 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.28541 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:24:37 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config-key dump"} v 0)
Feb  2 05:24:37 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3115935709' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Feb  2 05:24:37 np0005604790 ceph-mds[96761]: mds.cephfs.compute-0.clmmzw asok_command: ops {prefix=ops} (starting...)
Feb  2 05:24:37 np0005604790 ceph-mds[96761]: mds.cephfs.compute-0.clmmzw Can't run that command on an inactive MDS!
Feb  2 05:24:37 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:24:37 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Feb  2 05:24:37 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Feb  2 05:24:37 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Feb  2 05:24:37 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1188552163' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Feb  2 05:24:38 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.28568 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:24:38 np0005604790 nova_compute[252672]: 2026-02-02 10:24:38.073 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:24:38 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.18870 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:24:38 np0005604790 nova_compute[252672]: 2026-02-02 10:24:38.279 252676 DEBUG oslo_service.periodic_task [None req-b3336581-c94b-455a-8165-5acde3177123 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb  2 05:24:38 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Feb  2 05:24:38 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Feb  2 05:24:38 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Feb  2 05:24:38 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3726403783' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Feb  2 05:24:38 np0005604790 ceph-mds[96761]: mds.cephfs.compute-0.clmmzw asok_command: session ls {prefix=session ls} (starting...)
Feb  2 05:24:38 np0005604790 ceph-mds[96761]: mds.cephfs.compute-0.clmmzw Can't run that command on an inactive MDS!
Feb  2 05:24:38 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.18888 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:24:38 np0005604790 ceph-mds[96761]: mds.cephfs.compute-0.clmmzw asok_command: status {prefix=status} (starting...)
Feb  2 05:24:38 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1362: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:24:38 np0005604790 nova_compute[252672]: 2026-02-02 10:24:38.716 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:24:38 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.28516 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:24:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T10:24:38.781+0000 7f38b3f31640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Feb  2 05:24:38 np0005604790 ceph-mgr[74785]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Feb  2 05:24:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:24:38.914Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb  2 05:24:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:24:38.915Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb  2 05:24:38 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:24:38.915Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb  2 05:24:39 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Feb  2 05:24:39 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/644854955' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Feb  2 05:24:39 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Feb  2 05:24:39 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3225960705' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Feb  2 05:24:39 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.28634 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:24:39 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T10:24:39.312+0000 7f38b3f31640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Feb  2 05:24:39 np0005604790 ceph-mgr[74785]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Feb  2 05:24:39 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:24:39 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:24:39 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:24:39.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:24:39 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Feb  2 05:24:39 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1959502888' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Feb  2 05:24:39 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:24:39 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:24:39 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:24:39.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:24:39 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.28549 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:24:39 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Feb  2 05:24:39 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/503603035' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Feb  2 05:24:40 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Feb  2 05:24:40 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2590607965' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Feb  2 05:24:40 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.28667 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:24:40 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.18969 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:24:40 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: 2026-02-02T10:24:40.114+0000 7f38b3f31640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Feb  2 05:24:40 np0005604790 ceph-mgr[74785]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Feb  2 05:24:40 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.28570 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:24:40 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat"} v 0)
Feb  2 05:24:40 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/350123171' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Feb  2 05:24:40 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Feb  2 05:24:40 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4059925757' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Feb  2 05:24:40 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1363: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:24:40 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.28706 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:24:40 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.28588 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:24:40 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Feb  2 05:24:40 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3157816002' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Feb  2 05:24:41 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:24:40 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:24:41 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:24:41 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:24:41 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:24:41 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:24:41 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:24:41 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:24:41 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Feb  2 05:24:41 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2105064198' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Feb  2 05:24:41 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.28712 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:24:41 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.28615 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:24:41 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.19008 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:24:41 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:24:41 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:24:41 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:24:41.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:24:41 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.28733 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:24:41 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:24:41 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:24:41 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:24:41.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:24:41 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Feb  2 05:24:41 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2271174169' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Feb  2 05:24:41 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.28630 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:24:41 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.19023 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:24:41 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.28757 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:24:42 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.28654 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:24:42 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Feb  2 05:24:42 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2951791200' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77987840 unmapped: 614400 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77987840 unmapped: 614400 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77987840 unmapped: 614400 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77987840 unmapped: 614400 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911390 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77987840 unmapped: 614400 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77987840 unmapped: 614400 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77987840 unmapped: 614400 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77987840 unmapped: 614400 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77987840 unmapped: 614400 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911390 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 606208 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 606208 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 606208 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 606208 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 606208 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911390 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 606208 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 606208 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 606208 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 606208 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 606208 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911390 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 606208 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 606208 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 606208 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 606208 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 606208 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911390 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 606208 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 606208 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 606208 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 606208 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 598016 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911390 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 598016 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 598016 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 598016 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 598016 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 598016 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911390 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 589824 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 589824 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 589824 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 589824 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 589824 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911390 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 589824 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 589824 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 589824 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 589824 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 589824 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911390 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 589824 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 589824 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 589824 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 589824 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 589824 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911390 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 589824 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 589824 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 589824 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 589824 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 589824 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911390 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 589824 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 589824 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78020608 unmapped: 581632 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78020608 unmapped: 581632 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78020608 unmapped: 581632 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911390 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78020608 unmapped: 581632 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78020608 unmapped: 581632 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78020608 unmapped: 581632 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78020608 unmapped: 581632 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78020608 unmapped: 581632 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911390 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78020608 unmapped: 581632 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78020608 unmapped: 581632 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78020608 unmapped: 581632 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78020608 unmapped: 581632 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78020608 unmapped: 581632 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911390 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78020608 unmapped: 581632 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78020608 unmapped: 581632 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78020608 unmapped: 581632 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78020608 unmapped: 581632 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77938688 unmapped: 663552 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911390 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77938688 unmapped: 663552 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77938688 unmapped: 663552 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77938688 unmapped: 663552 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77938688 unmapped: 663552 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77955072 unmapped: 647168 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911390 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77955072 unmapped: 647168 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77955072 unmapped: 647168 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77955072 unmapped: 647168 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77955072 unmapped: 647168 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77955072 unmapped: 647168 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911390 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77955072 unmapped: 647168 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 97.760246277s of 97.782371521s, submitted: 2
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77971456 unmapped: 630784 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77971456 unmapped: 630784 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77971456 unmapped: 630784 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77979648 unmapped: 622592 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912902 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77979648 unmapped: 622592 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77979648 unmapped: 622592 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77979648 unmapped: 622592 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77979648 unmapped: 622592 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77979648 unmapped: 622592 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912902 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77979648 unmapped: 622592 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77979648 unmapped: 622592 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77979648 unmapped: 622592 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77979648 unmapped: 622592 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 606208 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 ms_handle_reset con 0x564f16bfac00 session 0x564f16908780
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912902 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 606208 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 ms_handle_reset con 0x564f1687b800 session 0x564f16cb43c0
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 606208 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 606208 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 606208 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 606208 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912902 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 606208 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 606208 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 606208 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 606208 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 606208 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912902 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 606208 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 606208 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 606208 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 606208 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 27.736358643s of 27.743860245s, submitted: 1
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 598016 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 914414 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 598016 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 598016 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78020608 unmapped: 581632 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78020608 unmapped: 581632 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78036992 unmapped: 565248 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915926 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78036992 unmapped: 565248 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78036992 unmapped: 565248 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78036992 unmapped: 565248 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78036992 unmapped: 565248 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.323648453s of 10.332687378s, submitted: 2
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78036992 unmapped: 565248 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915335 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78036992 unmapped: 565248 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78036992 unmapped: 565248 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78036992 unmapped: 565248 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78036992 unmapped: 565248 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78036992 unmapped: 565248 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915335 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78036992 unmapped: 565248 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78036992 unmapped: 565248 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78045184 unmapped: 557056 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78045184 unmapped: 557056 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78045184 unmapped: 557056 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915335 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78045184 unmapped: 557056 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78045184 unmapped: 557056 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78045184 unmapped: 557056 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78045184 unmapped: 557056 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78069760 unmapped: 532480 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915335 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78069760 unmapped: 532480 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78069760 unmapped: 532480 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78069760 unmapped: 532480 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78069760 unmapped: 532480 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 ms_handle_reset con 0x564f14a8f800 session 0x564f17304f00
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78069760 unmapped: 532480 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915335 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78069760 unmapped: 532480 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78069760 unmapped: 532480 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78069760 unmapped: 532480 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78069760 unmapped: 532480 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78069760 unmapped: 532480 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915335 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78069760 unmapped: 532480 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78069760 unmapped: 532480 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78069760 unmapped: 532480 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78069760 unmapped: 532480 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78077952 unmapped: 524288 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915335 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78077952 unmapped: 524288 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78077952 unmapped: 524288 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78077952 unmapped: 524288 heap: 78602240 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 33.660980225s of 33.682113647s, submitted: 1
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 79126528 unmapped: 524288 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 79142912 unmapped: 507904 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 916847 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 79142912 unmapped: 507904 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 79142912 unmapped: 507904 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 79142912 unmapped: 507904 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 79142912 unmapped: 507904 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 1556480 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 916256 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 1556480 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 1556480 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 1556480 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 1556480 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 1548288 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915665 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 1540096 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 1540096 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 1540096 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 1540096 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 1540096 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915665 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 1540096 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 1540096 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 1540096 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 1540096 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78127104 unmapped: 1523712 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915665 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78127104 unmapped: 1523712 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78127104 unmapped: 1523712 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78127104 unmapped: 1523712 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78127104 unmapped: 1523712 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78127104 unmapped: 1523712 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915665 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78127104 unmapped: 1523712 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78127104 unmapped: 1523712 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1515520 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1515520 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1515520 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915665 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1515520 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1515520 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1515520 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1515520 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1515520 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915665 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1515520 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1515520 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1515520 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1515520 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 1499136 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915665 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 1499136 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 1499136 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 1499136 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 1499136 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 1499136 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915665 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 1499136 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 1499136 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 1499136 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 1490944 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 1490944 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915665 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 1490944 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 1490944 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 1490944 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 ms_handle_reset con 0x564f167e8000 session 0x564f16a9cd20
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 1490944 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 1490944 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915665 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 1490944 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 1490944 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 1490944 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 1474560 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 1474560 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 ms_handle_reset con 0x564f1682e400 session 0x564f16770d20
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915665 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 1474560 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 1474560 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 1474560 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 1474560 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 1474560 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915665 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 1474560 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 1474560 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 1474560 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 1474560 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 1474560 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 72.601402283s of 72.615432739s, submitted: 3
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917177 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 1474560 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 1474560 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 1474560 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 1474560 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 1474560 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 916586 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 1474560 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 1474560 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 1474560 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78192640 unmapped: 1458176 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78192640 unmapped: 1458176 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 916586 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78192640 unmapped: 1458176 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78192640 unmapped: 1458176 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78192640 unmapped: 1458176 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78192640 unmapped: 1458176 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78192640 unmapped: 1458176 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 916586 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78192640 unmapped: 1458176 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78192640 unmapped: 1458176 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 ms_handle_reset con 0x564f14b9d000 session 0x564f16a82960
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78192640 unmapped: 1458176 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78200832 unmapped: 1449984 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78200832 unmapped: 1449984 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 916586 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78200832 unmapped: 1449984 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78200832 unmapped: 1449984 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78200832 unmapped: 1449984 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78200832 unmapped: 1449984 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78200832 unmapped: 1449984 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 916586 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78200832 unmapped: 1449984 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78200832 unmapped: 1449984 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78200832 unmapped: 1449984 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78217216 unmapped: 1433600 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78217216 unmapped: 1433600 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 916586 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78217216 unmapped: 1433600 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 31.518285751s of 31.530691147s, submitted: 2
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78217216 unmapped: 1433600 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78217216 unmapped: 1433600 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78217216 unmapped: 1433600 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78217216 unmapped: 1433600 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918098 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78217216 unmapped: 1433600 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78217216 unmapped: 1433600 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78225408 unmapped: 1425408 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78225408 unmapped: 1425408 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78225408 unmapped: 1425408 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917507 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78225408 unmapped: 1425408 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78225408 unmapped: 1425408 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78225408 unmapped: 1425408 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78225408 unmapped: 1425408 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78225408 unmapped: 1425408 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917507 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78225408 unmapped: 1425408 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78225408 unmapped: 1425408 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78225408 unmapped: 1425408 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 1409024 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 1409024 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917507 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 1409024 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 1409024 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 1409024 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 1400832 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 1400832 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917507 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 1400832 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 1400832 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 1400832 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 1400832 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 1400832 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917507 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 1400832 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 1400832 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 1400832 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 1400832 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 32.605617523s of 32.614505768s, submitted: 2
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 ms_handle_reset con 0x564f16830800 session 0x564f16a94f00
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 1400832 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 916916 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 1400832 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 1392640 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 1392640 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 1376256 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 1376256 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 916916 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 1376256 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 1376256 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 1376256 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 1376256 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 1376256 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 916916 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 1376256 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 1376256 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 1376256 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 1376256 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 1376256 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 916916 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 1376256 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 1376256 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 1376256 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 1376256 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 1376256 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 916916 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 1376256 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 1376256 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 1376256 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 1359872 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 1359872 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 916916 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 1359872 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 1359872 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 1359872 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 1359872 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 1359872 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 916916 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 1359872 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 1359872 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 1359872 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 1359872 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 1359872 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 916916 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 1359872 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 1359872 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 1359872 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 1359872 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 1359872 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 916916 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 1359872 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 1359872 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 1359872 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 1359872 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 1359872 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 916916 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 1359872 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 1359872 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 1359872 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 1359872 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 916916 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 55.637229919s of 55.641765594s, submitted: 1
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918428 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918428 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918428 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918428 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918428 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918428 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918428 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918428 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 ms_handle_reset con 0x564f14b9dc00 session 0x564f167710e0
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918428 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918428 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918428 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 51.884658813s of 51.889488220s, submitted: 1
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1200.1 total, 600.0 interval
Cumulative writes: 6867 writes, 27K keys, 6867 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
Cumulative WAL: 6867 writes, 1311 syncs, 5.24 writes per sync, written: 0.02 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 521 writes, 817 keys, 521 commit groups, 1.0 writes per commit group, ingest: 0.26 MB, 0.00 MB/s
Interval WAL: 521 writes, 251 syncs, 2.08 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x564f13165350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.9e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x564f13165350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.9e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921452 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920861 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920861 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 1343488 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 1343488 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 1343488 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1335296 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1335296 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920861 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1335296 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.083230972s of 20.095113754s, submitted: 3
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1335296 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1335296 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1335296 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1335296 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 922373 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1335296 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1335296 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1335296 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1335296 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1335296 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921191 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1335296 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1335296 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1335296 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1335296 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1327104 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921191 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1327104 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1327104 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1327104 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1327104 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1327104 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921191 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1327104 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1327104 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1327104 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1327104 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1327104 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921191 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1327104 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1327104 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1327104 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1327104 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1327104 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921191 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1327104 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1327104 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1327104 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1327104 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1327104 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921191 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1327104 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1327104 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1327104 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1327104 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1327104 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921191 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1327104 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1327104 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1327104 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1327104 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1327104 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921191 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1327104 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1327104 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78340096 unmapped: 1310720 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78340096 unmapped: 1310720 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78340096 unmapped: 1310720 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921191 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78340096 unmapped: 1310720 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78340096 unmapped: 1310720 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78340096 unmapped: 1310720 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78340096 unmapped: 1310720 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78340096 unmapped: 1310720 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921191 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78340096 unmapped: 1310720 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78340096 unmapped: 1310720 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78340096 unmapped: 1310720 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78340096 unmapped: 1310720 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 58.688781738s of 58.700057983s, submitted: 3
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78405632 unmapped: 1245184 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921191 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78454784 unmapped: 1196032 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1032192 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1015808 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1015808 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1015808 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 922703 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1015808 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1015808 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 1007616 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 1007616 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 1007616 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 922112 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 1007616 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 1007616 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 1007616 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 1007616 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 1007616 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread fragmentation_score=0.000024 took=0.000080s
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 922112 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 991232 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 991232 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 991232 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 983040 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 983040 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 922112 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 983040 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 983040 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 983040 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 974848 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 974848 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 922112 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 974848 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 974848 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 974848 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 974848 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 974848 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 922112 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 974848 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 966656 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 966656 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 958464 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 958464 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 922112 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 958464 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 958464 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 958464 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 958464 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 958464 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 922112 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 950272 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 950272 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 950272 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 950272 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 950272 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 922112 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 950272 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 950272 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 950272 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 ms_handle_reset con 0x564f169ee800 session 0x564f17761680
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 950272 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 950272 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 922112 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 950272 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 950272 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 950272 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 950272 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 950272 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 922112 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 950272 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 950272 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 950272 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 950272 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 950272 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 922112 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 950272 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 950272 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 950272 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 950272 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 950272 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 922112 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78708736 unmapped: 942080 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 933888 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 933888 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 933888 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 933888 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 922112 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 933888 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 71.293060303s of 72.358764648s, submitted: 253
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 933888 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 933888 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 933888 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 933888 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 921521 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x1613c0/0x20e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 933888 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 933888 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 139 handle_osd_map epochs [139,140], i have 139, src has [1,140]
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78725120 unmapped: 925696 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fca0a000/0x0/0x4ffc00000, data 0x1634ac/0x211000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 140 handle_osd_map epochs [141,141], i have 140, src has [1,141]
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 884736 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 141 handle_osd_map epochs [141,142], i have 141, src has [1,142]
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 142 ms_handle_reset con 0x564f16bf7400 session 0x564f17760d20
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 851968 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993058 data_alloc: 218103808 data_used: 180224
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 88227840 unmapped: 8208384 heap: 96436224 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 142 handle_osd_map epochs [142,143], i have 142, src has [1,143]
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.836822510s of 10.181390762s, submitted: 49
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 143 ms_handle_reset con 0x564f1682d800 session 0x564f177612c0
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16392192 heap: 96436224 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 80060416 unmapped: 16375808 heap: 96436224 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 80060416 unmapped: 16375808 heap: 96436224 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fbd8e000/0x0/0x4ffc00000, data 0xdd9852/0xe8d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 80060416 unmapped: 16375808 heap: 96436224 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026508 data_alloc: 218103808 data_used: 184320
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 80060416 unmapped: 16375808 heap: 96436224 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fbd8e000/0x0/0x4ffc00000, data 0xdd9852/0xe8d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 143 handle_osd_map epochs [144,144], i have 143, src has [1,144]
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 143 handle_osd_map epochs [144,144], i have 144, src has [1,144]
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 16351232 heap: 96436224 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 16351232 heap: 96436224 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 16351232 heap: 96436224 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 16351232 heap: 96436224 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029082 data_alloc: 218103808 data_used: 184320
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 16351232 heap: 96436224 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 16351232 heap: 96436224 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fbd8b000/0x0/0x4ffc00000, data 0xddb824/0xe90000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 16351232 heap: 96436224 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 16351232 heap: 96436224 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 16351232 heap: 96436224 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029082 data_alloc: 218103808 data_used: 184320
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 16351232 heap: 96436224 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 16351232 heap: 96436224 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 16351232 heap: 96436224 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fbd8b000/0x0/0x4ffc00000, data 0xddb824/0xe90000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 16351232 heap: 96436224 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 16351232 heap: 96436224 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029082 data_alloc: 218103808 data_used: 184320
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 16351232 heap: 96436224 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fbd8b000/0x0/0x4ffc00000, data 0xddb824/0xe90000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 16351232 heap: 96436224 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fbd8b000/0x0/0x4ffc00000, data 0xddb824/0xe90000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 16351232 heap: 96436224 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 16343040 heap: 96436224 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 16343040 heap: 96436224 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029082 data_alloc: 218103808 data_used: 184320
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 16343040 heap: 96436224 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fbd8b000/0x0/0x4ffc00000, data 0xddb824/0xe90000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 16343040 heap: 96436224 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 80093184 unmapped: 16343040 heap: 96436224 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fbd8b000/0x0/0x4ffc00000, data 0xddb824/0xe90000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 80101376 unmapped: 16334848 heap: 96436224 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 80101376 unmapped: 16334848 heap: 96436224 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029082 data_alloc: 218103808 data_used: 184320
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 80101376 unmapped: 16334848 heap: 96436224 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 80101376 unmapped: 16334848 heap: 96436224 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fbd8b000/0x0/0x4ffc00000, data 0xddb824/0xe90000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 80101376 unmapped: 16334848 heap: 96436224 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 80101376 unmapped: 16334848 heap: 96436224 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fbd8b000/0x0/0x4ffc00000, data 0xddb824/0xe90000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 80101376 unmapped: 16334848 heap: 96436224 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029082 data_alloc: 218103808 data_used: 184320
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 80101376 unmapped: 16334848 heap: 96436224 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 80101376 unmapped: 16334848 heap: 96436224 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fbd8b000/0x0/0x4ffc00000, data 0xddb824/0xe90000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 80101376 unmapped: 16334848 heap: 96436224 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 80101376 unmapped: 16334848 heap: 96436224 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 80101376 unmapped: 16334848 heap: 96436224 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 38.726776123s of 38.770790100s, submitted: 28
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fbd8b000/0x0/0x4ffc00000, data 0xddb824/0xe90000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1030594 data_alloc: 218103808 data_used: 184320
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 15286272 heap: 96436224 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 15286272 heap: 96436224 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 144 ms_handle_reset con 0x564f167ecc00 session 0x564f167712c0
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 144 ms_handle_reset con 0x564f16bf5800 session 0x564f16770000
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 144 ms_handle_reset con 0x564f167ec800 session 0x564f157b3a40
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 81141760 unmapped: 15294464 heap: 96436224 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 144 ms_handle_reset con 0x564f167ec800 session 0x564f157b2b40
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 144 ms_handle_reset con 0x564f167ecc00 session 0x564f16883a40
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 81166336 unmapped: 15269888 heap: 96436224 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 144 ms_handle_reset con 0x564f16bf5800 session 0x564f14565a40
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 81166336 unmapped: 15269888 heap: 96436224 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 144 handle_osd_map epochs [144,145], i have 144, src has [1,145]
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1035392 data_alloc: 218103808 data_used: 188416
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 145 heartbeat osd_stat(store_statfs(0x4fbd87000/0x0/0x4ffc00000, data 0xddd918/0xe94000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 81174528 unmapped: 15261696 heap: 96436224 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 145 handle_osd_map epochs [146,146], i have 145, src has [1,146]
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 146 ms_handle_reset con 0x564f16bf7400 session 0x564f14564960
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 82567168 unmapped: 17547264 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 146 ms_handle_reset con 0x564f167ec000 session 0x564f16908000
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 146 ms_handle_reset con 0x564f167ec000 session 0x564f16909680
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 146 ms_handle_reset con 0x564f167ec800 session 0x564f13f0a000
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 146 ms_handle_reset con 0x564f167ecc00 session 0x564f17305680
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 146 heartbeat osd_stat(store_statfs(0x4fbc70000/0x0/0x4ffc00000, data 0xef3a63/0xfab000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 82575360 unmapped: 17539072 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 146 ms_handle_reset con 0x564f14b9e400 session 0x564f165b54a0
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 82575360 unmapped: 17539072 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 82575360 unmapped: 17539072 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 146 ms_handle_reset con 0x564f14b9c400 session 0x564f165b4960
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1094874 data_alloc: 218103808 data_used: 188416
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 82575360 unmapped: 17539072 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 146 handle_osd_map epochs [146,147], i have 146, src has [1,147]
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.541642189s of 10.736811638s, submitted: 30
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fb5a6000/0x0/0x4ffc00000, data 0x15bda63/0x1675000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f14b9e400 session 0x564f177434a0
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 82591744 unmapped: 17522688 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167ec000 session 0x564f17742d20
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 82894848 unmapped: 17219584 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 82944000 unmapped: 17170432 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 89538560 unmapped: 10575872 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150903 data_alloc: 218103808 data_used: 7323648
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 89538560 unmapped: 10575872 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fb57f000/0x0/0x4ffc00000, data 0x15e3a45/0x169d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 89538560 unmapped: 10575872 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 89538560 unmapped: 10575872 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 89538560 unmapped: 10575872 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 89538560 unmapped: 10575872 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150903 data_alloc: 218103808 data_used: 7323648
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 89538560 unmapped: 10575872 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 89538560 unmapped: 10575872 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fb57f000/0x0/0x4ffc00000, data 0x15e3a45/0x169d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 89538560 unmapped: 10575872 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 89538560 unmapped: 10575872 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.760478020s of 13.797927856s, submitted: 20
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 90898432 unmapped: 9216000 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177579 data_alloc: 218103808 data_used: 7467008
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91553792 unmapped: 8560640 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fb298000/0x0/0x4ffc00000, data 0x18caa45/0x1984000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fb28b000/0x0/0x4ffc00000, data 0x18d6a45/0x1990000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91725824 unmapped: 8388608 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91725824 unmapped: 8388608 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91725824 unmapped: 8388608 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91725824 unmapped: 8388608 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fb28b000/0x0/0x4ffc00000, data 0x18d6a45/0x1990000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183745 data_alloc: 218103808 data_used: 7462912
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91742208 unmapped: 8372224 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91742208 unmapped: 8372224 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91742208 unmapped: 8372224 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91742208 unmapped: 8372224 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fb28b000/0x0/0x4ffc00000, data 0x18d6a45/0x1990000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91742208 unmapped: 8372224 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1185257 data_alloc: 218103808 data_used: 7462912
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91742208 unmapped: 8372224 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91742208 unmapped: 8372224 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91521024 unmapped: 8593408 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.059681892s of 13.153066635s, submitted: 20
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91521024 unmapped: 8593408 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fb28c000/0x0/0x4ffc00000, data 0x18d6a45/0x1990000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91529216 unmapped: 8585216 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fb28c000/0x0/0x4ffc00000, data 0x18d6a45/0x1990000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183138 data_alloc: 218103808 data_used: 7462912
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91480064 unmapped: 8634368 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91480064 unmapped: 8634368 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91480064 unmapped: 8634368 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91480064 unmapped: 8634368 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fb28c000/0x0/0x4ffc00000, data 0x18d6a45/0x1990000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91480064 unmapped: 8634368 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183138 data_alloc: 218103808 data_used: 7462912
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91480064 unmapped: 8634368 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f1687b000 session 0x564f157625a0
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f14586000 session 0x564f15762780
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91496448 unmapped: 8617984 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fb28c000/0x0/0x4ffc00000, data 0x18d6a45/0x1990000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91496448 unmapped: 8617984 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.025923729s of 10.087536812s, submitted: 9
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167e9400 session 0x564f1576a1e0
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167e9400 session 0x564f1576a000
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f14586000 session 0x564f1576a780
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f14b9e400 session 0x564f168832c0
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167ec000 session 0x564f1576b2c0
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91521024 unmapped: 8593408 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fb20c000/0x0/0x4ffc00000, data 0x1956a45/0x1a10000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91521024 unmapped: 8593408 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1190221 data_alloc: 218103808 data_used: 7462912
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91521024 unmapped: 8593408 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91521024 unmapped: 8593408 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91521024 unmapped: 8593408 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91521024 unmapped: 8593408 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fb20c000/0x0/0x4ffc00000, data 0x1956a45/0x1a10000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91529216 unmapped: 8585216 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1190221 data_alloc: 218103808 data_used: 7462912
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91529216 unmapped: 8585216 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f16bfbc00 session 0x564f153de5a0
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91545600 unmapped: 8568832 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91365376 unmapped: 8749056 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fb20b000/0x0/0x4ffc00000, data 0x1956a68/0x1a11000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91586560 unmapped: 8527872 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91586560 unmapped: 8527872 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195806 data_alloc: 218103808 data_used: 7987200
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91586560 unmapped: 8527872 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91594752 unmapped: 8519680 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91594752 unmapped: 8519680 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91594752 unmapped: 8519680 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fb20b000/0x0/0x4ffc00000, data 0x1956a68/0x1a11000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91594752 unmapped: 8519680 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195806 data_alloc: 218103808 data_used: 7987200
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91594752 unmapped: 8519680 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167e8c00 session 0x564f17742960
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91594752 unmapped: 8519680 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.765953064s of 19.861989975s, submitted: 13
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91594752 unmapped: 8519680 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91963392 unmapped: 8151040 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91963392 unmapped: 8151040 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4faf73000/0x0/0x4ffc00000, data 0x1beea68/0x1ca9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216448 data_alloc: 218103808 data_used: 7991296
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91963392 unmapped: 8151040 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4faf73000/0x0/0x4ffc00000, data 0x1beea68/0x1ca9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91971584 unmapped: 8142848 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 92037120 unmapped: 8077312 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f16bf5000 session 0x564f16a94d20
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 92037120 unmapped: 8077312 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 92037120 unmapped: 8077312 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216448 data_alloc: 218103808 data_used: 7991296
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 92037120 unmapped: 8077312 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4faf73000/0x0/0x4ffc00000, data 0x1beea68/0x1ca9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 92037120 unmapped: 8077312 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f16833400 session 0x564f153def00
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f157b4400 session 0x564f14a10b40
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4faf73000/0x0/0x4ffc00000, data 0x1beea68/0x1ca9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.828751564s of 10.000874519s, submitted: 36
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91856896 unmapped: 8257536 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167e2c00 session 0x564f172410e0
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91889664 unmapped: 8224768 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fb28b000/0x0/0x4ffc00000, data 0x18d6a45/0x1990000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91889664 unmapped: 8224768 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1191022 data_alloc: 218103808 data_used: 7462912
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fb28b000/0x0/0x4ffc00000, data 0x18d6a45/0x1990000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91897856 unmapped: 8216576 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91897856 unmapped: 8216576 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91897856 unmapped: 8216576 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fb28b000/0x0/0x4ffc00000, data 0x18d6a45/0x1990000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91897856 unmapped: 8216576 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91897856 unmapped: 8216576 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f169ef400 session 0x564f16cb5a40
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f1682f000 session 0x564f172412c0
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192142 data_alloc: 218103808 data_used: 7462912
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91774976 unmapped: 8339456 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167e9c00 session 0x564f17241680
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 87531520 unmapped: 12582912 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 87531520 unmapped: 12582912 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 87531520 unmapped: 12582912 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fb972000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.902652740s of 12.135910988s, submitted: 46
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 87531520 unmapped: 12582912 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1061527 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 87531520 unmapped: 12582912 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fb972000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 87531520 unmapped: 12582912 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 87531520 unmapped: 12582912 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 87531520 unmapped: 12582912 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 87531520 unmapped: 12582912 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1061527 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 87539712 unmapped: 12574720 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fb972000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 87539712 unmapped: 12574720 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fb972000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 87539712 unmapped: 12574720 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 87539712 unmapped: 12574720 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fb972000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 87539712 unmapped: 12574720 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1060345 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 87539712 unmapped: 12574720 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 87539712 unmapped: 12574720 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fb972000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 87539712 unmapped: 12574720 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 87539712 unmapped: 12574720 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 87539712 unmapped: 12574720 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.323100090s of 15.335790634s, submitted: 3
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f16bf5000 session 0x564f153de000
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f16bf6000 session 0x564f153f2f00
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f16bf6000 session 0x564f16a830e0
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167e9c00 session 0x564f16cb4780
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167e8800 session 0x564f1765f4a0
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1087666 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 87564288 unmapped: 12550144 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 87564288 unmapped: 12550144 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 87564288 unmapped: 12550144 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fb5d8000/0x0/0x4ffc00000, data 0x117ba35/0x1234000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.19038 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 87564288 unmapped: 12550144 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 87564288 unmapped: 12550144 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1087666 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 87564288 unmapped: 12550144 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 87564288 unmapped: 12550144 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 87564288 unmapped: 12550144 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f16831c00 session 0x564f13f0be00
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f157b5800 session 0x564f13f0a3c0
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 87564288 unmapped: 12550144 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fb5d8000/0x0/0x4ffc00000, data 0x117ba35/0x1234000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f157b5800 session 0x564f13f0a000
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167e8800 session 0x564f16908000
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fb5d8000/0x0/0x4ffc00000, data 0x117ba35/0x1234000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 87564288 unmapped: 12550144 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1087798 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 87859200 unmapped: 12255232 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 11132928 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 11132928 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 11132928 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fb5d8000/0x0/0x4ffc00000, data 0x117ba35/0x1234000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 11132928 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1113182 data_alloc: 218103808 data_used: 3969024
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 11132928 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 11132928 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 11132928 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fb5d8000/0x0/0x4ffc00000, data 0x117ba35/0x1234000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 11132928 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 11132928 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1113182 data_alloc: 218103808 data_used: 3969024
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.635030746s of 20.722810745s, submitted: 13
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 91791360 unmapped: 8323072 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 90652672 unmapped: 9461760 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 93044736 unmapped: 7069696 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 93044736 unmapped: 7069696 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9eef000/0x0/0x4ffc00000, data 0x16c4a35/0x177d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 93044736 unmapped: 7069696 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161968 data_alloc: 218103808 data_used: 4472832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9eef000/0x0/0x4ffc00000, data 0x16c4a35/0x177d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 93044736 unmapped: 7069696 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 93110272 unmapped: 7004160 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 93110272 unmapped: 7004160 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 93151232 unmapped: 6963200 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 93151232 unmapped: 6963200 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160417 data_alloc: 218103808 data_used: 4472832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 93151232 unmapped: 6963200 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9eec000/0x0/0x4ffc00000, data 0x16c7a35/0x1780000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 93151232 unmapped: 6963200 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 93151232 unmapped: 6963200 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 93151232 unmapped: 6963200 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 93151232 unmapped: 6963200 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9eec000/0x0/0x4ffc00000, data 0x16c7a35/0x1780000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160417 data_alloc: 218103808 data_used: 4472832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 93151232 unmapped: 6963200 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9eec000/0x0/0x4ffc00000, data 0x16c7a35/0x1780000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 93167616 unmapped: 6946816 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 93167616 unmapped: 6946816 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 93167616 unmapped: 6946816 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 93167616 unmapped: 6946816 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160417 data_alloc: 218103808 data_used: 4472832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9eec000/0x0/0x4ffc00000, data 0x16c7a35/0x1780000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 93167616 unmapped: 6946816 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 93167616 unmapped: 6946816 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 93184000 unmapped: 6930432 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 93184000 unmapped: 6930432 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 93184000 unmapped: 6930432 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161025 data_alloc: 218103808 data_used: 4534272
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 93184000 unmapped: 6930432 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9eec000/0x0/0x4ffc00000, data 0x16c7a35/0x1780000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 93184000 unmapped: 6930432 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 26.114570618s of 26.307558060s, submitted: 56
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9eec000/0x0/0x4ffc00000, data 0x16c7a35/0x1780000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 93241344 unmapped: 6873088 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167e9c00 session 0x564f14564960
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f16831c00 session 0x564f16a82960
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 9617408 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167edc00 session 0x564f173043c0
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa7d2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 9617408 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1067155 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 9617408 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa7d2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa7d2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 9617408 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa7d2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 9617408 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 9617408 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa7d2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 9617408 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1067155 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 9617408 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 9617408 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 9617408 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 9617408 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 9617408 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa7d2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1067155 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa7d2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 9617408 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 9617408 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 9617408 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa7d2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 9617408 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa7d2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 9617408 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1067155 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 9617408 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 9617408 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 9617408 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa7d2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 9617408 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 9617408 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1067155 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 9617408 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 9617408 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa7d2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 9617408 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 9617408 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 9617408 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa7d2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1067155 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 9617408 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 29.828050613s of 29.944892883s, submitted: 30
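The _kv_sync_thread utilization line is BlueStore's kv-sync thread reporting on its last interval: it sat idle for 29.83s of a 29.94s window and submitted 30 transactions, so the OSD is doing almost no write work here. Deriving the busy fraction and transaction rate from those numbers:

    # Busy fraction of the kv-sync thread over the reported interval.
    idle, total, submitted = 29.828050613, 29.944892883, 30
    busy = total - idle
    print(f"busy {busy:.3f}s of {total:.3f}s ({busy / total:.2%}), "
          f"{submitted / total:.2f} txns/s")
    # busy 0.117s of 29.945s (0.39%), 1.00 txns/s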
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f14b9fc00 session 0x564f1576ba40
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 90505216 unmapped: 9609216 heap: 100114432 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f14ba5400 session 0x564f16771860
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167e3000 session 0x564f16882000
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 90300416 unmapped: 19259392 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f16833c00 session 0x564f16eabe00
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f1682e800 session 0x564f16908780
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f14b9fc00 session 0x564f15433a40
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f14ba5400 session 0x564f168e41e0
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 90300416 unmapped: 19259392 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 90300416 unmapped: 19259392 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9f7d000/0x0/0x4ffc00000, data 0x1636a35/0x16ef000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1129453 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 90300416 unmapped: 19259392 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167e3000 session 0x564f16a9c3c0
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9f7d000/0x0/0x4ffc00000, data 0x1636a35/0x16ef000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 90300416 unmapped: 19259392 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9f7d000/0x0/0x4ffc00000, data 0x1636a35/0x16ef000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 90300416 unmapped: 19259392 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167ec000 session 0x564f16cb50e0
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167ea800 session 0x564f16a82f00
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f14b9fc00 session 0x564f16a82b40
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f14ba5400 session 0x564f16eae000
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 90316800 unmapped: 19243008 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 90316800 unmapped: 19243008 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1131267 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 90324992 unmapped: 19234816 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 13615104 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9f7c000/0x0/0x4ffc00000, data 0x1636a45/0x16f0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 13582336 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 13582336 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 13582336 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1190091 data_alloc: 218103808 data_used: 8843264
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 95977472 unmapped: 13582336 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9f7c000/0x0/0x4ffc00000, data 0x1636a45/0x16f0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 96010240 unmapped: 13549568 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 96010240 unmapped: 13549568 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 96010240 unmapped: 13549568 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 96010240 unmapped: 13549568 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1190091 data_alloc: 218103808 data_used: 8843264
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 96010240 unmapped: 13549568 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.334577560s of 19.459070206s, submitted: 16
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9f7c000/0x0/0x4ffc00000, data 0x1636a45/0x16f0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [0,1,2])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 101457920 unmapped: 8101888 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102375424 unmapped: 7184384 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102375424 unmapped: 7184384 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102375424 unmapped: 7184384 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9594000/0x0/0x4ffc00000, data 0x201ea45/0x20d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1278225 data_alloc: 234881024 data_used: 9875456
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102416384 unmapped: 7143424 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102416384 unmapped: 7143424 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102416384 unmapped: 7143424 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9594000/0x0/0x4ffc00000, data 0x201ea45/0x20d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 101457920 unmapped: 8101888 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 101457920 unmapped: 8101888 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1271937 data_alloc: 234881024 data_used: 9879552
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 101457920 unmapped: 8101888 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 101457920 unmapped: 8101888 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 101457920 unmapped: 8101888 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 101457920 unmapped: 8101888 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9591000/0x0/0x4ffc00000, data 0x2021a45/0x20db000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.393581390s of 13.718131065s, submitted: 87
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102514688 unmapped: 7045120 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1271234 data_alloc: 234881024 data_used: 9879552
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102514688 unmapped: 7045120 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102514688 unmapped: 7045120 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102514688 unmapped: 7045120 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9590000/0x0/0x4ffc00000, data 0x2022a45/0x20dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102514688 unmapped: 7045120 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9590000/0x0/0x4ffc00000, data 0x2022a45/0x20dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102514688 unmapped: 7045120 heap: 109559808 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9590000/0x0/0x4ffc00000, data 0x2022a45/0x20dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167ea400 session 0x564f16eaeb40
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f14a5b400 session 0x564f16eaf0e0
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1314240 data_alloc: 234881024 data_used: 9879552
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f164d9400 session 0x564f16eaf2c0
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f14a5b400 session 0x564f16eaf4a0
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f14b9fc00 session 0x564f16eaf680
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 12156928 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 12156928 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 12156928 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 12156928 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 12156928 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f8fbc000/0x0/0x4ffc00000, data 0x25f6a45/0x26b0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1314240 data_alloc: 234881024 data_used: 9879552
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 12156928 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f8fbc000/0x0/0x4ffc00000, data 0x25f6a45/0x26b0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167e6400 session 0x564f16eafa40
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 12156928 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102694912 unmapped: 12156928 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102809600 unmapped: 12042240 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108175360 unmapped: 6676480 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f8fbc000/0x0/0x4ffc00000, data 0x25f6a45/0x26b0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1355280 data_alloc: 234881024 data_used: 15953920
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108183552 unmapped: 6668288 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f8fbc000/0x0/0x4ffc00000, data 0x25f6a45/0x26b0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108199936 unmapped: 6651904 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f8fbc000/0x0/0x4ffc00000, data 0x25f6a45/0x26b0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108232704 unmapped: 6619136 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.223644257s of 18.297273636s, submitted: 6
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108298240 unmapped: 6553600 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108298240 unmapped: 6553600 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f8fba000/0x0/0x4ffc00000, data 0x25f7a45/0x26b1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1355920 data_alloc: 234881024 data_used: 15953920
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108298240 unmapped: 6553600 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108298240 unmapped: 6553600 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108298240 unmapped: 6553600 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f8fba000/0x0/0x4ffc00000, data 0x25f7a45/0x26b1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108306432 unmapped: 6545408 heap: 114851840 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 111452160 unmapped: 6119424 heap: 117571584 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1439154 data_alloc: 234881024 data_used: 16728064
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 111616000 unmapped: 5955584 heap: 117571584 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f8577000/0x0/0x4ffc00000, data 0x303ba45/0x30f5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 110821376 unmapped: 6750208 heap: 117571584 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 110821376 unmapped: 6750208 heap: 117571584 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 110821376 unmapped: 6750208 heap: 117571584 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 110821376 unmapped: 6750208 heap: 117571584 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f16830c00 session 0x564f16cb50e0
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167e8800 session 0x564f177612c0
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1443438 data_alloc: 234881024 data_used: 16879616
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.683537483s of 13.015701294s, submitted: 49
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 109699072 unmapped: 7872512 heap: 117571584 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f14a5b400 session 0x564f172401e0
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f856f000/0x0/0x4ffc00000, data 0x3043a45/0x30fd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 104267776 unmapped: 13303808 heap: 117571584 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 104267776 unmapped: 13303808 heap: 117571584 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 104267776 unmapped: 13303808 heap: 117571584 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f958f000/0x0/0x4ffc00000, data 0x2023a45/0x20dd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 104267776 unmapped: 13303808 heap: 117571584 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1279090 data_alloc: 234881024 data_used: 9879552
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 104267776 unmapped: 13303808 heap: 117571584 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 104267776 unmapped: 13303808 heap: 117571584 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f958f000/0x0/0x4ffc00000, data 0x2023a45/0x20dd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 104267776 unmapped: 13303808 heap: 117571584 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f958f000/0x0/0x4ffc00000, data 0x2023a45/0x20dd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 104267776 unmapped: 13303808 heap: 117571584 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167e3000 session 0x564f1576be00
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167ec000 session 0x564f157b34a0
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 104251392 unmapped: 13320192 heap: 117571584 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f1687a000 session 0x564f16a82d20
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1086693 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 98533376 unmapped: 19038208 heap: 117571584 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f958f000/0x0/0x4ffc00000, data 0x2023a45/0x20dd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 98533376 unmapped: 19038208 heap: 117571584 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 98533376 unmapped: 19038208 heap: 117571584 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 98533376 unmapped: 19038208 heap: 117571584 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa7d2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 98533376 unmapped: 19038208 heap: 117571584 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1086693 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 98533376 unmapped: 19038208 heap: 117571584 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 98533376 unmapped: 19038208 heap: 117571584 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 98533376 unmapped: 19038208 heap: 117571584 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa7d2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 98533376 unmapped: 19038208 heap: 117571584 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 98533376 unmapped: 19038208 heap: 117571584 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1086693 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 98533376 unmapped: 19038208 heap: 117571584 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 98533376 unmapped: 19038208 heap: 117571584 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa7d2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 98533376 unmapped: 19038208 heap: 117571584 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa7d2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 98533376 unmapped: 19038208 heap: 117571584 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 98533376 unmapped: 19038208 heap: 117571584 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1086693 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 98533376 unmapped: 19038208 heap: 117571584 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa7d2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 98533376 unmapped: 19038208 heap: 117571584 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 98533376 unmapped: 19038208 heap: 117571584 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa7d2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 98533376 unmapped: 19038208 heap: 117571584 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f16530000 session 0x564f16a9d2c0
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f14a5b400 session 0x564f16a9c5a0
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167e3000 session 0x564f16a9de00
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167ec000 session 0x564f16a9d860
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 98533376 unmapped: 19038208 heap: 117571584 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 29.182113647s of 29.294746399s, submitted: 35
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f1687a000 session 0x564f16a9c000
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f1682f800 session 0x564f14c30b40
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f14a5b400 session 0x564f14a30780
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167e3000 session 0x564f16cb3a40
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167ec000 session 0x564f16cb2000
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1173406 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 97992704 unmapped: 33226752 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 97992704 unmapped: 33226752 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 97992704 unmapped: 33226752 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 97992704 unmapped: 33226752 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9cc7000/0x0/0x4ffc00000, data 0x18eba97/0x19a5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 97992704 unmapped: 33226752 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1173406 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f14b9d400 session 0x564f16cb3860
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 97615872 unmapped: 33603584 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 97632256 unmapped: 33587200 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 98697216 unmapped: 32522240 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9ca2000/0x0/0x4ffc00000, data 0x190faba/0x19ca000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102064128 unmapped: 29155328 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102064128 unmapped: 29155328 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1254751 data_alloc: 234881024 data_used: 11681792
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102064128 unmapped: 29155328 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102064128 unmapped: 29155328 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9ca2000/0x0/0x4ffc00000, data 0x190faba/0x19ca000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102096896 unmapped: 29122560 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102096896 unmapped: 29122560 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102096896 unmapped: 29122560 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1254751 data_alloc: 234881024 data_used: 11681792
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102096896 unmapped: 29122560 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102096896 unmapped: 29122560 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.471843719s of 17.623472214s, submitted: 36
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107601920 unmapped: 23617536 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9ca2000/0x0/0x4ffc00000, data 0x190faba/0x19ca000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108765184 unmapped: 22454272 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 106602496 unmapped: 24616960 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f95f8000/0x0/0x4ffc00000, data 0x1fb3aba/0x206e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1322987 data_alloc: 234881024 data_used: 12836864
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 106602496 unmapped: 24616960 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 106692608 unmapped: 24526848 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f95f8000/0x0/0x4ffc00000, data 0x1fb3aba/0x206e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 106692608 unmapped: 24526848 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 106692608 unmapped: 24526848 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 106692608 unmapped: 24526848 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1323003 data_alloc: 234881024 data_used: 12836864
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 106692608 unmapped: 24526848 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 106700800 unmapped: 24518656 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f95f8000/0x0/0x4ffc00000, data 0x1fb3aba/0x206e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 106733568 unmapped: 24485888 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 106733568 unmapped: 24485888 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 106733568 unmapped: 24485888 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1323307 data_alloc: 234881024 data_used: 12845056
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 106733568 unmapped: 24485888 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 106733568 unmapped: 24485888 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 106733568 unmapped: 24485888 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f95f8000/0x0/0x4ffc00000, data 0x1fb3aba/0x206e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 106741760 unmapped: 24477696 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 106741760 unmapped: 24477696 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f95f8000/0x0/0x4ffc00000, data 0x1fb3aba/0x206e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1323307 data_alloc: 234881024 data_used: 12845056
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 106749952 unmapped: 24469504 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 106749952 unmapped: 24469504 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f95f8000/0x0/0x4ffc00000, data 0x1fb3aba/0x206e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 106782720 unmapped: 24436736 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167e7400 session 0x564f14bdfa40
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f95f8000/0x0/0x4ffc00000, data 0x1fb3aba/0x206e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 24657920 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 24657920 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f95f8000/0x0/0x4ffc00000, data 0x1fb3aba/0x206e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1323915 data_alloc: 234881024 data_used: 12861440
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 24657920 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 24657920 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f95f8000/0x0/0x4ffc00000, data 0x1fb3aba/0x206e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 106569728 unmapped: 24649728 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 106569728 unmapped: 24649728 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f95f8000/0x0/0x4ffc00000, data 0x1fb3aba/0x206e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 106577920 unmapped: 24641536 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f95f8000/0x0/0x4ffc00000, data 0x1fb3aba/0x206e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1323915 data_alloc: 234881024 data_used: 12861440
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 106577920 unmapped: 24641536 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 106577920 unmapped: 24641536 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 106577920 unmapped: 24641536 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f16831400 session 0x564f14a31a40
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167e4000 session 0x564f157b2b40
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f16bf6000 session 0x564f157b3e00
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167e0800 session 0x564f157b3a40
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 30.930034637s of 31.205894470s, submitted: 99
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f16833000 session 0x564f16cb5a40
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167e0800 session 0x564f16cb5c20
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167e4000 session 0x564f17760960
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107749376 unmapped: 23470080 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f16831400 session 0x564f17761c20
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f16bf6000 session 0x564f17241c20
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9273000/0x0/0x4ffc00000, data 0x233cb2c/0x23f9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107749376 unmapped: 23470080 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1356101 data_alloc: 234881024 data_used: 12869632
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107749376 unmapped: 23470080 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107749376 unmapped: 23470080 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9271000/0x0/0x4ffc00000, data 0x233db2c/0x23fa000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107749376 unmapped: 23470080 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107749376 unmapped: 23470080 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107757568 unmapped: 23461888 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9271000/0x0/0x4ffc00000, data 0x233db2c/0x23fa000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1357613 data_alloc: 234881024 data_used: 12869632
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107757568 unmapped: 23461888 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9271000/0x0/0x4ffc00000, data 0x233db2c/0x23fa000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107757568 unmapped: 23461888 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f16bf9c00 session 0x564f14a06780
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108085248 unmapped: 23134208 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108093440 unmapped: 23126016 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 110731264 unmapped: 20488192 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1384918 data_alloc: 234881024 data_used: 16629760
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 110772224 unmapped: 20447232 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 110772224 unmapped: 20447232 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f924e000/0x0/0x4ffc00000, data 0x2361b2c/0x241e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 110772224 unmapped: 20447232 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 110772224 unmapped: 20447232 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f924e000/0x0/0x4ffc00000, data 0x2361b2c/0x241e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 110772224 unmapped: 20447232 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1384918 data_alloc: 234881024 data_used: 16629760
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 110772224 unmapped: 20447232 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 110772224 unmapped: 20447232 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 110772224 unmapped: 20447232 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f924e000/0x0/0x4ffc00000, data 0x2361b2c/0x241e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 110796800 unmapped: 20422656 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.881280899s of 21.064805984s, submitted: 43
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 112123904 unmapped: 19095552 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f924e000/0x0/0x4ffc00000, data 0x2361b2c/0x241e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1406306 data_alloc: 234881024 data_used: 16834560
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 112386048 unmapped: 18833408 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 114098176 unmapped: 17121280 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 114098176 unmapped: 17121280 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 114098176 unmapped: 17121280 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f909c000/0x0/0x4ffc00000, data 0x24fab2c/0x25b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 114098176 unmapped: 17121280 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1412016 data_alloc: 234881024 data_used: 16744448
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 114163712 unmapped: 17055744 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f909c000/0x0/0x4ffc00000, data 0x24fab2c/0x25b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 114163712 unmapped: 17055744 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 114163712 unmapped: 17055744 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 114163712 unmapped: 17055744 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.176107407s of 10.434784889s, submitted: 55
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 114212864 unmapped: 17006592 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1409960 data_alloc: 234881024 data_used: 16732160
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 114229248 unmapped: 16990208 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 114229248 unmapped: 16990208 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f90b5000/0x0/0x4ffc00000, data 0x24fab2c/0x25b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 16973824 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 16973824 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 16973824 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f90b5000/0x0/0x4ffc00000, data 0x24fab2c/0x25b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1409960 data_alloc: 234881024 data_used: 16732160
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f90b5000/0x0/0x4ffc00000, data 0x24fab2c/0x25b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 16973824 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 16973824 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f90b5000/0x0/0x4ffc00000, data 0x24fab2c/0x25b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 16973824 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 16973824 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 16973824 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1409960 data_alloc: 234881024 data_used: 16732160
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 114253824 unmapped: 16965632 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 114253824 unmapped: 16965632 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 114253824 unmapped: 16965632 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f90b5000/0x0/0x4ffc00000, data 0x24fab2c/0x25b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 114253824 unmapped: 16965632 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 114253824 unmapped: 16965632 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.345724106s of 15.359895706s, submitted: 14
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1407944 data_alloc: 234881024 data_used: 16732160
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f90b5000/0x0/0x4ffc00000, data 0x24fab2c/0x25b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 114262016 unmapped: 16957440 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 16949248 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f90b5000/0x0/0x4ffc00000, data 0x24fab2c/0x25b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 16949248 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 16949248 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 16949248 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1407944 data_alloc: 234881024 data_used: 16732160
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 16949248 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f90b5000/0x0/0x4ffc00000, data 0x24fab2c/0x25b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 16949248 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 16949248 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 16949248 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 16949248 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1407944 data_alloc: 234881024 data_used: 16732160
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 114270208 unmapped: 16949248 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 114278400 unmapped: 16941056 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f90b5000/0x0/0x4ffc00000, data 0x24fab2c/0x25b7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 114278400 unmapped: 16941056 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.921648979s of 12.932921410s, submitted: 2
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f16bf9c00 session 0x564f16cb50e0
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167e0800 session 0x564f153243c0
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f14b9fc00 session 0x564f170bb860
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 111804416 unmapped: 19415040 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f959a000/0x0/0x4ffc00000, data 0x1fb4aba/0x206f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 111804416 unmapped: 19415040 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333008 data_alloc: 234881024 data_used: 12910592
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f959a000/0x0/0x4ffc00000, data 0x1fb4aba/0x206f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 111804416 unmapped: 19415040 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 111804416 unmapped: 19415040 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 111804416 unmapped: 19415040 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f95fd000/0x0/0x4ffc00000, data 0x1fb4aba/0x206f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 111804416 unmapped: 19415040 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f95fd000/0x0/0x4ffc00000, data 0x1fb4aba/0x206f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 111804416 unmapped: 19415040 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f95fd000/0x0/0x4ffc00000, data 0x1fb4aba/0x206f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333052 data_alloc: 234881024 data_used: 12910592
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 111804416 unmapped: 19415040 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 111804416 unmapped: 19415040 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 111804416 unmapped: 19415040 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.651408195s of 10.805692673s, submitted: 44
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f17237800 session 0x564f16cb2960
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167e2400 session 0x564f14bdfe00
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 111804416 unmapped: 19415040 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f95fd000/0x0/0x4ffc00000, data 0x1fb4aba/0x206f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f14b9fc00 session 0x564f14c301e0
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 104030208 unmapped: 27189248 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1109408 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 104030208 unmapped: 27189248 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 104030208 unmapped: 27189248 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa458000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 104030208 unmapped: 27189248 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 104030208 unmapped: 27189248 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa458000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa458000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 104030208 unmapped: 27189248 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1109408 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 104030208 unmapped: 27189248 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 104030208 unmapped: 27189248 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1800.1 total, 600.0 interval
Cumulative writes: 8845 writes, 33K keys, 8845 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
Cumulative WAL: 8845 writes, 2156 syncs, 4.10 writes per sync, written: 0.03 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1978 writes, 6320 keys, 1978 commit groups, 1.0 writes per commit group, ingest: 6.41 MB, 0.01 MB/s
Interval WAL: 1978 writes, 845 syncs, 2.34 writes per sync, written: 0.01 GB, 0.01 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 104030208 unmapped: 27189248 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 104030208 unmapped: 27189248 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 104030208 unmapped: 27189248 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1109408 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa458000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 104030208 unmapped: 27189248 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 104030208 unmapped: 27189248 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 104030208 unmapped: 27189248 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 104030208 unmapped: 27189248 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa458000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 104030208 unmapped: 27189248 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa458000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1109408 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 104030208 unmapped: 27189248 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 104030208 unmapped: 27189248 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f17236400 session 0x564f14a065a0
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167e1000 session 0x564f16cb43c0
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f16830000 session 0x564f17305860
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f16bfb000 session 0x564f14a10780
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.039520264s of 19.160972595s, submitted: 37
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f14b9fc00 session 0x564f1659fa40
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167e1000 session 0x564f1576a1e0
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f16830000 session 0x564f1576b680
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f17236400 session 0x564f165b5c20
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 103940096 unmapped: 27279360 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f16bfa400 session 0x564f17304000
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 103940096 unmapped: 27279360 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa553000/0x0/0x4ffc00000, data 0x105eaa7/0x1119000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 103940096 unmapped: 27279360 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136117 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 103940096 unmapped: 27279360 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa553000/0x0/0x4ffc00000, data 0x105eaa7/0x1119000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 103940096 unmapped: 27279360 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 103940096 unmapped: 27279360 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 103940096 unmapped: 27279360 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f16831c00 session 0x564f173043c0
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa553000/0x0/0x4ffc00000, data 0x105eaa7/0x1119000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 103940096 unmapped: 27279360 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136117 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 103342080 unmapped: 27877376 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 103342080 unmapped: 27877376 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f169ef000 session 0x564f16eaf0e0
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f169eec00 session 0x564f15762780
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.980074883s of 10.067814827s, submitted: 28
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 101548032 unmapped: 29671424 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f14b9d800 session 0x564f16a83860
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 101548032 unmapped: 29671424 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 101548032 unmapped: 29671424 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1113795 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa7d1000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 101548032 unmapped: 29671424 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa7d1000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa7d1000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 101548032 unmapped: 29671424 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 101548032 unmapped: 29671424 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 101548032 unmapped: 29671424 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 101548032 unmapped: 29671424 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1113795 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f157b4400 session 0x564f168e5860
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 101548032 unmapped: 29671424 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f14b9d800 session 0x564f16a82960
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f16831c00 session 0x564f16a83a40
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f16531000 session 0x564f16a832c0
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f169eec00 session 0x564f13f0ab40
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 101548032 unmapped: 29671424 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f169ef000 session 0x564f17485c20
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f169ef000 session 0x564f17058960
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f14b9d800 session 0x564f17058000
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f16531000 session 0x564f168c25a0
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f16831c00 session 0x564f16908000
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa7d1000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102391808 unmapped: 28827648 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102391808 unmapped: 28827648 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102391808 unmapped: 28827648 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177657 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9fdc000/0x0/0x4ffc00000, data 0x15d7a35/0x1690000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.231132507s of 13.376890182s, submitted: 36
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102391808 unmapped: 28827648 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f16bf2800 session 0x564f169083c0
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102400000 unmapped: 28819456 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102416384 unmapped: 28803072 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 104357888 unmapped: 26861568 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 104357888 unmapped: 26861568 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226630 data_alloc: 218103808 data_used: 7315456
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f14b9d800 session 0x564f1576ba40
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f16531000 session 0x564f14c30780
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9fdc000/0x0/0x4ffc00000, data 0x15d7a35/0x1690000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 104357888 unmapped: 26861568 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f14b9ec00 session 0x564f13f0a3c0
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102580224 unmapped: 28639232 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa7d2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102580224 unmapped: 28639232 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102580224 unmapped: 28639232 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa7d2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102580224 unmapped: 28639232 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1119456 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102580224 unmapped: 28639232 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102580224 unmapped: 28639232 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa7d2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102580224 unmapped: 28639232 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102580224 unmapped: 28639232 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102580224 unmapped: 28639232 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1119456 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102580224 unmapped: 28639232 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa7d2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102580224 unmapped: 28639232 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102580224 unmapped: 28639232 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102580224 unmapped: 28639232 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa7d2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102580224 unmapped: 28639232 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1119456 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102580224 unmapped: 28639232 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa7d2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102580224 unmapped: 28639232 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102580224 unmapped: 28639232 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102580224 unmapped: 28639232 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102580224 unmapped: 28639232 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1119456 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102580224 unmapped: 28639232 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102580224 unmapped: 28639232 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa7d2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102580224 unmapped: 28639232 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa7d2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102580224 unmapped: 28639232 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa7d2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa7d2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102580224 unmapped: 28639232 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1119456 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f14587800 session 0x564f145641e0
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f14b9c000 session 0x564f16eaef00
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f14587800 session 0x564f1576bc20
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f14b9d800 session 0x564f1659e000
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 29.669031143s of 29.789909363s, submitted: 34
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f14b9ec00 session 0x564f14a06000
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f16531000 session 0x564f14a070e0
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f14a8fc00 session 0x564f16eaeb40
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f14587800 session 0x564f14a06b40
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102653952 unmapped: 28565504 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f14b9d800 session 0x564f165b41e0
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102653952 unmapped: 28565504 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa4d1000/0x0/0x4ffc00000, data 0x10e1a97/0x119b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102653952 unmapped: 28565504 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa4d1000/0x0/0x4ffc00000, data 0x10e1a97/0x119b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102653952 unmapped: 28565504 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102653952 unmapped: 28565504 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1143766 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102653952 unmapped: 28565504 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167e9400 session 0x564f14bdef00
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa4d1000/0x0/0x4ffc00000, data 0x10e1a97/0x119b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102711296 unmapped: 28508160 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102645760 unmapped: 28573696 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102645760 unmapped: 28573696 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa4d0000/0x0/0x4ffc00000, data 0x10e1aba/0x119c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102645760 unmapped: 28573696 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166047 data_alloc: 218103808 data_used: 3338240
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102645760 unmapped: 28573696 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa4d0000/0x0/0x4ffc00000, data 0x10e1aba/0x119c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.037717819s of 11.168728828s, submitted: 36
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102670336 unmapped: 28549120 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102785024 unmapped: 28434432 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa4d0000/0x0/0x4ffc00000, data 0x10e1aba/0x119c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [0,0,1])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102981632 unmapped: 28237824 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa4d0000/0x0/0x4ffc00000, data 0x10e1aba/0x119c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102612992 unmapped: 28606464 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1165999 data_alloc: 218103808 data_used: 3342336
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102612992 unmapped: 28606464 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102612992 unmapped: 28606464 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa4d0000/0x0/0x4ffc00000, data 0x10e1aba/0x119c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102883328 unmapped: 28336128 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102883328 unmapped: 28336128 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa41e000/0x0/0x4ffc00000, data 0x1193aba/0x124e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102899712 unmapped: 28319744 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177889 data_alloc: 218103808 data_used: 3342336
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa418000/0x0/0x4ffc00000, data 0x1199aba/0x1254000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102899712 unmapped: 28319744 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102899712 unmapped: 28319744 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102899712 unmapped: 28319744 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa418000/0x0/0x4ffc00000, data 0x1199aba/0x1254000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102899712 unmapped: 28319744 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102899712 unmapped: 28319744 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177889 data_alloc: 218103808 data_used: 3342336
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102899712 unmapped: 28319744 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102899712 unmapped: 28319744 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa418000/0x0/0x4ffc00000, data 0x1199aba/0x1254000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102907904 unmapped: 28311552 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102907904 unmapped: 28311552 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102907904 unmapped: 28311552 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177889 data_alloc: 218103808 data_used: 3342336
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102907904 unmapped: 28311552 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.502199173s of 19.644466400s, submitted: 280
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167e3800 session 0x564f15762000
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167e6c00 session 0x564f153250e0
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102326272 unmapped: 28893184 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f14587800 session 0x564f14a31680
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102326272 unmapped: 28893184 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa7cb000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102326272 unmapped: 28893184 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102326272 unmapped: 28893184 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1126667 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102326272 unmapped: 28893184 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102326272 unmapped: 28893184 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102326272 unmapped: 28893184 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa7cb000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102326272 unmapped: 28893184 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102326272 unmapped: 28893184 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1126667 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa7cb000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102326272 unmapped: 28893184 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa7cb000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102326272 unmapped: 28893184 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102326272 unmapped: 28893184 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102326272 unmapped: 28893184 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102334464 unmapped: 28884992 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1126667 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102334464 unmapped: 28884992 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa7cb000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102334464 unmapped: 28884992 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102334464 unmapped: 28884992 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102334464 unmapped: 28884992 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102334464 unmapped: 28884992 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa7cb000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1126667 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f17236c00 session 0x564f17058960
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f16bf3c00 session 0x564f16eaf0e0
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f170bec00 session 0x564f15762780
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f170be400 session 0x564f16a9de00
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.125001907s of 19.304050446s, submitted: 47
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f170be400 session 0x564f1659e000
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f14587800 session 0x564f16a82780
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f16bf3c00 session 0x564f16908f00
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f170bec00 session 0x564f16eaaf00
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f17236c00 session 0x564f16a83680
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102637568 unmapped: 28581888 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167e7000 session 0x564f169090e0
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102637568 unmapped: 28581888 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102637568 unmapped: 28581888 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f14587800 session 0x564f153df860
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102637568 unmapped: 28581888 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f16bf3c00 session 0x564f14c30d20
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102637568 unmapped: 28581888 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1187707 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167ec400 session 0x564f153241e0
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f16bf3800 session 0x564f1576a780
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa031000/0x0/0x4ffc00000, data 0x157fab6/0x163b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102637568 unmapped: 28581888 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 102629376 unmapped: 28590080 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 104218624 unmapped: 27000832 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa031000/0x0/0x4ffc00000, data 0x157fab6/0x163b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 104218624 unmapped: 27000832 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 104218624 unmapped: 27000832 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1243984 data_alloc: 218103808 data_used: 8093696
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 104218624 unmapped: 27000832 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa031000/0x0/0x4ffc00000, data 0x157fab6/0x163b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 104218624 unmapped: 27000832 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa031000/0x0/0x4ffc00000, data 0x157fab6/0x163b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 104218624 unmapped: 27000832 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 104218624 unmapped: 27000832 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 104218624 unmapped: 27000832 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1243984 data_alloc: 218103808 data_used: 8093696
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 104218624 unmapped: 27000832 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa031000/0x0/0x4ffc00000, data 0x157fab6/0x163b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 104218624 unmapped: 27000832 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.325160980s of 16.559110641s, submitted: 45
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f17c06c00 session 0x564f16a82b40
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f14587800 session 0x564f16a943c0
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f16bf9c00 session 0x564f16a94d20
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167ec400 session 0x564f16a95680
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f16bf3800 session 0x564f16a954a0
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 106938368 unmapped: 31637504 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108183552 unmapped: 30392320 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108183552 unmapped: 30392320 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1354235 data_alloc: 218103808 data_used: 8085504
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108183552 unmapped: 30392320 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108183552 unmapped: 30392320 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f931e000/0x0/0x4ffc00000, data 0x2289b18/0x2346000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108183552 unmapped: 30392320 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108183552 unmapped: 30392320 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f16830800 session 0x564f1576a780
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f14b9d400 session 0x564f153250e0
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108224512 unmapped: 30351360 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1354251 data_alloc: 218103808 data_used: 8085504
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f1687b000 session 0x564f153241e0
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167ed000 session 0x564f14c30d20
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107577344 unmapped: 30998528 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107921408 unmapped: 30654464 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9325000/0x0/0x4ffc00000, data 0x2289b28/0x2347000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 115916800 unmapped: 22659072 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 115916800 unmapped: 22659072 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 115916800 unmapped: 22659072 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1419566 data_alloc: 234881024 data_used: 18124800
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9325000/0x0/0x4ffc00000, data 0x2289b28/0x2347000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 115916800 unmapped: 22659072 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 115916800 unmapped: 22659072 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 115916800 unmapped: 22659072 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9325000/0x0/0x4ffc00000, data 0x2289b28/0x2347000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 115916800 unmapped: 22659072 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 115916800 unmapped: 22659072 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1419566 data_alloc: 234881024 data_used: 18124800
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 115916800 unmapped: 22659072 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.397748947s of 19.836757660s, submitted: 85
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 119029760 unmapped: 19546112 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f8d0b000/0x0/0x4ffc00000, data 0x28a3b28/0x2961000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 119668736 unmapped: 18907136 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 119848960 unmapped: 18726912 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 120020992 unmapped: 18554880 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1471376 data_alloc: 234881024 data_used: 18452480
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f164d8800 session 0x564f177603c0
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 120020992 unmapped: 18554880 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 120029184 unmapped: 18546688 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f8cea000/0x0/0x4ffc00000, data 0x28c4b28/0x2982000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 120061952 unmapped: 18513920 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 120070144 unmapped: 18505728 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 120070144 unmapped: 18505728 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1468656 data_alloc: 234881024 data_used: 18456576
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f8cea000/0x0/0x4ffc00000, data 0x28c4b28/0x2982000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 119611392 unmapped: 18964480 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 119611392 unmapped: 18964480 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 119611392 unmapped: 18964480 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 119611392 unmapped: 18964480 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.684499741s of 12.516972542s, submitted: 75
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 119660544 unmapped: 18915328 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1468960 data_alloc: 234881024 data_used: 18456576
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 119660544 unmapped: 18915328 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f8ce5000/0x0/0x4ffc00000, data 0x28c8b28/0x2986000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 119660544 unmapped: 18915328 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f8ce5000/0x0/0x4ffc00000, data 0x28c8b28/0x2986000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 119693312 unmapped: 18882560 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 119693312 unmapped: 18882560 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 119693312 unmapped: 18882560 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1468960 data_alloc: 234881024 data_used: 18456576
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 119693312 unmapped: 18882560 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 119693312 unmapped: 18882560 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 119701504 unmapped: 18874368 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167e4c00 session 0x564f16908f00
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f16830400 session 0x564f172412c0
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f8ce5000/0x0/0x4ffc00000, data 0x28c8b28/0x2986000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f14b9d400 session 0x564f14565e00
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 113475584 unmapped: 25100288 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 113475584 unmapped: 25100288 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1287747 data_alloc: 218103808 data_used: 8089600
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 113475584 unmapped: 25100288 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9a8c000/0x0/0x4ffc00000, data 0x18f8ab6/0x19b4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 113475584 unmapped: 25100288 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.688279152s of 13.011224747s, submitted: 41
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f14b9e000 session 0x564f173045a0
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167e6400 session 0x564f170ba3c0
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107339776 unmapped: 31236096 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f14b9cc00 session 0x564f16908000
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107388928 unmapped: 31186944 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107388928 unmapped: 31186944 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149791 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107388928 unmapped: 31186944 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9fbc000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107388928 unmapped: 31186944 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107388928 unmapped: 31186944 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9fbc000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9fbc000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107388928 unmapped: 31186944 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107388928 unmapped: 31186944 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149791 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107388928 unmapped: 31186944 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107388928 unmapped: 31186944 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107388928 unmapped: 31186944 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: mgrc ms_handle_reset ms_handle_reset con 0x564f157b4000
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1282799344
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1282799344,v1:192.168.122.100:6801/1282799344]
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: mgrc handle_mgr_configure stats_period=5
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107470848 unmapped: 31105024 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9fbc000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167eb400 session 0x564f14a06960
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f14ba1800 session 0x564f1576ab40
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149791 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107470848 unmapped: 31105024 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149791 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107470848 unmapped: 31105024 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9fbc000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107470848 unmapped: 31105024 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9fbc000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107470848 unmapped: 31105024 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149791 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107470848 unmapped: 31105024 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9fbc000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107470848 unmapped: 31105024 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9fbc000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107470848 unmapped: 31105024 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9fbc000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149791 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107470848 unmapped: 31105024 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f1687a400 session 0x564f15433860
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167e7c00 session 0x564f15433680
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167e7400 session 0x564f15433a40
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f16830800 session 0x564f154330e0
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 29.805261612s of 30.015821457s, submitted: 61
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167e1800 session 0x564f154334a0
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107479040 unmapped: 31096832 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167e1800 session 0x564f15432000
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167e7400 session 0x564f154332c0
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9fbc000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167e7c00 session 0x564f15433c20
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f16830800 session 0x564f16a94000
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107479040 unmapped: 31096832 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1151591 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107479040 unmapped: 31096832 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9fbb000/0x0/0x4ffc00000, data 0xde1a44/0xe9b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107479040 unmapped: 31096832 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f14b9d800 session 0x564f15762000
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9fbb000/0x0/0x4ffc00000, data 0xde1a44/0xe9b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107487232 unmapped: 31088640 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f14b9d800 session 0x564f1659e000
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107470848 unmapped: 31105024 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167e1800 session 0x564f16a83680
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f167e7400 session 0x564f16eaaf00
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107470848 unmapped: 31105024 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9fbb000/0x0/0x4ffc00000, data 0xde1a44/0xe9b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1151591 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107470848 unmapped: 31105024 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9fbb000/0x0/0x4ffc00000, data 0xde1a44/0xe9b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107470848 unmapped: 31105024 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1151591 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107470848 unmapped: 31105024 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9fbb000/0x0/0x4ffc00000, data 0xde1a44/0xe9b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107470848 unmapped: 31105024 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9fbb000/0x0/0x4ffc00000, data 0xde1a44/0xe9b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107470848 unmapped: 31105024 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1151591 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107470848 unmapped: 31105024 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9fbb000/0x0/0x4ffc00000, data 0xde1a44/0xe9b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.881891251s of 18.886398315s, submitted: 1
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107520000 unmapped: 31055872 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107536384 unmapped: 31039488 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107544576 unmapped: 31031296 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160433 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107544576 unmapped: 31031296 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3b0000/0x0/0x4ffc00000, data 0xdf2a44/0xeac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107544576 unmapped: 31031296 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3b0000/0x0/0x4ffc00000, data 0xdf2a44/0xeac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107544576 unmapped: 31031296 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160433 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107544576 unmapped: 31031296 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f16830400 session 0x564f157b32c0
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f16bf3400 session 0x564f17485c20
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3b0000/0x0/0x4ffc00000, data 0xdf2a44/0xeac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.277494431s of 10.306180954s, submitted: 8
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 31014912 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 ms_handle_reset con 0x564f14b9d800 session 0x564f177423c0
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 31014912 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 31014912 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107560960 unmapped: 31014912 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107569152 unmapped: 31006720 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107577344 unmapped: 30998528 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107577344 unmapped: 30998528 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107577344 unmapped: 30998528 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107577344 unmapped: 30998528 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107577344 unmapped: 30998528 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107577344 unmapped: 30998528 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107577344 unmapped: 30998528 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107585536 unmapped: 30990336 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107585536 unmapped: 30990336 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107585536 unmapped: 30990336 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107585536 unmapped: 30990336 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107585536 unmapped: 30990336 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 30982144 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 30982144 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 30982144 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 30982144 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 30982144 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 30982144 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 30982144 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107593728 unmapped: 30982144 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107601920 unmapped: 30973952 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107601920 unmapped: 30973952 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107601920 unmapped: 30973952 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107601920 unmapped: 30973952 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107601920 unmapped: 30973952 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107610112 unmapped: 30965760 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107610112 unmapped: 30965760 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107618304 unmapped: 30957568 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107618304 unmapped: 30957568 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107618304 unmapped: 30957568 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107618304 unmapped: 30957568 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107618304 unmapped: 30957568 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107618304 unmapped: 30957568 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 30941184 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 30941184 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 30941184 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 30941184 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: do_command 'config diff' '{prefix=config diff}'
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: do_command 'config show' '{prefix=config show}'
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: do_command 'counter dump' '{prefix=counter dump}'
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107880448 unmapped: 30695424 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: do_command 'counter schema' '{prefix=counter schema}'
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107610112 unmapped: 30965760 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107552768 unmapped: 31023104 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: do_command 'log dump' '{prefix=log dump}'
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: do_command 'perf dump' '{prefix=perf dump}'
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: do_command 'perf schema' '{prefix=perf schema}'
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108142592 unmapped: 30433280 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108142592 unmapped: 30433280 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108142592 unmapped: 30433280 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108142592 unmapped: 30433280 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108142592 unmapped: 30433280 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108142592 unmapped: 30433280 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108150784 unmapped: 30425088 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108150784 unmapped: 30425088 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108150784 unmapped: 30425088 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108158976 unmapped: 30416896 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108158976 unmapped: 30416896 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108158976 unmapped: 30416896 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108158976 unmapped: 30416896 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108158976 unmapped: 30416896 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108158976 unmapped: 30416896 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108158976 unmapped: 30416896 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108158976 unmapped: 30416896 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108158976 unmapped: 30416896 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108158976 unmapped: 30416896 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108158976 unmapped: 30416896 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108158976 unmapped: 30416896 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108158976 unmapped: 30416896 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108158976 unmapped: 30416896 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108158976 unmapped: 30416896 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108158976 unmapped: 30416896 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108158976 unmapped: 30416896 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108158976 unmapped: 30416896 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108167168 unmapped: 30408704 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108167168 unmapped: 30408704 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108167168 unmapped: 30408704 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108167168 unmapped: 30408704 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108167168 unmapped: 30408704 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108167168 unmapped: 30408704 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108167168 unmapped: 30408704 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108167168 unmapped: 30408704 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108167168 unmapped: 30408704 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108167168 unmapped: 30408704 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108167168 unmapped: 30408704 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108167168 unmapped: 30408704 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108167168 unmapped: 30408704 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108167168 unmapped: 30408704 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108167168 unmapped: 30408704 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108167168 unmapped: 30408704 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108167168 unmapped: 30408704 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108167168 unmapped: 30408704 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108167168 unmapped: 30408704 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108175360 unmapped: 30400512 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108175360 unmapped: 30400512 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108175360 unmapped: 30400512 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108175360 unmapped: 30400512 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108175360 unmapped: 30400512 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108175360 unmapped: 30400512 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108175360 unmapped: 30400512 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108175360 unmapped: 30400512 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108175360 unmapped: 30400512 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108175360 unmapped: 30400512 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108175360 unmapped: 30400512 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108175360 unmapped: 30400512 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108175360 unmapped: 30400512 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108175360 unmapped: 30400512 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108175360 unmapped: 30400512 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.28784 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108175360 unmapped: 30400512 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108175360 unmapped: 30400512 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108183552 unmapped: 30392320 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108183552 unmapped: 30392320 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108183552 unmapped: 30392320 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108183552 unmapped: 30392320 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108183552 unmapped: 30392320 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108183552 unmapped: 30392320 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108183552 unmapped: 30392320 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108183552 unmapped: 30392320 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108183552 unmapped: 30392320 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108191744 unmapped: 30384128 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108191744 unmapped: 30384128 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108191744 unmapped: 30384128 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108191744 unmapped: 30384128 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108191744 unmapped: 30384128 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108191744 unmapped: 30384128 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108191744 unmapped: 30384128 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108191744 unmapped: 30384128 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108191744 unmapped: 30384128 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108191744 unmapped: 30384128 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108191744 unmapped: 30384128 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108191744 unmapped: 30384128 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108191744 unmapped: 30384128 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108191744 unmapped: 30384128 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108191744 unmapped: 30384128 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108191744 unmapped: 30384128 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108199936 unmapped: 30375936 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108199936 unmapped: 30375936 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108199936 unmapped: 30375936 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108199936 unmapped: 30375936 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108199936 unmapped: 30375936 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108199936 unmapped: 30375936 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108199936 unmapped: 30375936 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108199936 unmapped: 30375936 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108208128 unmapped: 30367744 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108208128 unmapped: 30367744 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108208128 unmapped: 30367744 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108208128 unmapped: 30367744 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108208128 unmapped: 30367744 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108208128 unmapped: 30367744 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108208128 unmapped: 30367744 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108208128 unmapped: 30367744 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108208128 unmapped: 30367744 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108208128 unmapped: 30367744 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108216320 unmapped: 30359552 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108216320 unmapped: 30359552 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108216320 unmapped: 30359552 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108216320 unmapped: 30359552 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108216320 unmapped: 30359552 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108216320 unmapped: 30359552 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108216320 unmapped: 30359552 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108216320 unmapped: 30359552 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108216320 unmapped: 30359552 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108216320 unmapped: 30359552 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108216320 unmapped: 30359552 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108216320 unmapped: 30359552 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108216320 unmapped: 30359552 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108216320 unmapped: 30359552 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108224512 unmapped: 30351360 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108224512 unmapped: 30351360 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108224512 unmapped: 30351360 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108224512 unmapped: 30351360 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108224512 unmapped: 30351360 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108224512 unmapped: 30351360 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108224512 unmapped: 30351360 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108224512 unmapped: 30351360 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108224512 unmapped: 30351360 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108224512 unmapped: 30351360 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108224512 unmapped: 30351360 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108224512 unmapped: 30351360 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108232704 unmapped: 30343168 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108232704 unmapped: 30343168 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108232704 unmapped: 30343168 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108232704 unmapped: 30343168 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108232704 unmapped: 30343168 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108232704 unmapped: 30343168 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108232704 unmapped: 30343168 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108232704 unmapped: 30343168 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108232704 unmapped: 30343168 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108232704 unmapped: 30343168 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108232704 unmapped: 30343168 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108232704 unmapped: 30343168 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108240896 unmapped: 30334976 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108240896 unmapped: 30334976 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108240896 unmapped: 30334976 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108240896 unmapped: 30334976 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108240896 unmapped: 30334976 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108240896 unmapped: 30334976 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108240896 unmapped: 30334976 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108240896 unmapped: 30334976 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108240896 unmapped: 30334976 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108240896 unmapped: 30334976 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108240896 unmapped: 30334976 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108240896 unmapped: 30334976 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108249088 unmapped: 30326784 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108249088 unmapped: 30326784 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108249088 unmapped: 30326784 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108249088 unmapped: 30326784 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108249088 unmapped: 30326784 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108249088 unmapped: 30326784 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108249088 unmapped: 30326784 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108249088 unmapped: 30326784 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108249088 unmapped: 30326784 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108249088 unmapped: 30326784 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108249088 unmapped: 30326784 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108257280 unmapped: 30318592 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108257280 unmapped: 30318592 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108257280 unmapped: 30318592 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108257280 unmapped: 30318592 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108257280 unmapped: 30318592 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108257280 unmapped: 30318592 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108257280 unmapped: 30318592 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108257280 unmapped: 30318592 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108257280 unmapped: 30318592 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108257280 unmapped: 30318592 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108265472 unmapped: 30310400 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108265472 unmapped: 30310400 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108265472 unmapped: 30310400 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108265472 unmapped: 30310400 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108265472 unmapped: 30310400 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108265472 unmapped: 30310400 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108265472 unmapped: 30310400 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108265472 unmapped: 30310400 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108265472 unmapped: 30310400 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 30302208 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 30302208 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 30302208 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 30302208 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 30302208 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 30302208 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 30302208 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 30302208 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 30302208 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 30302208 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 30302208 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108273664 unmapped: 30302208 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108281856 unmapped: 30294016 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108281856 unmapped: 30294016 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108281856 unmapped: 30294016 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108281856 unmapped: 30294016 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108281856 unmapped: 30294016 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108281856 unmapped: 30294016 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108281856 unmapped: 30294016 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108281856 unmapped: 30294016 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108281856 unmapped: 30294016 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108281856 unmapped: 30294016 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108290048 unmapped: 30285824 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108290048 unmapped: 30285824 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108290048 unmapped: 30285824 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108290048 unmapped: 30285824 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108290048 unmapped: 30285824 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108290048 unmapped: 30285824 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108290048 unmapped: 30285824 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108290048 unmapped: 30285824 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108290048 unmapped: 30285824 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108290048 unmapped: 30285824 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108290048 unmapped: 30285824 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108290048 unmapped: 30285824 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108290048 unmapped: 30285824 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108290048 unmapped: 30285824 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108290048 unmapped: 30285824 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108290048 unmapped: 30285824 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108290048 unmapped: 30285824 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108290048 unmapped: 30285824 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108298240 unmapped: 30277632 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108298240 unmapped: 30277632 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108298240 unmapped: 30277632 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108306432 unmapped: 30269440 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108306432 unmapped: 30269440 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108306432 unmapped: 30269440 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108306432 unmapped: 30269440 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108306432 unmapped: 30269440 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108306432 unmapped: 30269440 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108306432 unmapped: 30269440 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108306432 unmapped: 30269440 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108306432 unmapped: 30269440 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108306432 unmapped: 30269440 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108306432 unmapped: 30269440 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108314624 unmapped: 30261248 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108314624 unmapped: 30261248 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108314624 unmapped: 30261248 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108314624 unmapped: 30261248 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108314624 unmapped: 30261248 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108314624 unmapped: 30261248 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108314624 unmapped: 30261248 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108314624 unmapped: 30261248 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108314624 unmapped: 30261248 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108314624 unmapped: 30261248 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108314624 unmapped: 30261248 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108314624 unmapped: 30261248 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108322816 unmapped: 30253056 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107986944 unmapped: 30588928 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107986944 unmapped: 30588928 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107986944 unmapped: 30588928 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107986944 unmapped: 30588928 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107986944 unmapped: 30588928 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107995136 unmapped: 30580736 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107995136 unmapped: 30580736 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107995136 unmapped: 30580736 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107995136 unmapped: 30580736 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 107995136 unmapped: 30580736 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108003328 unmapped: 30572544 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108003328 unmapped: 30572544 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108003328 unmapped: 30572544 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108003328 unmapped: 30572544 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108003328 unmapped: 30572544 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108003328 unmapped: 30572544 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: [db/db_impl/db_impl.cc:1111]
    ** DB Stats **
    Uptime(secs): 2400.1 total, 600.0 interval
    Cumulative writes: 10K writes, 37K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
    Cumulative WAL: 10K writes, 2786 syncs, 3.68 writes per sync, written: 0.03 GB, 0.01 MB/s
    Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
    Interval writes: 1420 writes, 3779 keys, 1420 commit groups, 1.0 writes per commit group, ingest: 2.63 MB, 0.00 MB/s
    Interval WAL: 1420 writes, 630 syncs, 2.25 writes per sync, written: 0.00 GB, 0.00 MB/s
    Interval stall: 00:00:0.000 H:M:S, 0.0 percent
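
The derived figures in that dump are simple ratios of the raw counters; a quick cross-check of two of them (all values copied from the log, only the arithmetic is added):

    # Cross-check two derived figures from the DB Stats dump above.
    interval_wal_writes = 1420          # "Interval WAL: 1420 writes"
    interval_wal_syncs  = 630           # "... 630 syncs"
    print(f"{interval_wal_writes / interval_wal_syncs:.2f} writes per sync")  # 2.25, as logged

    cumulative_ingest_gb = 0.03         # "Cumulative writes ... ingest: 0.03 GB"
    uptime_secs          = 2400.1
    print(f"{cumulative_ingest_gb * 1024 / uptime_secs:.2f} MB/s")            # 0.01, as logged
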
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108003328 unmapped: 30572544 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108003328 unmapped: 30572544 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108011520 unmapped: 30564352 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108011520 unmapped: 30564352 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108011520 unmapped: 30564352 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108011520 unmapped: 30564352 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108011520 unmapped: 30564352 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108011520 unmapped: 30564352 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108011520 unmapped: 30564352 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108011520 unmapped: 30564352 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108011520 unmapped: 30564352 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108027904 unmapped: 30547968 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108027904 unmapped: 30547968 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108027904 unmapped: 30547968 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108027904 unmapped: 30547968 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108027904 unmapped: 30547968 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108027904 unmapped: 30547968 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108027904 unmapped: 30547968 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108027904 unmapped: 30547968 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108027904 unmapped: 30547968 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108036096 unmapped: 30539776 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108036096 unmapped: 30539776 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108036096 unmapped: 30539776 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108036096 unmapped: 30539776 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108036096 unmapped: 30539776 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108036096 unmapped: 30539776 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108036096 unmapped: 30539776 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108044288 unmapped: 30531584 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108044288 unmapped: 30531584 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108044288 unmapped: 30531584 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108044288 unmapped: 30531584 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108044288 unmapped: 30531584 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108052480 unmapped: 30523392 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108052480 unmapped: 30523392 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108052480 unmapped: 30523392 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108052480 unmapped: 30523392 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108052480 unmapped: 30523392 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108052480 unmapped: 30523392 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108052480 unmapped: 30523392 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108052480 unmapped: 30523392 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108052480 unmapped: 30523392 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108052480 unmapped: 30523392 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108060672 unmapped: 30515200 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108060672 unmapped: 30515200 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108060672 unmapped: 30515200 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108060672 unmapped: 30515200 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108060672 unmapped: 30515200 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108060672 unmapped: 30515200 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108060672 unmapped: 30515200 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108060672 unmapped: 30515200 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108060672 unmapped: 30515200 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108060672 unmapped: 30515200 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108068864 unmapped: 30507008 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108068864 unmapped: 30507008 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108068864 unmapped: 30507008 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108068864 unmapped: 30507008 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108068864 unmapped: 30507008 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108068864 unmapped: 30507008 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108068864 unmapped: 30507008 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108068864 unmapped: 30507008 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108068864 unmapped: 30507008 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108068864 unmapped: 30507008 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108068864 unmapped: 30507008 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108068864 unmapped: 30507008 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108068864 unmapped: 30507008 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108068864 unmapped: 30507008 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108068864 unmapped: 30507008 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108068864 unmapped: 30507008 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108068864 unmapped: 30507008 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108068864 unmapped: 30507008 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108068864 unmapped: 30507008 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108068864 unmapped: 30507008 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108077056 unmapped: 30498816 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108077056 unmapped: 30498816 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 440.022277832s of 440.081878662s, submitted: 13
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108085248 unmapped: 30490624 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108085248 unmapped: 30490624 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108158976 unmapped: 30416896 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108445696 unmapped: 30130176 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108478464 unmapped: 30097408 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108478464 unmapped: 30097408 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108478464 unmapped: 30097408 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108478464 unmapped: 30097408 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108494848 unmapped: 30081024 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108494848 unmapped: 30081024 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108494848 unmapped: 30081024 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108494848 unmapped: 30081024 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108494848 unmapped: 30081024 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108494848 unmapped: 30081024 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108494848 unmapped: 30081024 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108494848 unmapped: 30081024 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108503040 unmapped: 30072832 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108503040 unmapped: 30072832 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108503040 unmapped: 30072832 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108511232 unmapped: 30064640 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108511232 unmapped: 30064640 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108511232 unmapped: 30064640 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108511232 unmapped: 30064640 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108519424 unmapped: 30056448 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108519424 unmapped: 30056448 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108527616 unmapped: 30048256 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108527616 unmapped: 30048256 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108527616 unmapped: 30048256 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108527616 unmapped: 30048256 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108527616 unmapped: 30048256 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108527616 unmapped: 30048256 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108527616 unmapped: 30048256 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108527616 unmapped: 30048256 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108527616 unmapped: 30048256 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108527616 unmapped: 30048256 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108527616 unmapped: 30048256 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108527616 unmapped: 30048256 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108527616 unmapped: 30048256 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108527616 unmapped: 30048256 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108527616 unmapped: 30048256 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108527616 unmapped: 30048256 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108527616 unmapped: 30048256 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108527616 unmapped: 30048256 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108527616 unmapped: 30048256 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108527616 unmapped: 30048256 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108527616 unmapped: 30048256 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108535808 unmapped: 30040064 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108535808 unmapped: 30040064 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108535808 unmapped: 30040064 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108535808 unmapped: 30040064 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108535808 unmapped: 30040064 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108535808 unmapped: 30040064 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108535808 unmapped: 30040064 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108535808 unmapped: 30040064 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108544000 unmapped: 30031872 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108544000 unmapped: 30031872 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108544000 unmapped: 30031872 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108544000 unmapped: 30031872 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108544000 unmapped: 30031872 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108544000 unmapped: 30031872 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108544000 unmapped: 30031872 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108544000 unmapped: 30031872 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108544000 unmapped: 30031872 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108544000 unmapped: 30031872 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108544000 unmapped: 30031872 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108544000 unmapped: 30031872 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108544000 unmapped: 30031872 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108544000 unmapped: 30031872 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108544000 unmapped: 30031872 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108544000 unmapped: 30031872 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108544000 unmapped: 30031872 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108544000 unmapped: 30031872 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108544000 unmapped: 30031872 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108544000 unmapped: 30031872 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108544000 unmapped: 30031872 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108544000 unmapped: 30031872 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108544000 unmapped: 30031872 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108544000 unmapped: 30031872 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108544000 unmapped: 30031872 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108544000 unmapped: 30031872 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108544000 unmapped: 30031872 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108544000 unmapped: 30031872 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108544000 unmapped: 30031872 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108544000 unmapped: 30031872 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108544000 unmapped: 30031872 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108544000 unmapped: 30031872 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108544000 unmapped: 30031872 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108544000 unmapped: 30031872 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108544000 unmapped: 30031872 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108544000 unmapped: 30031872 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108544000 unmapped: 30031872 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108544000 unmapped: 30031872 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108544000 unmapped: 30031872 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108544000 unmapped: 30031872 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108544000 unmapped: 30031872 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108544000 unmapped: 30031872 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108544000 unmapped: 30031872 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108544000 unmapped: 30031872 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108544000 unmapped: 30031872 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108544000 unmapped: 30031872 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108544000 unmapped: 30031872 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108544000 unmapped: 30031872 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108544000 unmapped: 30031872 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108552192 unmapped: 30023680 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108552192 unmapped: 30023680 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108552192 unmapped: 30023680 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108552192 unmapped: 30023680 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108552192 unmapped: 30023680 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108552192 unmapped: 30023680 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108552192 unmapped: 30023680 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108552192 unmapped: 30023680 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108552192 unmapped: 30023680 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108552192 unmapped: 30023680 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108552192 unmapped: 30023680 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108552192 unmapped: 30023680 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108552192 unmapped: 30023680 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108552192 unmapped: 30023680 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108552192 unmapped: 30023680 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108560384 unmapped: 30015488 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108560384 unmapped: 30015488 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108560384 unmapped: 30015488 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108560384 unmapped: 30015488 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108560384 unmapped: 30015488 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108560384 unmapped: 30015488 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108560384 unmapped: 30015488 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108560384 unmapped: 30015488 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108560384 unmapped: 30015488 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108568576 unmapped: 30007296 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108568576 unmapped: 30007296 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108568576 unmapped: 30007296 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108568576 unmapped: 30007296 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108568576 unmapped: 30007296 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108568576 unmapped: 30007296 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108568576 unmapped: 30007296 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108568576 unmapped: 30007296 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108568576 unmapped: 30007296 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108568576 unmapped: 30007296 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108576768 unmapped: 29999104 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: osd.1 147 heartbeat osd_stat(store_statfs(0x4fa3c2000/0x0/0x4ffc00000, data 0xde1a35/0xe9a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108576768 unmapped: 29999104 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154233 data_alloc: 218103808 data_used: 192512
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108584960 unmapped: 29990912 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108593152 unmapped: 29982720 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108601344 unmapped: 29974528 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108609536 unmapped: 29966336 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108617728 unmapped: 29958144 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108625920 unmapped: 29949952 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108634112 unmapped: 29941760 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108642304 unmapped: 29933568 heap: 138575872 old mem: 2845415832 new mem: 2845415832
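The tune_memory lines above come from BlueStore's priority-cache tuner; across the whole burst only the mapped/unmapped counters move, while the 4 GiB target and the 2845415832-byte cache size stay fixed. A minimal sketch for pulling that series out of a capture like this one; the field layout is taken from the lines themselves, not from any documented Ceph interface:

    import re

    # Field names below are copied from the log lines above
    # (an observed format, not a documented Ceph interface).
    TUNE_RE = re.compile(
        r"prioritycache tune_memory target: (\d+) "
        r"mapped: (\d+) unmapped: (\d+) heap: (\d+)"
    )

    def tune_series(path):
        """Yield (target, mapped, unmapped, heap) from a syslog capture."""
        with open(path) as fh:
            for line in fh:
                m = TUNE_RE.search(line)
                if m:
                    yield tuple(int(g) for g in m.groups())

    # Example: net growth of mapped heap over the capture above
    # (108576768 -> 108642304, i.e. 64 KiB).
    # series = list(tune_series("/var/log/messages"))
    # print(series[-1][1] - series[0][1], "bytes")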
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: do_command 'config diff' '{prefix=config diff}'
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: do_command 'config show' '{prefix=config show}'
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: do_command 'counter dump' '{prefix=counter dump}'
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: do_command 'counter schema' '{prefix=counter schema}'
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108494848 unmapped: 30081024 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: prioritycache tune_memory target: 4294967296 mapped: 108519424 unmapped: 30056448 heap: 138575872 old mem: 2845415832 new mem: 2845415832
Feb  2 05:24:42 np0005604790 ceph-osd[82705]: do_command 'log dump' '{prefix=log dump}'
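The do_command entries ('config diff', 'config show', 'counter dump', 'counter schema', 'log dump') are the OSD's admin-socket handler answering a poller; the supported client is `ceph daemon osd.1 <command>`. A sketch of the same exchange over the raw unix socket, assuming the conventional asok path and a NUL-terminated JSON request answered by a 4-byte big-endian length-prefixed reply (both framing and path are assumptions, not confirmed by this log):

    import json
    import socket
    import struct

    def _read_exact(sock, n):
        buf = b""
        while len(buf) < n:
            chunk = sock.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("admin socket closed early")
            buf += chunk
        return buf

    def asok_command(path, **cmd):
        """One admin-socket round trip; returns the raw reply bytes."""
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            s.connect(path)
            # Assumed framing: JSON request terminated by NUL ...
            s.sendall(json.dumps(cmd).encode() + b"\0")
            # ... reply prefixed with a 4-byte big-endian length.
            (length,) = struct.unpack(">I", _read_exact(s, 4))
            return _read_exact(s, length)

    # Hypothetical asok path following the default /var/run/ceph layout.
    # print(asok_command("/var/run/ceph/ceph-osd.1.asok",
    #                    prefix="config show").decode())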
Feb  2 05:24:42 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Feb  2 05:24:42 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2856937313' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Feb  2 05:24:42 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.28675 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 05:24:42 np0005604790 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb  2 05:24:42 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.19065 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:24:42 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1364: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:24:42 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.28805 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:24:42 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:24:43 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Feb  2 05:24:43 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3763930355' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Feb  2 05:24:43 np0005604790 ceph-mon[74489]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #81. Immutable memtables: 0.
Feb  2 05:24:43 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:24:43.026831) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb  2 05:24:43 np0005604790 ceph-mon[74489]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 81
Feb  2 05:24:43 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770027883026933, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 1280, "num_deletes": 258, "total_data_size": 2123744, "memory_usage": 2164096, "flush_reason": "Manual Compaction"}
Feb  2 05:24:43 np0005604790 ceph-mon[74489]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #82: started
Feb  2 05:24:43 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770027883046140, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 82, "file_size": 2070865, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 36967, "largest_seqno": 38245, "table_properties": {"data_size": 2064676, "index_size": 3328, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14724, "raw_average_key_size": 20, "raw_value_size": 2051611, "raw_average_value_size": 2869, "num_data_blocks": 145, "num_entries": 715, "num_filter_entries": 715, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770027779, "oldest_key_time": 1770027779, "file_creation_time": 1770027883, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "07840aea-639a-4cd3-a598-1774a042b57b", "db_session_id": "W2PO4QU95YGVZQBG6TZ2", "orig_file_number": 82, "seqno_to_time_mapping": "N/A"}}
Feb  2 05:24:43 np0005604790 ceph-mon[74489]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 19366 microseconds, and 6797 cpu microseconds.
Feb  2 05:24:43 np0005604790 ceph-mon[74489]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 05:24:43 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.28696 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 05:24:43 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.19083 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:24:43 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:24:43.046210) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #82: 2070865 bytes OK
Feb  2 05:24:43 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:24:43.046241) [db/memtable_list.cc:519] [default] Level-0 commit table #82 started
Feb  2 05:24:43 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:24:43.063348) [db/memtable_list.cc:722] [default] Level-0 commit table #82: memtable #1 done
Feb  2 05:24:43 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:24:43.063403) EVENT_LOG_v1 {"time_micros": 1770027883063393, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb  2 05:24:43 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:24:43.063435) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb  2 05:24:43 np0005604790 ceph-mon[74489]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 2117754, prev total WAL file size 2117754, number of live WAL files 2.
Feb  2 05:24:43 np0005604790 ceph-mon[74489]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000078.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 05:24:43 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:24:43.068286) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303031' seq:72057594037927935, type:22 .. '6C6F676D0031323535' seq:0, type:0; will stop at (end)
Feb  2 05:24:43 np0005604790 ceph-mon[74489]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb  2 05:24:43 np0005604790 ceph-mon[74489]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [82(2022KB)], [80(11MB)]
Feb  2 05:24:43 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770027883068380, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [82], "files_L6": [80], "score": -1, "input_data_size": 14271753, "oldest_snapshot_seqno": -1}
Feb  2 05:24:43 np0005604790 nova_compute[252672]: 2026-02-02 10:24:43.114 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:24:43 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.28817 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:24:43 np0005604790 ceph-mon[74489]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #83: 6905 keys, 14141000 bytes, temperature: kUnknown
Feb  2 05:24:43 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770027883227714, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 83, "file_size": 14141000, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14096477, "index_size": 26171, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17285, "raw_key_size": 181988, "raw_average_key_size": 26, "raw_value_size": 13973535, "raw_average_value_size": 2023, "num_data_blocks": 1027, "num_entries": 6905, "num_filter_entries": 6905, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1770025063, "oldest_key_time": 0, "file_creation_time": 1770027883, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "07840aea-639a-4cd3-a598-1774a042b57b", "db_session_id": "W2PO4QU95YGVZQBG6TZ2", "orig_file_number": 83, "seqno_to_time_mapping": "N/A"}}
Feb  2 05:24:43 np0005604790 ceph-mon[74489]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb  2 05:24:43 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:24:43.228007) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 14141000 bytes
Feb  2 05:24:43 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:24:43.234369) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 89.6 rd, 88.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 11.6 +0.0 blob) out(13.5 +0.0 blob), read-write-amplify(13.7) write-amplify(6.8) OK, records in: 7435, records dropped: 530 output_compression: NoCompression
Feb  2 05:24:43 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:24:43.234428) EVENT_LOG_v1 {"time_micros": 1770027883234409, "job": 46, "event": "compaction_finished", "compaction_time_micros": 159312, "compaction_time_cpu_micros": 23108, "output_level": 6, "num_output_files": 1, "total_output_size": 14141000, "num_input_records": 7435, "num_output_records": 6905, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb  2 05:24:43 np0005604790 ceph-mon[74489]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000082.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 05:24:43 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770027883234864, "job": 46, "event": "table_file_deletion", "file_number": 82}
Feb  2 05:24:43 np0005604790 ceph-mon[74489]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000080.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb  2 05:24:43 np0005604790 ceph-mon[74489]: rocksdb: EVENT_LOG_v1 {"time_micros": 1770027883236180, "job": 46, "event": "table_file_deletion", "file_number": 80}
Feb  2 05:24:43 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:24:43.068224) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 05:24:43 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:24:43.236343) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 05:24:43 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:24:43.236351) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 05:24:43 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:24:43.236352) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 05:24:43 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:24:43.236354) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb  2 05:24:43 np0005604790 ceph-mon[74489]: rocksdb: (Original Log Time 2026/02/02-10:24:43.236355) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
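Every rocksdb EVENT_LOG_v1 record above is a single-line JSON document after a fixed marker, so flush and compaction statistics (job 45's 19366-microsecond flush, job 46's 14141000-byte output) can be recovered straight from the syslog capture. A small sketch:

    import json

    MARKER = "EVENT_LOG_v1 "

    def rocksdb_events(path):
        """Yield the JSON payload of every EVENT_LOG_v1 line in a capture."""
        with open(path) as fh:
            for line in fh:
                idx = line.find(MARKER)
                if idx != -1:
                    yield json.loads(line[idx + len(MARKER):])

    # Example: bytes written by finished compactions in this capture.
    # total = sum(e.get("total_output_size", 0)
    #             for e in rocksdb_events("/var/log/messages")
    #             if e.get("event") == "compaction_finished")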
Feb  2 05:24:43 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:24:43 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:24:43 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:24:43.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:24:43 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Feb  2 05:24:43 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2540107102' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Feb  2 05:24:43 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.28717 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 05:24:43 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.19098 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:24:43 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.28829 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 05:24:43 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:24:43 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:24:43 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:24:43.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
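The anonymous "HEAD / HTTP/1.0" requests that beast logs every couple of seconds from 192.168.122.100 and 192.168.122.102 look like load-balancer health probes against radosgw. The same probe in miniature, with host and port as placeholders since the gateway's listen address never appears in this log:

    import http.client

    # Hypothetical endpoint: substitute the RGW frontend host and port,
    # which are not recorded in the lines above.
    conn = http.client.HTTPConnection("np0005604790", 8080, timeout=5)
    conn.request("HEAD", "/")
    print(conn.getresponse().status)  # the probes above all returned 200
    conn.close()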
Feb  2 05:24:43 np0005604790 nova_compute[252672]: 2026-02-02 10:24:43.718 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:24:43 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.19122 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:24:43 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.28726 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 05:24:43 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.28850 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 05:24:43 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon stat"} v 0)
Feb  2 05:24:43 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1120965683' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Feb  2 05:24:44 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.19131 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 05:24:44 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.28868 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 05:24:44 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1365: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:24:44 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.28883 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 05:24:44 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.19149 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 05:24:44 np0005604790 ceph-mgr[74785]: [prometheus INFO cherrypy.access.139881000827776] ::ffff:192.168.122.100 - - [02/Feb/2026:10:24:44] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Feb  2 05:24:44 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-mgr-compute-0-djvyfo[74781]: ::ffff:192.168.122.100 - - [02/Feb/2026:10:24:44] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
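The /metrics hits are Prometheus 2.51.0 scraping the active mgr's prometheus module; the same request shows up twice, once from the mgr log channel and once from the container's stdout. The payload can be fetched directly; 9283 is the module's customary default port, assumed here because the log records only the request path:

    import urllib.request

    # Port 9283 is the mgr prometheus module's usual default -- assumed,
    # since the log records only the path.
    with urllib.request.urlopen("http://np0005604790:9283/metrics",
                                timeout=5) as resp:
        body = resp.read()
    print(len(body))  # the scrape above returned 48454 bytes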
Feb  2 05:24:44 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "node ls"} v 0)
Feb  2 05:24:44 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2995767945' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
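Each handle_command/audit pair is one client round-trip through the monitor's command interface. The python-rados binding reaches the same path via Rados.mon_command(); a sketch, assuming a readable /etc/ceph/ceph.conf and client.admin keyring on the node:

    import json

    import rados  # python3-rados bindings, assumed installed on the node

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    # The same command the audit channel shows being dispatched above.
    ret, outbuf, outs = cluster.mon_command(
        json.dumps({"prefix": "mon stat", "format": "json"}), b"")
    print(ret, outbuf.decode() or outs)
    cluster.shutdown()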
Feb  2 05:24:45 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.19173 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 05:24:45 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:24:45 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:24:45 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:24:45.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:24:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:24:45.400 165364 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb  2 05:24:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:24:45.401 165364 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb  2 05:24:45 np0005604790 ovn_metadata_agent[165359]: 2026-02-02 10:24:45.401 165364 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb  2 05:24:45 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Feb  2 05:24:45 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3979996278' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Feb  2 05:24:45 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:24:45 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:24:45 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:24:45.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:24:45 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.19191 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 05:24:45 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Feb  2 05:24:45 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2213294818' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Feb  2 05:24:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:24:45 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:24:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:24:45 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:24:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:24:45 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:24:46 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:24:46 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:24:46 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Feb  2 05:24:46 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/422127175' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Feb  2 05:24:46 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Feb  2 05:24:46 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4278616183' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Feb  2 05:24:46 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1366: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:24:46 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Feb  2 05:24:46 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4056548866' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Feb  2 05:24:46 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Feb  2 05:24:46 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2340112825' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Feb  2 05:24:47 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.28882 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:24:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Feb  2 05:24:47 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3949875084' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Feb  2 05:24:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Feb  2 05:24:47 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='mgr.14760 192.168.122.100:0/1432667282' entity='mgr.compute-0.djvyfo' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Feb  2 05:24:47 np0005604790 systemd[1]: Starting Hostname Service...
Feb  2 05:24:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Feb  2 05:24:47 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4250121613' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Feb  2 05:24:47 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:24:47.270Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Feb  2 05:24:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:24:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:24:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:24:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:24:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] scanning for idle connections..
Feb  2 05:24:47 np0005604790 ceph-mgr[74785]: [volumes INFO mgr_util] cleaning up connections: []
Feb  2 05:24:47 np0005604790 systemd[1]: Started Hostname Service.
Feb  2 05:24:47 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:24:47 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.002000054s ======
Feb  2 05:24:47 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:24:47.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Feb  2 05:24:47 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.28924 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 05:24:47 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:24:47 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:24:47 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:24:47.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:24:47 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.28912 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:24:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Feb  2 05:24:47 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3324932441' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Feb  2 05:24:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Feb  2 05:24:47 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2242730504' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Feb  2 05:24:47 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:24:47 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.28939 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 05:24:48 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.19287 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:24:48 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Feb  2 05:24:48 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/185207194' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Feb  2 05:24:48 np0005604790 nova_compute[252672]: 2026-02-02 10:24:48.169 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:24:48 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Feb  2 05:24:48 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1265805961' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Feb  2 05:24:48 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.28963 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 05:24:48 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.29060 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:24:48 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd utilization"} v 0)
Feb  2 05:24:48 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1841370334' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Feb  2 05:24:48 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.29069 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 05:24:48 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1367: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Feb  2 05:24:48 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.28987 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 05:24:48 np0005604790 nova_compute[252672]: 2026-02-02 10:24:48.719 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:24:48 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Feb  2 05:24:48 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3039444262' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Feb  2 05:24:48 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:24:48.917Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Feb  2 05:24:48 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:24:48.917Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb  2 05:24:48 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-alertmanager-compute-0[104366]: ts=2026-02-02T10:24:48.917Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Feb  2 05:24:49 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.29090 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 05:24:49 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.19335 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:24:49 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.29008 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 05:24:49 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Feb  2 05:24:49 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2383295343' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Feb  2 05:24:49 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:24:49 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:24:49 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:24:49.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:24:49 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.29108 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 05:24:49 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.19359 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:24:49 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.29026 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 05:24:49 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:24:49 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.000000000s ======
Feb  2 05:24:49 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:24:49.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Feb  2 05:24:49 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.19368 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 05:24:49 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.29126 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 05:24:49 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.29041 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 05:24:49 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Feb  2 05:24:49 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Feb  2 05:24:50 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.19380 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 05:24:50 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Feb  2 05:24:50 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Feb  2 05:24:50 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.29132 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 05:24:50 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.19407 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 05:24:50 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "quorum_status"} v 0)
Feb  2 05:24:50 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3883505779' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Feb  2 05:24:50 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1368: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:24:50 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.29159 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 05:24:50 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.19431 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 05:24:51 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:24:50 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Feb  2 05:24:51 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:24:50 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Feb  2 05:24:51 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:24:50 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Feb  2 05:24:51 np0005604790 ceph-d241d473-9fcb-5f74-b163-f1ca4454e7f1-nfs-cephfs-2-0-compute-0-fdwwab[270907]: 02/02/2026 10:24:51 : epoch 6980786b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Feb  2 05:24:51 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Feb  2 05:24:51 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Feb  2 05:24:51 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions"} v 0)
Feb  2 05:24:51 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1635200105' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Feb  2 05:24:51 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Feb  2 05:24:51 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Feb  2 05:24:51 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.29192 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 05:24:51 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.19458 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 05:24:51 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:24:51 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:24:51 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:24:51.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:24:51 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.29131 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:24:51 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:24:51 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:24:51 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:24:51.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:24:51 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Feb  2 05:24:51 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2294472089' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Feb  2 05:24:51 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.19476 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 05:24:52 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Feb  2 05:24:52 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3757286924' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Feb  2 05:24:52 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Feb  2 05:24:52 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Feb  2 05:24:52 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.19497 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb  2 05:24:52 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Feb  2 05:24:52 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Feb  2 05:24:52 np0005604790 ceph-mgr[74785]: log_channel(audit) log [DBG] : from='client.29267 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Feb  2 05:24:52 np0005604790 ceph-mgr[74785]: log_channel(cluster) log [DBG] : pgmap v1369: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Feb  2 05:24:52 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb  2 05:24:53 np0005604790 nova_compute[252672]: 2026-02-02 10:24:53.173 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb  2 05:24:53 np0005604790 podman[294442]: 2026-02-02 10:24:53.375128539 +0000 UTC m=+0.089392181 container health_status e39122d9482da8df802204ab6c35fb7c982874580968140e6f06bdfc8eefae36 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'db4758ee7523fe447444c4bd2b867b543b1eee4e3bbcf6676cd1b27bf6147d86-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121-65dc6d8b666d074a9e865f271939acafedbf905c57c15ae47c3f2766afb95121'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb  2 05:24:53 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:24:53 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000026s ======
Feb  2 05:24:53 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.100 - anonymous [02/Feb/2026:10:24:53.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Feb  2 05:24:53 np0005604790 ceph-mon[74489]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump"} v 0)
Feb  2 05:24:53 np0005604790 ceph-mon[74489]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2725893371' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Feb  2 05:24:53 np0005604790 radosgw[89254]: ====== starting new request req=0x7f123bf7e5d0 =====
Feb  2 05:24:53 np0005604790 radosgw[89254]: ====== req done req=0x7f123bf7e5d0 op status=0 http_status=200 latency=0.001000027s ======
Feb  2 05:24:53 np0005604790 radosgw[89254]: beast: 0x7f123bf7e5d0: 192.168.122.102 - anonymous [02/Feb/2026:10:24:53.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Feb  2 05:24:53 np0005604790 nova_compute[252672]: 2026-02-02 10:24:53.721 252676 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
